* [PATCH 0/4] drm/i915/gem: Introduce a migrate interface
@ 2021-06-24 18:31 ` Thomas Hellström
  0 siblings, 0 replies; 34+ messages in thread
From: Thomas Hellström @ 2021-06-24 18:31 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, matthew.auld

We want to be able to explicitly migrate objects between GEM memory
regions, initially for display and dma-buf, but more use-cases are
likely to follow.

Introduce a GEM migrate interface, add a selftest and use it for
display framebuffer pinning and dma-buf mapping.

This series should make the desktop light up on DG1 with DG1-enabled
Mesa.
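
For reference, below is a minimal sketch of how a caller is expected to
use the new interface under the ww locking scheme. This is hypothetical
caller code mirroring the pattern used in the selftest of patch 2, not
part of the series itself; it assumes a struct drm_i915_gem_object *obj
that should end up in LMEM:

	struct i915_gem_ww_ctx ww;
	int err;

	for_i915_gem_ww(&ww, err, true) {
		err = i915_gem_object_lock(obj, &ww);
		if (err)
			continue;

		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
			err = -EINVAL;
			continue;
		}

		err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_LMEM);
		if (err)
			continue;

		/* The migration copy may be async; sync before relying on it. */
		err = i915_gem_object_wait_migration(obj, I915_WAIT_INTERRUPTIBLE);
		if (err)
			continue;

		err = i915_gem_object_pin_pages(obj);
	}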

Matthew Auld (1):
  drm/i915/gem: Introduce a selftest for the gem object migrate
    functionality

Thomas Hellström (3):
  drm/i915/gem: Implement object migration
  drm/i915/display: Migrate objects to LMEM if possible for display
  drm/i915/gem: Migrate to system at dma-buf map time

 drivers/gpu/drm/i915/display/intel_display.c  |   5 +-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |   9 +-
 drivers/gpu/drm/i915/gem/i915_gem_domain.c    |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c      |  21 --
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  92 +++++++
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  12 +-
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   9 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       |  69 +++--
 drivers/gpu/drm/i915/gem/i915_gem_wait.c      |  19 ++
 .../drm/i915/gem/selftests/i915_gem_migrate.c | 237 ++++++++++++++++++
 .../drm/i915/selftests/i915_live_selftests.h  |   1 +
 11 files changed, 434 insertions(+), 42 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c

-- 
2.31.1


* [PATCH 1/4] drm/i915/gem: Implement object migration
  2021-06-24 18:31 ` [Intel-gfx] " Thomas Hellström
@ 2021-06-24 18:31   ` Thomas Hellström
  -1 siblings, 0 replies; 34+ messages in thread
From: Thomas Hellström @ 2021-06-24 18:31 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, matthew.auld

Introduce an interface to migrate objects between regions.
This is primarily intended to migrate objects to LMEM for display and
to SYSTEM for dma-buf, but might be reused in one form or another for
performance-based migration.
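
As a rough illustration, when no ww context is available an object can
still be migrated under its own lock, at the cost that the backend may
fail to evict other objects to make room. This is hypothetical caller
code, not part of this patch, following the pattern used for dma-buf
later in this series:

	err = i915_gem_object_lock_interruptible(obj, NULL);
	if (err)
		return err;

	/* With ww == NULL, eviction of other objects may not succeed. */
	err = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
	if (!err)
		err = i915_gem_object_pin_pages(obj);
	i915_gem_object_unlock(obj);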

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    | 91 +++++++++++++++++++
 drivers/gpu/drm/i915/gem/i915_gem_object.h    | 12 +++
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  9 ++
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       | 69 ++++++++++----
 drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 19 ++++
 5 files changed, 183 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 07e8ff9a8aae..6421c3a8b2f3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -513,6 +513,97 @@ bool i915_gem_object_has_iomem(const struct drm_i915_gem_object *obj)
 	return obj->mem_flags & I915_BO_FLAG_IOMEM;
 }
 
+/**
+ * i915_gem_object_can_migrate - Whether an object likely can be migrated
+ *
+ * @obj: The object to migrate
+ * @id: The region intended to migrate to
+ *
+ * Check whether the object backend supports migration to the
+ * given region. Note that pinning may affect the ability to migrate.
+ *
+ * Return: true if migration is possible, false otherwise.
+ */
+bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
+				 enum intel_region_id id)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	unsigned int num_allowed = obj->mm.n_placements;
+	struct intel_memory_region *mr;
+	unsigned int i;
+
+	GEM_BUG_ON(id >= INTEL_REGION_UNKNOWN);
+	GEM_BUG_ON(obj->mm.madv != I915_MADV_WILLNEED);
+
+	if (!obj->ops->migrate)
+		return false;
+
+	mr = i915->mm.regions[id];
+	if (obj->mm.region == mr)
+		return true;
+
+	if (!i915_gem_object_evictable(obj))
+		return false;
+
+	if (!(obj->flags & I915_BO_ALLOC_USER))
+		return true;
+
+	if (num_allowed == 0)
+		return false;
+
+	for (i = 0; i < num_allowed; ++i) {
+		if (mr == obj->mm.placements[i])
+			return true;
+	}
+
+	return false;
+}
+
+/**
+ * i915_gem_object_migrate - Migrate an object to the desired region id
+ * @obj: The object to migrate.
+ * @ww: An optional struct i915_gem_ww_ctx. If NULL, the backend may
+ * not be successful in evicting other objects to make room for this object.
+ * @id: The region id to migrate to.
+ *
+ * Attempt to migrate the object to the desired memory region. The
+ * object backend must support migration and the object may not be
+ * pinned (explicitly pinned pages or pinned vmas). The object must
+ * be locked.
+ * On successful completion, the object will have pages pointing to
+ * memory in the new region, but an async migration task may not have
+ * completed yet, and to accomplish that, i915_gem_object_wait_migration()
+ * must be called.
+ *
+ * Return: 0 on success. Negative error code on failure. In particular may
+ * return -ENXIO on lack of region space, -EDEADLK for deadlock avoidance
+ * if @ww is set, -EINTR or -ERESTARTSYS if signal pending, and
+ * -EBUSY if the object is pinned.
+ */
+int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
+			    struct i915_gem_ww_ctx *ww,
+			    enum intel_region_id id)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct intel_memory_region *mr;
+
+	GEM_BUG_ON(id >= INTEL_REGION_UNKNOWN);
+	GEM_BUG_ON(obj->mm.madv != I915_MADV_WILLNEED);
+	assert_object_held(obj);
+
+	mr = i915->mm.regions[id];
+	if (obj->mm.region == mr)
+		return 0;
+
+	if (!i915_gem_object_evictable(obj))
+		return -EBUSY;
+
+	if (!obj->ops->migrate)
+		return -EOPNOTSUPP;
+
+	return obj->ops->migrate(obj, mr);
+}
+
 void i915_gem_init__objects(struct drm_i915_private *i915)
 {
 	INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index ea3224a480c4..8cbd7a5334e2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -17,6 +17,8 @@
 #include "i915_gem_ww.h"
 #include "i915_vma_types.h"
 
+enum intel_region_id;
+
 /*
  * XXX: There is a prevalence of the assumption that we fit the
  * object's page count inside a 32bit _signed_ variable. Let's document
@@ -597,6 +599,16 @@ bool i915_gem_object_migratable(struct drm_i915_gem_object *obj);
 
 bool i915_gem_object_validates_to_lmem(struct drm_i915_gem_object *obj);
 
+int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
+			    struct i915_gem_ww_ctx *ww,
+			    enum intel_region_id id);
+
+bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
+				 enum intel_region_id id);
+
+int i915_gem_object_wait_migration(struct drm_i915_gem_object *obj,
+				   unsigned int flags);
+
 #ifdef CONFIG_MMU_NOTIFIER
 static inline bool
 i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 441f913c87e6..ef3de2ae9723 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -18,6 +18,7 @@
 
 struct drm_i915_gem_object;
 struct intel_fronbuffer;
+struct intel_memory_region;
 
 /*
  * struct i915_lut_handle tracks the fast lookups from handle to vma used
@@ -77,6 +78,14 @@ struct drm_i915_gem_object_ops {
 	 * delayed_free - Override the default delayed free implementation
 	 */
 	void (*delayed_free)(struct drm_i915_gem_object *obj);
+
+	/**
+	 * migrate - Migrate object to a different region either for
+	 * pinning or for as long as the object lock is held.
+	 */
+	int (*migrate)(struct drm_i915_gem_object *obj,
+		       struct intel_memory_region *mr);
+
 	void (*release)(struct drm_i915_gem_object *obj);
 
 	const struct vm_operations_struct *mmap_ops;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index c39d982c4fa6..b01bf0650ab0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -617,7 +617,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
 	return &i915_ttm_bo_driver;
 }
 
-static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
+static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
+				struct ttm_placement *placement)
 {
 	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
 	struct ttm_operation_ctx ctx = {
@@ -625,19 +626,12 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
 		.no_wait_gpu = false,
 	};
 	struct sg_table *st;
-	struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
-	struct ttm_placement placement;
 	int real_num_busy;
 	int ret;
 
-	GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS);
-
-	/* Move to the requested placement. */
-	i915_ttm_placement_from_obj(obj, &requested, busy, &placement);
-
 	/* First try only the requested placement. No eviction. */
-	real_num_busy = fetch_and_zero(&placement.num_busy_placement);
-	ret = ttm_bo_validate(bo, &placement, &ctx);
+	real_num_busy = fetch_and_zero(&placement->num_busy_placement);
+	ret = ttm_bo_validate(bo, placement, &ctx);
 	if (ret) {
 		ret = i915_ttm_err_to_gem(ret);
 		/*
@@ -652,8 +646,8 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
 		 * If the initial attempt fails, allow all accepted placements,
 		 * evicting if necessary.
 		 */
-		placement.num_busy_placement = real_num_busy;
-		ret = ttm_bo_validate(bo, &placement, &ctx);
+		placement->num_busy_placement = real_num_busy;
+		ret = ttm_bo_validate(bo, placement, &ctx);
 		if (ret)
 			return i915_ttm_err_to_gem(ret);
 	}
@@ -668,16 +662,56 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
 		i915_ttm_adjust_gem_after_move(obj);
 	}
 
-	/* Object either has a page vector or is an iomem object */
-	st = bo->ttm ? i915_ttm_tt_get_st(bo->ttm) : obj->ttm.cached_io_st;
-	if (IS_ERR(st))
-		return PTR_ERR(st);
+	if (!obj->mm.pages) {
+		/* Object either has a page vector or is an iomem object */
+		st = bo->ttm ? i915_ttm_tt_get_st(bo->ttm) : obj->ttm.cached_io_st;
+		if (IS_ERR(st))
+			return PTR_ERR(st);
 
-	__i915_gem_object_set_pages(obj, st, i915_sg_dma_sizes(st->sgl));
+		__i915_gem_object_set_pages(obj, st, i915_sg_dma_sizes(st->sgl));
+	}
 
 	return ret;
 }
 
+static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
+{
+	struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
+	struct ttm_placement placement;
+
+	GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS);
+
+	/* Move to the requested placement. */
+	i915_ttm_placement_from_obj(obj, &requested, busy, &placement);
+
+	return __i915_ttm_get_pages(obj, &placement);
+}
+
+static int i915_ttm_migrate(struct drm_i915_gem_object *obj,
+			    struct intel_memory_region *mr)
+{
+	struct ttm_place requested;
+	struct ttm_placement placement;
+	int ret;
+
+	i915_ttm_place_from_region(mr, &requested, obj->flags);
+	placement.num_placement = 1;
+	placement.num_busy_placement = 1;
+	placement.placement = &requested;
+	placement.busy_placement = &requested;
+
+	ret = __i915_ttm_get_pages(obj, &placement);
+	if (ret)
+		return ret;
+
+	if (obj->mm.region != mr) {
+		intel_memory_region_put(obj->mm.region);
+		obj->mm.region = intel_memory_region_get(mr);
+	}
+
+	return 0;
+}
+
 static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
 			       struct sg_table *st)
 {
@@ -814,6 +848,7 @@ static const struct drm_i915_gem_object_ops i915_gem_ttm_obj_ops = {
 	.truncate = i915_ttm_purge,
 	.adjust_lru = i915_ttm_adjust_lru,
 	.delayed_free = i915_ttm_delayed_free,
+	.migrate = i915_ttm_migrate,
 	.mmap_offset = i915_ttm_mmap_offset,
 	.mmap_ops = &vm_ops_ttm,
 };
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index 1070d3afdce7..e9f7a8d9f0e2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -290,3 +290,22 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	i915_gem_object_put(obj);
 	return ret;
 }
+
+/**
+ * i915_gem_object_wait_migration - Sync an accelerated migration operation
+ * @obj: The migrating object.
+ * @flags: waiting flags. Currently supports only I915_WAIT_INTERRUPTIBLE.
+ *
+ * Wait for any pending async migration operation on the object,
+ * whether it's explicitly (i915_gem_object_migrate()) or implicitly
+ * (swapin, initial clearing) initiated.
+ *
+ * Return: 0 if successful, -ERESTARTSYS if a signal was hit during waiting.
+ */
+int i915_gem_object_wait_migration(struct drm_i915_gem_object *obj,
+				   unsigned int flags)
+{
+	might_sleep();
+	/* NOP for now. */
+	return 0;
+}
-- 
2.31.1


* [PATCH 2/4] drm/i915/gem: Introduce a selftest for the gem object migrate functionality
  2021-06-24 18:31 ` [Intel-gfx] " Thomas Hellström
@ 2021-06-24 18:31   ` Thomas Hellström
  -1 siblings, 0 replies; 34+ messages in thread
From: Thomas Hellström @ 2021-06-24 18:31 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, matthew.auld

From: Matthew Auld <matthew.auld@intel.com>

A selftest for the gem object migrate functionality. Slightly adapted
from Matthew's original to the new interface and the new fill-blit
code.

Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |   1 +
 .../drm/i915/gem/selftests/i915_gem_migrate.c | 237 ++++++++++++++++++
 .../drm/i915/selftests/i915_live_selftests.h  |   1 +
 3 files changed, 239 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 6421c3a8b2f3..24f4395bf387 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -644,6 +644,7 @@ static const struct drm_gem_object_funcs i915_gem_object_funcs = {
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/huge_gem_object.c"
 #include "selftests/huge_pages.c"
+#include "selftests/i915_gem_migrate.c"
 #include "selftests/i915_gem_object.c"
 #include "selftests/i915_gem_coherency.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
new file mode 100644
index 000000000000..a437b66f64d9
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
@@ -0,0 +1,237 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020-2021 Intel Corporation
+ */
+
+#include "gt/intel_migrate.h"
+
+static int igt_smem_create_migrate(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct drm_i915_private *i915 = gt->i915;
+	struct drm_i915_gem_object *obj;
+	struct i915_gem_ww_ctx ww;
+	int err = 0;
+
+	/* Switch object backing-store on create */
+	obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	for_i915_gem_ww(&ww, err, true) {
+		err = i915_gem_object_lock(obj, &ww);
+		if (err)
+			continue;
+
+		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
+			err = -EINVAL;
+			continue;
+		}
+
+		err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
+		if (err)
+			continue;
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			continue;
+
+		if (i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM))
+			err = -EINVAL;
+
+		i915_gem_object_unpin_pages(obj);
+	}
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static int igt_lmem_create_migrate(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct drm_i915_private *i915 = gt->i915;
+	struct drm_i915_gem_object *obj;
+	struct i915_gem_ww_ctx ww;
+	int err = 0;
+
+	/* Switch object backing-store on create */
+	obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	for_i915_gem_ww(&ww, err, true) {
+		err = i915_gem_object_lock(obj, &ww);
+		if (err)
+			continue;
+
+		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
+			err = -EINVAL;
+			continue;
+		}
+
+		err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_LMEM);
+		if (err)
+			continue;
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			continue;
+
+		if (i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM))
+			err = -EINVAL;
+
+		i915_gem_object_unpin_pages(obj);
+	}
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
+				  struct drm_i915_gem_object *obj)
+{
+	int err;
+
+	err = i915_gem_object_lock(obj, ww);
+	if (err)
+		return err;
+
+	err = i915_gem_object_wait(obj,
+				   I915_WAIT_INTERRUPTIBLE |
+				   I915_WAIT_PRIORITY |
+				   I915_WAIT_ALL,
+				   MAX_SCHEDULE_TIMEOUT);
+	if (err)
+		return err;
+
+	if (i915_gem_object_is_lmem(obj)) {
+		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
+			pr_err("object can't migrate to smem.\n");
+			return -EINVAL;
+		}
+
+		err = i915_gem_object_migrate(obj, ww, INTEL_REGION_SMEM);
+		if (err) {
+			pr_err("Object failed migration to smem\n");
+			if (err)
+				return err;
+		}
+
+		if (i915_gem_object_is_lmem(obj)) {
+			pr_err("object still backed by lmem\n");
+			err = -EINVAL;
+		}
+
+		if (!i915_gem_object_has_struct_page(obj)) {
+			pr_err("object not backed by struct page\n");
+			err = -EINVAL;
+		}
+
+	} else {
+		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
+			pr_err("object can't migrate to lmem.\n");
+			return -EINVAL;
+		}
+
+		err = i915_gem_object_migrate(obj, ww, INTEL_REGION_LMEM);
+		if (err) {
+			pr_err("Object failed migration to lmem\n");
+			if (err)
+				return err;
+		}
+
+		if (i915_gem_object_has_struct_page(obj)) {
+			pr_err("object still backed by struct page\n");
+			err = -EINVAL;
+		}
+
+		if (!i915_gem_object_is_lmem(obj)) {
+			pr_err("object not backed by lmem\n");
+			err = -EINVAL;
+		}
+	}
+
+	return err;
+}
+
+static int igt_lmem_pages_migrate(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct drm_i915_private *i915 = gt->i915;
+	struct drm_i915_gem_object *obj;
+	struct i915_gem_ww_ctx ww;
+	struct i915_request *rq;
+	int err;
+	int i;
+
+	/* From LMEM to shmem and back again */
+
+	obj = i915_gem_object_create_lmem(i915, SZ_2M, 0);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	err = i915_gem_object_lock(obj, NULL);
+	if (err)
+		goto out_put;
+
+	err = ____i915_gem_object_get_pages(obj);
+	if (err) {
+		i915_gem_object_unlock(obj);
+		goto out_put;
+	}
+
+	err = intel_context_migrate_clear(gt->migrate.context, NULL,
+					  obj->mm.pages->sgl, obj->cache_level,
+					  i915_gem_object_is_lmem(obj),
+					  0, &rq);
+	if (rq) {
+		dma_resv_add_excl_fence(obj->base.resv, &rq->fence);
+		i915_request_put(rq);
+	}
+	i915_gem_object_unlock(obj);
+	if (err)
+		goto out_put;
+
+	for (i = 1; i <= 4; ++i) {
+		for_i915_gem_ww(&ww, err, true) {
+			err = lmem_pages_migrate_one(&ww, obj);
+			if (err)
+				continue;
+
+			err = i915_gem_object_wait_migration(obj, true);
+			if (err)
+				continue;
+
+			err = intel_migrate_clear(&gt->migrate, &ww, NULL,
+						  obj->mm.pages->sgl,
+						  obj->cache_level,
+						  i915_gem_object_is_lmem(obj),
+						  0xdeadbeaf, &rq);
+			if (rq) {
+				dma_resv_add_excl_fence(obj->base.resv,
+							&rq->fence);
+				i915_request_put(rq);
+			}
+		}
+		if (err)
+			break;
+	}
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+int i915_gem_migrate_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_smem_create_migrate),
+		SUBTEST(igt_lmem_create_migrate),
+		SUBTEST(igt_lmem_pages_migrate),
+	};
+
+	if (!HAS_LMEM(i915))
+		return 0;
+
+	return intel_gt_live_subtests(tests, &i915->gt);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index a68197cf1044..e2fd1b61af71 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -40,6 +40,7 @@ selftest(hugepages, i915_gem_huge_page_live_selftests)
 selftest(gem_contexts, i915_gem_context_live_selftests)
 selftest(gem_execbuf, i915_gem_execbuffer_live_selftests)
 selftest(client, i915_gem_client_blt_live_selftests)
+selftest(gem_migrate, i915_gem_migrate_live_selftests)
 selftest(reset, intel_reset_live_selftests)
 selftest(memory_region, intel_memory_region_live_selftests)
 selftest(hangcheck, intel_hangcheck_live_selftests)
-- 
2.31.1


* [PATCH 3/4] drm/i915/display: Migrate objects to LMEM if possible for display
  2021-06-24 18:31 ` [Intel-gfx] " Thomas Hellström
@ 2021-06-24 18:31   ` Thomas Hellström
  -1 siblings, 0 replies; 34+ messages in thread
From: Thomas Hellström @ 2021-06-24 18:31 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, matthew.auld

Objects intended to be used as display framebuffers must reside in
LMEM on discrete GPUs. If they do not, migrate them to LMEM before
pinning.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c |  5 ++++-
 drivers/gpu/drm/i915/gem/i915_gem_domain.c   |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c     | 21 --------------------
 drivers/gpu/drm/i915/gem/i915_gem_object.h   |  2 --
 4 files changed, 5 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 4524dbfa5e42..83a4aba54d67 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -1331,6 +1331,9 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 	ret = i915_gem_object_lock(obj, &ww);
 	if (!ret && phys_cursor)
 		ret = i915_gem_object_attach_phys(obj, alignment);
+	else if (!ret && HAS_LMEM(dev_priv))
+		ret = i915_gem_object_migrate(obj, &ww, INTEL_REGION_LMEM);
+	/* TODO: Do we need to sync when migration becomes async? */
 	if (!ret)
 		ret = i915_gem_object_pin_pages(obj);
 	if (ret)
@@ -11770,7 +11773,7 @@ intel_user_framebuffer_create(struct drm_device *dev,
 
 	/* object is backed with LMEM for discrete */
 	i915 = to_i915(obj->base.dev);
-	if (HAS_LMEM(i915) && !i915_gem_object_validates_to_lmem(obj)) {
+	if (HAS_LMEM(i915) && !i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
 		/* object is "remote", not in local memory */
 		i915_gem_object_put(obj);
 		return ERR_PTR(-EREMOTE);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 073822100da7..7d1400b13429 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -375,7 +375,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 	struct i915_vma *vma;
 	int ret;
 
-	/* Frame buffer must be in LMEM (no migration yet) */
+	/* Frame buffer must be in LMEM */
 	if (HAS_LMEM(i915) && !i915_gem_object_is_lmem(obj))
 		return ERR_PTR(-EINVAL);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index 41d5182cd367..be1d122574af 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -23,27 +23,6 @@ i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
 	return io_mapping_map_wc(&obj->mm.region->iomap, offset, size);
 }
 
-/**
- * i915_gem_object_validates_to_lmem - Whether the object is resident in
- * lmem when pages are present.
- * @obj: The object to check.
- *
- * Migratable objects residency may change from under us if the object is
- * not pinned or locked. This function is intended to be used to check whether
- * the object can only reside in lmem when pages are present.
- *
- * Return: Whether the object is always resident in lmem when pages are
- * present.
- */
-bool i915_gem_object_validates_to_lmem(struct drm_i915_gem_object *obj)
-{
-	struct intel_memory_region *mr = READ_ONCE(obj->mm.region);
-
-	return !i915_gem_object_migratable(obj) &&
-		mr && (mr->type == INTEL_MEMORY_LOCAL ||
-		       mr->type == INTEL_MEMORY_STOLEN_LOCAL);
-}
-
 /**
  * i915_gem_object_is_lmem - Whether the object is resident in
  * lmem
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 8cbd7a5334e2..d423d8cac4f2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -597,8 +597,6 @@ bool i915_gem_object_evictable(struct drm_i915_gem_object *obj);
 
 bool i915_gem_object_migratable(struct drm_i915_gem_object *obj);
 
-bool i915_gem_object_validates_to_lmem(struct drm_i915_gem_object *obj);
-
 int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
 			    struct i915_gem_ww_ctx *ww,
 			    enum intel_region_id id);
-- 
2.31.1


* [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-24 18:31 ` [Intel-gfx] " Thomas Hellström
@ 2021-06-24 18:31   ` Thomas Hellström
  -1 siblings, 0 replies; 34+ messages in thread
From: Thomas Hellström @ 2021-06-24 18:31 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, matthew.auld

Until we support p2p DMA, or as a complement to it, migrate data
to system memory at dma-buf map time if possible.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 616c3a2f1baf..a52f885bc09a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -25,7 +25,14 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
 	struct scatterlist *src, *dst;
 	int ret, i;
 
-	ret = i915_gem_object_pin_pages_unlocked(obj);
+	ret = i915_gem_object_lock_interruptible(obj, NULL);
+	if (ret)
+		return ERR_PTR(ret);
+
+	ret = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
+	if (!ret)
+		ret = i915_gem_object_pin_pages(obj);
+	i915_gem_object_unlock(obj);
 	if (ret)
 		goto err;
 
-- 
2.31.1


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/gem: Introduce a migrate interface
  2021-06-24 18:31 ` [Intel-gfx] " Thomas Hellström
                   ` (4 preceding siblings ...)
  (?)
@ 2021-06-24 22:02 ` Patchwork
  -1 siblings, 0 replies; 34+ messages in thread
From: Patchwork @ 2021-06-24 22:02 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

== Series Details ==

Series: drm/i915/gem: Introduce a migrate interface
URL   : https://patchwork.freedesktop.org/series/91890/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
ee56ef21edf6 drm/i915/gem: Implement object migration
c7fc34104143 drm/i915/gem: Introduce a selftest for the gem object migrate functionality
-:31: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#31: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 251 lines checked
974c401400ff drm/i915/display: Migrate objects to LMEM if possible for display
de1e6e1fa3f7 drm/i915/gem: Migrate to system at dma-buf map time



* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gem: Introduce a migrate interface
  2021-06-24 18:31 ` [Intel-gfx] " Thomas Hellström
                   ` (5 preceding siblings ...)
  (?)
@ 2021-06-24 22:04 ` Patchwork
  -1 siblings, 0 replies; 34+ messages in thread
From: Patchwork @ 2021-06-24 22:04 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

== Series Details ==

Series: drm/i915/gem: Introduce a migrate interface
URL   : https://patchwork.freedesktop.org/series/91890/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/i915/display/intel_display.c:1896:21:    expected struct i915_vma *[assigned] vma
+drivers/gpu/drm/i915/display/intel_display.c:1896:21:    got void [noderef] __iomem *[assigned] iomem
+drivers/gpu/drm/i915/display/intel_display.c:1896:21: warning: incorrect type in assignment (different address spaces)
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:27:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:27:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:27:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:32:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:32:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:49:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:49:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:49:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:56:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:56:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_reset.c:1396:5: warning: context imbalance in 'intel_gt_reset_trylock' - different lock contexts for basic block
+drivers/gpu/drm/i915/gt/intel_ring_submission.c:1207:24: warning: Using plain integer as NULL pointer
+drivers/gpu/drm/i915/i915_perf.c:1434:15: warning: memset with byte count of 16777216
+drivers/gpu/drm/i915/i915_perf.c:1488:15: warning: memset with byte count of 16777216
+drivers/gpu/drm/i915/selftests/i915_syncmap.c:80:54: warning: dubious: x | !y
+./include/asm-generic/bitops/find.h:112:45: warning: shift count is negative (-262080)
+./include/asm-generic/bitops/find.h:32:31: warning: shift count is negative (-262080)
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen8_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen8_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen8_write8' - different lock contexts for basic block


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gem: Introduce a migrate interface
  2021-06-24 18:31 ` [Intel-gfx] " Thomas Hellström
                   ` (6 preceding siblings ...)
  (?)
@ 2021-06-24 22:33 ` Patchwork
  -1 siblings, 0 replies; 34+ messages in thread
From: Patchwork @ 2021-06-24 22:33 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx


[-- Attachment #1.1: Type: text/plain, Size: 3385 bytes --]

== Series Details ==

Series: drm/i915/gem: Introduce a migrate interface
URL   : https://patchwork.freedesktop.org/series/91890/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10278 -> Patchwork_20462
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/index.html

New tests
---------

  New tests have been introduced between CI_DRM_10278 and Patchwork_20462:

### New IGT tests (1) ###

  * igt@i915_selftest@live@gem_migrate:
    - Statuses : 34 pass(s)
    - Exec time: [0.43, 5.13] s

  

Known issues
------------

  Here are the changes found in Patchwork_20462 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_pm_rpm@module-reload:
    - fi-kbl-7500u:       [PASS][1] -> [INCOMPLETE][2] ([i915#151] / [i915#2405])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/fi-kbl-7500u/igt@i915_pm_rpm@module-reload.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/fi-kbl-7500u/igt@i915_pm_rpm@module-reload.html

  * igt@runner@aborted:
    - fi-kbl-7500u:       NOTRUN -> [FAIL][3] ([fdo#109271] / [i915#1814] / [i915#2722] / [i915#3363])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/fi-kbl-7500u/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@i915_module_load@reload:
    - {fi-tgl-dsi}:       [DMESG-WARN][4] ([i915#1982] / [k.org#205379]) -> [PASS][5]
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/fi-tgl-dsi/igt@i915_module_load@reload.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/fi-tgl-dsi/igt@i915_module_load@reload.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#151]: https://gitlab.freedesktop.org/drm/intel/issues/151
  [i915#1814]: https://gitlab.freedesktop.org/drm/intel/issues/1814
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2405]: https://gitlab.freedesktop.org/drm/intel/issues/2405
  [i915#2722]: https://gitlab.freedesktop.org/drm/intel/issues/2722
  [i915#3363]: https://gitlab.freedesktop.org/drm/intel/issues/3363
  [k.org#205379]: https://bugzilla.kernel.org/show_bug.cgi?id=205379


Participating hosts (43 -> 39)
------------------------------

  Missing    (4): fi-ilk-m540 fi-bsw-cyan fi-bdw-samus fi-hsw-4200u 


Build changes
-------------

  * Linux: CI_DRM_10278 -> Patchwork_20462

  CI-20190529: 20190529
  CI_DRM_10278: 33497e95c159f1fe3393f1016b1ec1187eab1589 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6119: a306810ebbc8984bde38a57ef0c33eea394f4e18 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_20462: de1e6e1fa3f7380663488844a29ddb461157ef3c @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

de1e6e1fa3f7 drm/i915/gem: Migrate to system at dma-buf map time
974c401400ff drm/i915/display: Migrate objects to LMEM if possible for display
c7fc34104143 drm/i915/gem: Introduce a selftest for the gem object migrate functionality
ee56ef21edf6 drm/i915/gem: Implement object migration

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/index.html

[-- Attachment #1.2: Type: text/html, Size: 4080 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/gem: Introduce a migrate interface
  2021-06-24 18:31 ` [Intel-gfx] " Thomas Hellström
                   ` (7 preceding siblings ...)
  (?)
@ 2021-06-25  6:53 ` Patchwork
  -1 siblings, 0 replies; 34+ messages in thread
From: Patchwork @ 2021-06-25  6:53 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx


[-- Attachment #1.1: Type: text/plain, Size: 30265 bytes --]

== Series Details ==

Series: drm/i915/gem: Introduce a migrate interface
URL   : https://patchwork.freedesktop.org/series/91890/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10278_full -> Patchwork_20462_full
====================================================

Summary
-------

  **WARNING**

  Minor unknown changes coming with Patchwork_20462_full need to be verified
  manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_20462_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_20462_full:

### IGT changes ###

#### Warnings ####

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5:
    - shard-iclb:         [SKIP][1] ([i915#658]) -> [SKIP][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-iclb7/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb2/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5.html

  
New tests
---------

  New tests have been introduced between CI_DRM_10278_full and Patchwork_20462_full:

### New IGT tests (1) ###

  * igt@i915_selftest@live@gem_migrate:
    - Statuses : 7 pass(s)
    - Exec time: [0.50, 5.13] s

  

Known issues
------------

  Here are the changes found in Patchwork_20462_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_persistence@idempotent:
    - shard-snb:          NOTRUN -> [SKIP][3] ([fdo#109271] / [i915#1099]) +1 similar issue
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-snb7/igt@gem_ctx_persistence@idempotent.html

  * igt@gem_eio@unwedge-stress:
    - shard-iclb:         [PASS][4] -> [TIMEOUT][5] ([i915#2369] / [i915#2481] / [i915#3070])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-iclb6/igt@gem_eio@unwedge-stress.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb8/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_capture@pi@rcs0:
    - shard-skl:          [PASS][6] -> [INCOMPLETE][7] ([i915#2369])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl1/igt@gem_exec_capture@pi@rcs0.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl3/igt@gem_exec_capture@pi@rcs0.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-glk:          [PASS][8] -> [FAIL][9] ([i915#2842])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-glk6/igt@gem_exec_fair@basic-none-share@rcs0.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-glk2/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-none@vcs1:
    - shard-kbl:          NOTRUN -> [FAIL][10] ([i915#2842])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl4/igt@gem_exec_fair@basic-none@vcs1.html

  * igt@gem_exec_params@no-vebox:
    - shard-iclb:         NOTRUN -> [SKIP][11] ([fdo#109283])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb4/igt@gem_exec_params@no-vebox.html

  * igt@gem_exec_reloc@basic-wide-active@bcs0:
    - shard-apl:          NOTRUN -> [FAIL][12] ([i915#3633]) +3 similar issues
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl1/igt@gem_exec_reloc@basic-wide-active@bcs0.html

  * igt@gem_exec_reloc@basic-wide-active@rcs0:
    - shard-kbl:          NOTRUN -> [FAIL][13] ([i915#3633]) +4 similar issues
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl4/igt@gem_exec_reloc@basic-wide-active@rcs0.html

  * igt@gem_pread@exhaustion:
    - shard-apl:          NOTRUN -> [WARN][14] ([i915#2658])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl1/igt@gem_pread@exhaustion.html

  * igt@gem_render_copy@yf-tiled-to-vebox-yf-tiled:
    - shard-iclb:         NOTRUN -> [SKIP][15] ([i915#768])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb4/igt@gem_render_copy@yf-tiled-to-vebox-yf-tiled.html

  * igt@gem_userptr_blits@dmabuf-sync:
    - shard-apl:          NOTRUN -> [SKIP][16] ([fdo#109271] / [i915#3323])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl1/igt@gem_userptr_blits@dmabuf-sync.html

  * igt@gem_userptr_blits@input-checking:
    - shard-apl:          NOTRUN -> [DMESG-WARN][17] ([i915#3002])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl8/igt@gem_userptr_blits@input-checking.html

  * igt@gem_userptr_blits@vma-merge:
    - shard-apl:          NOTRUN -> [FAIL][18] ([i915#3318])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl1/igt@gem_userptr_blits@vma-merge.html

  * igt@gen7_exec_parse@basic-offset:
    - shard-apl:          NOTRUN -> [SKIP][19] ([fdo#109271]) +290 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl2/igt@gen7_exec_parse@basic-offset.html

  * igt@gen9_exec_parse@batch-invalid-length:
    - shard-snb:          NOTRUN -> [SKIP][20] ([fdo#109271]) +129 similar issues
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-snb7/igt@gen9_exec_parse@batch-invalid-length.html

  * igt@gen9_exec_parse@bb-start-far:
    - shard-iclb:         NOTRUN -> [SKIP][21] ([fdo#112306])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb4/igt@gen9_exec_parse@bb-start-far.html

  * igt@i915_hangman@engine-error@vecs0:
    - shard-kbl:          NOTRUN -> [SKIP][22] ([fdo#109271]) +141 similar issues
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl4/igt@i915_hangman@engine-error@vecs0.html

  * igt@i915_suspend@fence-restore-untiled:
    - shard-apl:          NOTRUN -> [DMESG-WARN][23] ([i915#180])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl3/igt@i915_suspend@fence-restore-untiled.html

  * igt@i915_suspend@sysfs-reader:
    - shard-apl:          [PASS][24] -> [DMESG-WARN][25] ([i915#180])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-apl3/igt@i915_suspend@sysfs-reader.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl1/igt@i915_suspend@sysfs-reader.html

  * igt@kms_chamelium@hdmi-hpd:
    - shard-skl:          NOTRUN -> [SKIP][26] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl7/igt@kms_chamelium@hdmi-hpd.html

  * igt@kms_chamelium@vga-hpd-for-each-pipe:
    - shard-glk:          NOTRUN -> [SKIP][27] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-glk7/igt@kms_chamelium@vga-hpd-for-each-pipe.html

  * igt@kms_color_chamelium@pipe-a-ctm-limited-range:
    - shard-apl:          NOTRUN -> [SKIP][28] ([fdo#109271] / [fdo#111827]) +28 similar issues
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl8/igt@kms_color_chamelium@pipe-a-ctm-limited-range.html

  * igt@kms_color_chamelium@pipe-a-degamma:
    - shard-kbl:          NOTRUN -> [SKIP][29] ([fdo#109271] / [fdo#111827]) +14 similar issues
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl4/igt@kms_color_chamelium@pipe-a-degamma.html

  * igt@kms_color_chamelium@pipe-c-ctm-limited-range:
    - shard-snb:          NOTRUN -> [SKIP][30] ([fdo#109271] / [fdo#111827]) +7 similar issues
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-snb5/igt@kms_color_chamelium@pipe-c-ctm-limited-range.html

  * igt@kms_content_protection@atomic:
    - shard-apl:          NOTRUN -> [TIMEOUT][31] ([i915#1319]) +2 similar issues
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl2/igt@kms_content_protection@atomic.html

  * igt@kms_content_protection@uevent:
    - shard-apl:          NOTRUN -> [FAIL][32] ([i915#2105])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl2/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_crc@pipe-b-cursor-32x32-onscreen:
    - shard-skl:          NOTRUN -> [SKIP][33] ([fdo#109271]) +20 similar issues
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl7/igt@kms_cursor_crc@pipe-b-cursor-32x32-onscreen.html

  * igt@kms_cursor_crc@pipe-b-cursor-suspend:
    - shard-skl:          NOTRUN -> [INCOMPLETE][34] ([i915#2405] / [i915#300])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl7/igt@kms_cursor_crc@pipe-b-cursor-suspend.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
    - shard-iclb:         [PASS][35] -> [INCOMPLETE][36] ([i915#1185])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-iclb2/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb3/igt@kms_cursor_crc@pipe-c-cursor-suspend.html

  * igt@kms_cursor_crc@pipe-d-cursor-256x256-random:
    - shard-glk:          NOTRUN -> [SKIP][37] ([fdo#109271]) +26 similar issues
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-glk7/igt@kms_cursor_crc@pipe-d-cursor-256x256-random.html

  * igt@kms_cursor_crc@pipe-d-cursor-64x21-sliding:
    - shard-iclb:         NOTRUN -> [SKIP][38] ([fdo#109278]) +3 similar issues
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb4/igt@kms_cursor_crc@pipe-d-cursor-64x21-sliding.html

  * igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions:
    - shard-iclb:         NOTRUN -> [SKIP][39] ([fdo#109274] / [fdo#109278])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb4/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions.html

  * igt@kms_draw_crc@draw-method-xrgb8888-mmap-cpu-ytiled:
    - shard-glk:          [PASS][40] -> [DMESG-WARN][41] ([i915#118] / [i915#95]) +2 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-glk5/igt@kms_draw_crc@draw-method-xrgb8888-mmap-cpu-ytiled.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-glk8/igt@kms_draw_crc@draw-method-xrgb8888-mmap-cpu-ytiled.html

  * igt@kms_flip@absolute-wf_vblank-interruptible@a-edp1:
    - shard-skl:          [PASS][42] -> [DMESG-WARN][43] ([i915#1982])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl5/igt@kms_flip@absolute-wf_vblank-interruptible@a-edp1.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl2/igt@kms_flip@absolute-wf_vblank-interruptible@a-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1:
    - shard-skl:          [PASS][44] -> [FAIL][45] ([i915#79])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl7/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl7/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a2:
    - shard-glk:          [PASS][46] -> [FAIL][47] ([i915#79])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-glk2/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a2.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-glk1/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a2.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-dp1:
    - shard-kbl:          [PASS][48] -> [DMESG-WARN][49] ([i915#180]) +5 similar issues
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl4/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl3/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html

  * igt@kms_flip@flip-vs-suspend@b-edp1:
    - shard-skl:          [PASS][50] -> [INCOMPLETE][51] ([i915#198])
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl10/igt@kms_flip@flip-vs-suspend@b-edp1.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl6/igt@kms_flip@flip-vs-suspend@b-edp1.html

  * igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1:
    - shard-skl:          [PASS][52] -> [FAIL][53] ([i915#2122]) +1 similar issue
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl2/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl9/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs:
    - shard-apl:          NOTRUN -> [SKIP][54] ([fdo#109271] / [i915#2672])
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl3/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt:
    - shard-iclb:         NOTRUN -> [SKIP][55] ([fdo#109280]) +5 similar issues
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb4/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
    - shard-apl:          NOTRUN -> [SKIP][56] ([fdo#109271] / [i915#533]) +2 similar issues
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl3/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-basic:
    - shard-apl:          NOTRUN -> [FAIL][57] ([fdo#108145] / [i915#265]) +3 similar issues
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl8/igt@kms_plane_alpha_blend@pipe-a-alpha-basic.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-basic:
    - shard-kbl:          NOTRUN -> [FAIL][58] ([fdo#108145] / [i915#265])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl7/igt@kms_plane_alpha_blend@pipe-c-alpha-basic.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-1:
    - shard-apl:          NOTRUN -> [SKIP][59] ([fdo#109271] / [i915#658]) +3 similar issues
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl2/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-1.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-5:
    - shard-kbl:          NOTRUN -> [SKIP][60] ([fdo#109271] / [i915#658]) +1 similar issue
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl4/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-5.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2:
    - shard-glk:          NOTRUN -> [SKIP][61] ([fdo#109271] / [i915#658])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-glk7/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2.html

  * igt@kms_psr@psr2_suspend:
    - shard-iclb:         [PASS][62] -> [SKIP][63] ([fdo#109441]) +1 similar issue
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-iclb2/igt@kms_psr@psr2_suspend.html
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb8/igt@kms_psr@psr2_suspend.html

  * igt@kms_sequence@queue-idle:
    - shard-tglb:         [PASS][64] -> [DMESG-WARN][65] ([i915#2868])
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-tglb5/igt@kms_sequence@queue-idle.html
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-tglb5/igt@kms_sequence@queue-idle.html

  * igt@kms_setmode@clone-exclusive-crtc:
    - shard-skl:          NOTRUN -> [WARN][66] ([i915#2100])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl3/igt@kms_setmode@clone-exclusive-crtc.html

  * igt@kms_vblank@pipe-b-ts-continuation-dpms-suspend:
    - shard-apl:          NOTRUN -> [DMESG-WARN][67] ([i915#165])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl2/igt@kms_vblank@pipe-b-ts-continuation-dpms-suspend.html

  * igt@kms_vblank@pipe-c-ts-continuation-dpms-suspend:
    - shard-skl:          [PASS][68] -> [INCOMPLETE][69] ([i915#198] / [i915#2405])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl9/igt@kms_vblank@pipe-c-ts-continuation-dpms-suspend.html
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl6/igt@kms_vblank@pipe-c-ts-continuation-dpms-suspend.html

  * igt@kms_vblank@pipe-d-wait-idle:
    - shard-kbl:          NOTRUN -> [SKIP][70] ([fdo#109271] / [i915#533])
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl1/igt@kms_vblank@pipe-d-wait-idle.html

  * igt@kms_writeback@writeback-invalid-parameters:
    - shard-apl:          NOTRUN -> [SKIP][71] ([fdo#109271] / [i915#2437]) +1 similar issue
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl8/igt@kms_writeback@writeback-invalid-parameters.html

  * igt@nouveau_crc@pipe-c-source-outp-complete:
    - shard-iclb:         NOTRUN -> [SKIP][72] ([i915#2530])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb4/igt@nouveau_crc@pipe-c-source-outp-complete.html

  * igt@perf@polling-parameterized:
    - shard-skl:          [PASS][73] -> [FAIL][74] ([i915#1542])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl10/igt@perf@polling-parameterized.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl5/igt@perf@polling-parameterized.html

  * igt@perf@polling-small-buf:
    - shard-skl:          [PASS][75] -> [FAIL][76] ([i915#1722])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl10/igt@perf@polling-small-buf.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl6/igt@perf@polling-small-buf.html

  * igt@sysfs_clients@create:
    - shard-apl:          NOTRUN -> [SKIP][77] ([fdo#109271] / [i915#2994]) +2 similar issues
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl2/igt@sysfs_clients@create.html

  * igt@sysfs_clients@fair-1:
    - shard-kbl:          NOTRUN -> [SKIP][78] ([fdo#109271] / [i915#2994])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl4/igt@sysfs_clients@fair-1.html

  
#### Possible fixes ####

  * igt@feature_discovery@psr2:
    - shard-iclb:         [SKIP][79] ([i915#658]) -> [PASS][80]
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-iclb7/igt@feature_discovery@psr2.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb2/igt@feature_discovery@psr2.html

  * igt@gem_eio@unwedge-stress:
    - shard-tglb:         [TIMEOUT][81] ([i915#2369] / [i915#3063]) -> [PASS][82]
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-tglb5/igt@gem_eio@unwedge-stress.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-tglb5/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-glk:          [FAIL][83] ([i915#2846]) -> [PASS][84]
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-glk1/igt@gem_exec_fair@basic-deadline.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-glk3/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-tglb:         [FAIL][85] ([i915#2842]) -> [PASS][86]
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-tglb2/igt@gem_exec_fair@basic-none-share@rcs0.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-tglb3/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-iclb:         [FAIL][87] ([i915#2849]) -> [PASS][88]
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-iclb4/igt@gem_exec_fair@basic-throttle@rcs0.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb1/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@gem_mmap_gtt@big-copy-xy:
    - shard-skl:          [FAIL][89] ([i915#307]) -> [PASS][90] +1 similar issue
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl7/igt@gem_mmap_gtt@big-copy-xy.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl9/igt@gem_mmap_gtt@big-copy-xy.html

  * igt@gem_workarounds@suspend-resume-fd:
    - shard-kbl:          [DMESG-WARN][91] ([i915#180]) -> [PASS][92] +2 similar issues
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl2/igt@gem_workarounds@suspend-resume-fd.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl7/igt@gem_workarounds@suspend-resume-fd.html

  * igt@i915_suspend@fence-restore-tiled2untiled:
    - shard-apl:          [DMESG-WARN][93] ([i915#180]) -> [PASS][94] +2 similar issues
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-apl3/igt@i915_suspend@fence-restore-tiled2untiled.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl1/igt@i915_suspend@fence-restore-tiled2untiled.html

  * igt@kms_big_fb@linear-64bpp-rotate-0:
    - shard-iclb:         [DMESG-WARN][95] ([i915#3621]) -> [PASS][96]
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-iclb1/igt@kms_big_fb@linear-64bpp-rotate-0.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb4/igt@kms_big_fb@linear-64bpp-rotate-0.html

  * igt@kms_cursor_edge_walk@pipe-c-128x128-left-edge:
    - shard-skl:          [DMESG-WARN][97] ([i915#1982]) -> [PASS][98]
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl9/igt@kms_cursor_edge_walk@pipe-c-128x128-left-edge.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl6/igt@kms_cursor_edge_walk@pipe-c-128x128-left-edge.html

  * igt@kms_fbcon_fbt@fbc-suspend:
    - shard-kbl:          [INCOMPLETE][99] ([i915#155] / [i915#180] / [i915#636]) -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl7/igt@kms_fbcon_fbt@fbc-suspend.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl1/igt@kms_fbcon_fbt@fbc-suspend.html

  * igt@kms_flip@2x-flip-vs-absolute-wf_vblank-interruptible@ac-hdmi-a1-hdmi-a2:
    - shard-glk:          [FAIL][101] ([i915#2122]) -> [PASS][102]
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-glk7/igt@kms_flip@2x-flip-vs-absolute-wf_vblank-interruptible@ac-hdmi-a1-hdmi-a2.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-glk6/igt@kms_flip@2x-flip-vs-absolute-wf_vblank-interruptible@ac-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ac-hdmi-a1-hdmi-a2:
    - shard-glk:          [FAIL][103] ([i915#79]) -> [PASS][104]
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-glk4/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ac-hdmi-a1-hdmi-a2.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-glk5/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ac-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@plain-flip-ts-check@c-edp1:
    - shard-skl:          [FAIL][105] ([i915#2122]) -> [PASS][106]
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl6/igt@kms_flip@plain-flip-ts-check@c-edp1.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl1/igt@kms_flip@plain-flip-ts-check@c-edp1.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          [FAIL][107] ([i915#1188]) -> [PASS][108]
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl7/igt@kms_hdr@bpc-switch-suspend.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl7/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_plane_alpha_blend@pipe-a-coverage-7efc:
    - shard-skl:          [FAIL][109] ([fdo#108145] / [i915#265]) -> [PASS][110] +1 similar issue
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl3/igt@kms_plane_alpha_blend@pipe-a-coverage-7efc.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl6/igt@kms_plane_alpha_blend@pipe-a-coverage-7efc.html

  * igt@kms_psr@psr2_cursor_render:
    - shard-iclb:         [SKIP][111] ([fdo#109441]) -> [PASS][112] +1 similar issue
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-iclb7/igt@kms_psr@psr2_cursor_render.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb2/igt@kms_psr@psr2_cursor_render.html

  * igt@kms_vblank@pipe-a-ts-continuation-dpms-suspend:
    - shard-kbl:          [INCOMPLETE][113] ([i915#155]) -> [PASS][114]
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl3/igt@kms_vblank@pipe-a-ts-continuation-dpms-suspend.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl6/igt@kms_vblank@pipe-a-ts-continuation-dpms-suspend.html

  
#### Warnings ####

  * igt@i915_pm_rc6_residency@rc6-fence:
    - shard-iclb:         [WARN][115] ([i915#1804] / [i915#2684]) -> [WARN][116] ([i915#2684]) +1 similar issue
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-iclb7/igt@i915_pm_rc6_residency@rc6-fence.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb2/igt@i915_pm_rc6_residency@rc6-fence.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area-2:
    - shard-iclb:         [SKIP][117] -> [SKIP][118] ([i915#658])
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-iclb2/igt@kms_psr2_sf@plane-move-sf-dmg-area-2.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-iclb8/igt@kms_psr2_sf@plane-move-sf-dmg-area-2.html

  * igt@runner@aborted:
    - shard-kbl:          ([FAIL][119], [FAIL][120], [FAIL][121], [FAIL][122], [FAIL][123], [FAIL][124], [FAIL][125], [FAIL][126]) ([i915#180] / [i915#1814] / [i915#3002] / [i915#3363] / [i915#92]) -> ([FAIL][127], [FAIL][128], [FAIL][129], [FAIL][130], [FAIL][131], [FAIL][132], [FAIL][133]) ([i915#1436] / [i915#180] / [i915#1814] / [i915#2505] / [i915#3002] / [i915#3363])
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl3/igt@runner@aborted.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl2/igt@runner@aborted.html
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl3/igt@runner@aborted.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl2/igt@runner@aborted.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl3/igt@runner@aborted.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl3/igt@runner@aborted.html
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl7/igt@runner@aborted.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-kbl7/igt@runner@aborted.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl6/igt@runner@aborted.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl7/igt@runner@aborted.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl2/igt@runner@aborted.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl3/igt@runner@aborted.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl3/igt@runner@aborted.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl1/igt@runner@aborted.html
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-kbl1/igt@runner@aborted.html
    - shard-apl:          ([FAIL][134], [FAIL][135], [FAIL][136], [FAIL][137], [FAIL][138]) ([fdo#109271] / [i915#180] / [i915#3363]) -> ([FAIL][139], [FAIL][140], [FAIL][141]) ([i915#180] / [i915#3002] / [i915#3363])
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-apl3/igt@runner@aborted.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-apl8/igt@runner@aborted.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-apl3/igt@runner@aborted.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-apl1/igt@runner@aborted.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-apl3/igt@runner@aborted.html
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl1/igt@runner@aborted.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl8/igt@runner@aborted.html
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-apl3/igt@runner@aborted.html
    - shard-skl:          ([FAIL][142], [FAIL][143]) ([i915#1814] / [i915#3002] / [i915#3363]) -> [FAIL][144] ([i915#3002] / [i915#3363])
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl5/igt@runner@aborted.html
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10278/shard-skl9/igt@runner@aborted.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/shard-skl2/igt@runner@aborted.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
  [fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109283]: https://bugs.freedesktop.org/show_bug.cgi?id=109283
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [fdo#112306]: https://bugs.freedesktop.org/show_bug.cgi?id=112306
  [i915#1099]: https://gitlab.freedesktop.org/drm/intel/issues/1099
  [i915#118]: https://gitlab.freedesktop.org/drm/intel/issues/118
  [i915#1185]: https://gitlab.freedesktop.org/drm/intel/issues/1185
  [i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188
  [i915#1319]: https://gitlab.freedesktop.org/drm/intel/issues/1319

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20462/index.html

[-- Attachment #1.2: Type: text/html, Size: 35479 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 1/4] drm/i915/gem: Implement object migration
  2021-06-24 18:31   ` [Intel-gfx] " Thomas Hellström
@ 2021-06-25 10:51     ` Matthew Auld
  -1 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2021-06-25 10:51 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel

On 24/06/2021 19:31, Thomas Hellström wrote:
> Introduce an interface to migrate objects between regions.
> This is primarily intended to migrate objects to LMEM for display and
> to SYSTEM for dma-buf, but might be reused in one form or another for
> performance-based migration.
> 
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>   drivers/gpu/drm/i915/gem/i915_gem_object.c    | 91 +++++++++++++++++++
>   drivers/gpu/drm/i915/gem/i915_gem_object.h    | 12 +++
>   .../gpu/drm/i915/gem/i915_gem_object_types.h  |  9 ++
>   drivers/gpu/drm/i915/gem/i915_gem_ttm.c       | 69 ++++++++++----
>   drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 19 ++++
>   5 files changed, 183 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> index 07e8ff9a8aae..6421c3a8b2f3 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> @@ -513,6 +513,97 @@ bool i915_gem_object_has_iomem(const struct drm_i915_gem_object *obj)
>   	return obj->mem_flags & I915_BO_FLAG_IOMEM;
>   }
>   
> +/**
> + * i915_gem_object_can_migrate - Whether an object likely can be migrated
> + *
> + * @obj: The object to migrate
> + * @id: The region intended to migrate to
> + *
> + * Check whether the object backend supports migration to the
> + * given region. Note that pinning may affect the ability to migrate.
> + *
> + * Return: true if migration is possible, false otherwise.
> + */
> +bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
> +				 enum intel_region_id id)
> +{
> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +	unsigned int num_allowed = obj->mm.n_placements;
> +	struct intel_memory_region *mr;
> +	unsigned int i;
> +
> +	GEM_BUG_ON(id >= INTEL_REGION_UNKNOWN);
> +	GEM_BUG_ON(obj->mm.madv != I915_MADV_WILLNEED);
> +
> +	if (!obj->ops->migrate)
> +		return -EOPNOTSUPP;
> +
> +	mr = i915->mm.regions[id];

if (!mr)
     return false;

?

> +	if (obj->mm.region == mr)
> +		return true;
> +
> +	if (!i915_gem_object_evictable(obj))
> +		return false;
> +
> +	if (!(obj->flags & I915_BO_ALLOC_USER))
> +		return true;
> +
> +	if (num_allowed == 0)
> +		return false;
> +
> +	for (i = 0; i < num_allowed; ++i) {
> +		if (mr == obj->mm.placements[i])
> +			return true;
> +	}
> +
> +	return false;
> +}
> +
> +/**
> + * i915_gem_object_migrate - Migrate an object to the desired region id
> + * @obj: The object to migrate.
> + * @ww: An optional struct i915_gem_ww_ctx. If NULL, the backend may
> + * not be successful in evicting other objects to make room for this object.
> + * @id: The region id to migrate to.
> + *
> + * Attempt to migrate the object to the desired memory region. The
> + * object backend must support migration and the object may not be
> + * pinned, (explicitly pinned pages or pinned vmas). The object must
> + * be locked.
> + * On successful completion, the object will have pages pointing to
> + * memory in the new region, but an async migration task may not have
> + * completed yet, and to accomplish that, i915_gem_object_wait_migration()
> + * must be called.
> + *
> + * Return: 0 on success. Negative error code on failure. In particular may
> + * return -ENXIO on lack of region space, -EDEADLK for deadlock avoidance
> + * if @ww is set, -EINTR or -ERESTARTSYS if signal pending, and
> + * -EBUSY if the object is pinned.
> + */
> +int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
> +			    struct i915_gem_ww_ctx *ww,
> +			    enum intel_region_id id)
> +{
> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +	struct intel_memory_region *mr;
> +
> +	GEM_BUG_ON(id >= INTEL_REGION_UNKNOWN);
> +	GEM_BUG_ON(obj->mm.madv != I915_MADV_WILLNEED);
> +	assert_object_held(obj);
> +
> +	mr = i915->mm.regions[id];

GEM_BUG_ON(!mr) ?

> +	if (obj->mm.region == mr)
> +		return 0;
> +
> +	if (!i915_gem_object_evictable(obj))
> +		return -EBUSY;
> +
> +	if (!obj->ops->migrate)
> +		return -EOPNOTSUPP;
> +
> +	return obj->ops->migrate(obj, mr);
> +}
> +
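
FWIW, reading the kernel-doc above together with i915_gem_object_can_migrate(),
the intended caller pattern appears to be roughly the following. This is only an
illustrative sketch: the helper name, the INTEL_REGION_LMEM id and the NULL ww
context (i.e. no eviction of other objects, as documented) are mine, not part of
this patch.

/* Hypothetical caller: lock, migrate, then sync the async copy. */
static int example_move_to_lmem(struct drm_i915_gem_object *obj)
{
	int ret;

	ret = i915_gem_object_lock_interruptible(obj, NULL);
	if (ret)
		return ret;

	if (!i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
		ret = -EOPNOTSUPP;
		goto out_unlock;
	}

	ret = i915_gem_object_migrate(obj, NULL, INTEL_REGION_LMEM);
	if (!ret)
		ret = i915_gem_object_wait_migration(obj,
						     I915_WAIT_INTERRUPTIBLE);
out_unlock:
	i915_gem_object_unlock(obj);
	return ret;
}
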
>   void i915_gem_init__objects(struct drm_i915_private *i915)
>   {
>   	INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> index ea3224a480c4..8cbd7a5334e2 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> @@ -17,6 +17,8 @@
>   #include "i915_gem_ww.h"
>   #include "i915_vma_types.h"
>   
> +enum intel_region_id;
> +
>   /*
>    * XXX: There is a prevalence of the assumption that we fit the
>    * object's page count inside a 32bit _signed_ variable. Let's document
> @@ -597,6 +599,16 @@ bool i915_gem_object_migratable(struct drm_i915_gem_object *obj);
>   
>   bool i915_gem_object_validates_to_lmem(struct drm_i915_gem_object *obj);
>   
> +int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
> +			    struct i915_gem_ww_ctx *ww,
> +			    enum intel_region_id id);
> +
> +bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
> +				 enum intel_region_id id);
> +
> +int i915_gem_object_wait_migration(struct drm_i915_gem_object *obj,
> +				   unsigned int flags);
> +
>   #ifdef CONFIG_MMU_NOTIFIER
>   static inline bool
>   i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> index 441f913c87e6..ef3de2ae9723 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> @@ -18,6 +18,7 @@
>   
>   struct drm_i915_gem_object;
>   struct intel_fronbuffer;
> +struct intel_memory_region;
>   
>   /*
>    * struct i915_lut_handle tracks the fast lookups from handle to vma used
> @@ -77,6 +78,14 @@ struct drm_i915_gem_object_ops {
>   	 * delayed_free - Override the default delayed free implementation
>   	 */
>   	void (*delayed_free)(struct drm_i915_gem_object *obj);
> +
> +	/**
> +	 * migrate - Migrate object to a different region either for
> +	 * pinning or for as long as the object lock is held.
> +	 */
> +	int (*migrate)(struct drm_i915_gem_object *obj,
> +		       struct intel_memory_region *mr);
> +
>   	void (*release)(struct drm_i915_gem_object *obj);
>   
>   	const struct vm_operations_struct *mmap_ops;
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> index c39d982c4fa6..b01bf0650ab0 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> @@ -617,7 +617,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
>   	return &i915_ttm_bo_driver;
>   }
>   
> -static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
> +static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
> +				struct ttm_placement *placement)
>   {
>   	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
>   	struct ttm_operation_ctx ctx = {
> @@ -625,19 +626,12 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
>   		.no_wait_gpu = false,
>   	};
>   	struct sg_table *st;
> -	struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
> -	struct ttm_placement placement;
>   	int real_num_busy;
>   	int ret;
>   
> -	GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS);
> -
> -	/* Move to the requested placement. */
> -	i915_ttm_placement_from_obj(obj, &requested, busy, &placement);
> -
>   	/* First try only the requested placement. No eviction. */
> -	real_num_busy = fetch_and_zero(&placement.num_busy_placement);
> -	ret = ttm_bo_validate(bo, &placement, &ctx);
> +	real_num_busy = fetch_and_zero(&placement->num_busy_placement);
> +	ret = ttm_bo_validate(bo, placement, &ctx);
>   	if (ret) {
>   		ret = i915_ttm_err_to_gem(ret);
>   		/*
> @@ -652,8 +646,8 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
>   		 * If the initial attempt fails, allow all accepted placements,
>   		 * evicting if necessary.
>   		 */
> -		placement.num_busy_placement = real_num_busy;
> -		ret = ttm_bo_validate(bo, &placement, &ctx);
> +		placement->num_busy_placement = real_num_busy;
> +		ret = ttm_bo_validate(bo, placement, &ctx);
>   		if (ret)
>   			return i915_ttm_err_to_gem(ret);
>   	}
> @@ -668,16 +662,56 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
>   		i915_ttm_adjust_gem_after_move(obj);
>   	}
>   
> -	/* Object either has a page vector or is an iomem object */
> -	st = bo->ttm ? i915_ttm_tt_get_st(bo->ttm) : obj->ttm.cached_io_st;
> -	if (IS_ERR(st))
> -		return PTR_ERR(st);
> +	if (!obj->mm.pages) {
> +		/* Object either has a page vector or is an iomem object */
> +		st = bo->ttm ? i915_ttm_tt_get_st(bo->ttm) : obj->ttm.cached_io_st;
> +		if (IS_ERR(st))
> +			return PTR_ERR(st);
>   
> -	__i915_gem_object_set_pages(obj, st, i915_sg_dma_sizes(st->sgl));
> +		__i915_gem_object_set_pages(obj, st, i915_sg_dma_sizes(st->sgl));
> +	}
>   
>   	return ret;
>   }
>   
> +static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
> +{
> +	struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
> +	struct ttm_placement placement;
> +
> +	GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS);
> +
> +	/* Move to the requested placement. */
> +	i915_ttm_placement_from_obj(obj, &requested, busy, &placement);
> +
> +	return __i915_ttm_get_pages(obj, &placement);
> +}
> +
> +static int i915_ttm_migrate(struct drm_i915_gem_object *obj,
> +			    struct intel_memory_region *mr)
> +{
> +	struct ttm_place requested;
> +	struct ttm_placement placement;
> +	int ret;
> +
> +	i915_ttm_place_from_region(mr, &requested, obj->flags);
> +	placement.num_placement = 1;
> +	placement.num_busy_placement = 1;
> +	placement.placement = &requested;
> +	placement.busy_placement = &requested;
> +
> +	ret = __i915_ttm_get_pages(obj, &placement);
> +	if (ret)
> +		return ret;
> +
> +	if (obj->mm.region != mr) {
> +		intel_memory_region_put(obj->mm.region);
> +		obj->mm.region = intel_memory_region_get(mr);

i915_gem_object_{init,release}_memory_region?

> +	}
> +
> +	return 0;
> +}
> +
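
On the i915_gem_object_{init,release}_memory_region question above: if those
helpers are used, the region swap would presumably look something like the
snippet below. Untested sketch, assuming the helpers still only swap the region
reference and move the object between the regions' object lists.

	/* Possible shape of the suggested change inside i915_ttm_migrate(). */
	if (obj->mm.region != mr) {
		i915_gem_object_release_memory_region(obj);
		i915_gem_object_init_memory_region(obj, mr);
	}
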
>   static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
>   			       struct sg_table *st)
>   {
> @@ -814,6 +848,7 @@ static const struct drm_i915_gem_object_ops i915_gem_ttm_obj_ops = {
>   	.truncate = i915_ttm_purge,
>   	.adjust_lru = i915_ttm_adjust_lru,
>   	.delayed_free = i915_ttm_delayed_free,
> +	.migrate = i915_ttm_migrate,
>   	.mmap_offset = i915_ttm_mmap_offset,
>   	.mmap_ops = &vm_ops_ttm,
>   };
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> index 1070d3afdce7..e9f7a8d9f0e2 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> @@ -290,3 +290,22 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>   	i915_gem_object_put(obj);
>   	return ret;
>   }
> +
> +/**
> + * i915_gem_object_wait_migrate - Sync an accelerated migration operation
> + * @obj: The migrating object.
> + * @flags: waiting flags. Currently supports only I915_WAIT_INTERRUPTIBLE.
> + *
> + * Wait for any pending async migration operation on the object,
> + * whether it's explicitly (i915_gem_object_migrate()) or implicitly
> + * (swapin, initial clearing) initiated.
> + *
> + * Return: 0 if successful, -ERESTARTSYS if a signal was hit during waiting.
> + */
> +int i915_gem_object_wait_migration(struct drm_i915_gem_object *obj,
> +				   unsigned int flags)
> +{
> +	might_sleep();
> +	/* NOP for now. */
> +	return 0;
> +}
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Intel-gfx] [PATCH 1/4] drm/i915/gem: Implement object migration
@ 2021-06-25 10:51     ` Matthew Auld
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Auld @ 2021-06-25 10:51 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel

On 24/06/2021 19:31, Thomas Hellström wrote:
> Introduce an interface to migrate objects between regions.
> This is primarily intended to migrate objects to LMEM for display and
> to SYSTEM for dma-buf, but might be reused in one form or another for
> performance-based migration.
> 
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>   drivers/gpu/drm/i915/gem/i915_gem_object.c    | 91 +++++++++++++++++++
>   drivers/gpu/drm/i915/gem/i915_gem_object.h    | 12 +++
>   .../gpu/drm/i915/gem/i915_gem_object_types.h  |  9 ++
>   drivers/gpu/drm/i915/gem/i915_gem_ttm.c       | 69 ++++++++++----
>   drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 19 ++++
>   5 files changed, 183 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> index 07e8ff9a8aae..6421c3a8b2f3 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> @@ -513,6 +513,97 @@ bool i915_gem_object_has_iomem(const struct drm_i915_gem_object *obj)
>   	return obj->mem_flags & I915_BO_FLAG_IOMEM;
>   }
>   
> +/**
> + * i915_gem_object_can_migrate - Whether an object likely can be migrated
> + *
> + * @obj: The object to migrate
> + * @id: The region intended to migrate to
> + *
> + * Check whether the object backend supports migration to the
> + * given region. Note that pinning may affect the ability to migrate.
> + *
> + * Return: true if migration is possible, false otherwise.
> + */
> +bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
> +				 enum intel_region_id id)
> +{
> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +	unsigned int num_allowed = obj->mm.n_placements;
> +	struct intel_memory_region *mr;
> +	unsigned int i;
> +
> +	GEM_BUG_ON(id >= INTEL_REGION_UNKNOWN);
> +	GEM_BUG_ON(obj->mm.madv != I915_MADV_WILLNEED);
> +
> +	if (!obj->ops->migrate)
> +		return -EOPNOTSUPP;
> +
> +	mr = i915->mm.regions[id];

if (!mr)
     return false;

?

> +	if (obj->mm.region == mr)
> +		return true;
> +
> +	if (!i915_gem_object_evictable(obj))
> +		return false;
> +
> +	if (!(obj->flags & I915_BO_ALLOC_USER))
> +		return true;
> +
> +	if (num_allowed == 0)
> +		return false;
> +
> +	for (i = 0; i < num_allowed; ++i) {
> +		if (mr == obj->mm.placements[i])
> +			return true;
> +	}
> +
> +	return false;
> +}
> +
> +/**
> + * i915_gem_object_migrate - Migrate an object to the desired region id
> + * @obj: The object to migrate.
> + * @ww: An optional struct i915_gem_ww_ctx. If NULL, the backend may
> + * not be successful in evicting other objects to make room for this object.
> + * @id: The region id to migrate to.
> + *
> + * Attempt to migrate the object to the desired memory region. The
> + * object backend must support migration and the object may not be
> + * pinned, (explicitly pinned pages or pinned vmas). The object must
> + * be locked.
> + * On successful completion, the object will have pages pointing to
> + * memory in the new region, but an async migration task may not have
> + * completed yet, and to accomplish that, i915_gem_object_wait_migration()
> + * must be called.
> + *
> + * Return: 0 on success. Negative error code on failure. In particular may
> + * return -ENXIO on lack of region space, -EDEADLK for deadlock avoidance
> + * if @ww is set, -EINTR or -ERESTARTSYS if signal pending, and
> + * -EBUSY if the object is pinned.
> + */
> +int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
> +			    struct i915_gem_ww_ctx *ww,
> +			    enum intel_region_id id)
> +{
> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +	struct intel_memory_region *mr;
> +
> +	GEM_BUG_ON(id >= INTEL_REGION_UNKNOWN);
> +	GEM_BUG_ON(obj->mm.madv != I915_MADV_WILLNEED);
> +	assert_object_held(obj);
> +
> +	mr = i915->mm.regions[id];

GEM_BUG_ON(!mr) ?

> +	if (obj->mm.region == mr)
> +		return 0;
> +
> +	if (!i915_gem_object_evictable(obj))
> +		return -EBUSY;
> +
> +	if (!obj->ops->migrate)
> +		return -EOPNOTSUPP;
> +
> +	return obj->ops->migrate(obj, mr);
> +}
> +
>   void i915_gem_init__objects(struct drm_i915_private *i915)
>   {
>   	INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> index ea3224a480c4..8cbd7a5334e2 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> @@ -17,6 +17,8 @@
>   #include "i915_gem_ww.h"
>   #include "i915_vma_types.h"
>   
> +enum intel_region_id;
> +
>   /*
>    * XXX: There is a prevalence of the assumption that we fit the
>    * object's page count inside a 32bit _signed_ variable. Let's document
> @@ -597,6 +599,16 @@ bool i915_gem_object_migratable(struct drm_i915_gem_object *obj);
>   
>   bool i915_gem_object_validates_to_lmem(struct drm_i915_gem_object *obj);
>   
> +int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
> +			    struct i915_gem_ww_ctx *ww,
> +			    enum intel_region_id id);
> +
> +bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
> +				 enum intel_region_id id);
> +
> +int i915_gem_object_wait_migration(struct drm_i915_gem_object *obj,
> +				   unsigned int flags);
> +
>   #ifdef CONFIG_MMU_NOTIFIER
>   static inline bool
>   i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> index 441f913c87e6..ef3de2ae9723 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> @@ -18,6 +18,7 @@
>   
>   struct drm_i915_gem_object;
>   struct intel_fronbuffer;
> +struct intel_memory_region;
>   
>   /*
>    * struct i915_lut_handle tracks the fast lookups from handle to vma used
> @@ -77,6 +78,14 @@ struct drm_i915_gem_object_ops {
>   	 * delayed_free - Override the default delayed free implementation
>   	 */
>   	void (*delayed_free)(struct drm_i915_gem_object *obj);
> +
> +	/**
> +	 * migrate - Migrate object to a different region either for
> +	 * pinning or for as long as the object lock is held.
> +	 */
> +	int (*migrate)(struct drm_i915_gem_object *obj,
> +		       struct intel_memory_region *mr);
> +
>   	void (*release)(struct drm_i915_gem_object *obj);
>   
>   	const struct vm_operations_struct *mmap_ops;
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> index c39d982c4fa6..b01bf0650ab0 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> @@ -617,7 +617,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
>   	return &i915_ttm_bo_driver;
>   }
>   
> -static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
> +static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
> +				struct ttm_placement *placement)
>   {
>   	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
>   	struct ttm_operation_ctx ctx = {
> @@ -625,19 +626,12 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
>   		.no_wait_gpu = false,
>   	};
>   	struct sg_table *st;
> -	struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
> -	struct ttm_placement placement;
>   	int real_num_busy;
>   	int ret;
>   
> -	GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS);
> -
> -	/* Move to the requested placement. */
> -	i915_ttm_placement_from_obj(obj, &requested, busy, &placement);
> -
>   	/* First try only the requested placement. No eviction. */
> -	real_num_busy = fetch_and_zero(&placement.num_busy_placement);
> -	ret = ttm_bo_validate(bo, &placement, &ctx);
> +	real_num_busy = fetch_and_zero(&placement->num_busy_placement);
> +	ret = ttm_bo_validate(bo, placement, &ctx);
>   	if (ret) {
>   		ret = i915_ttm_err_to_gem(ret);
>   		/*
> @@ -652,8 +646,8 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
>   		 * If the initial attempt fails, allow all accepted placements,
>   		 * evicting if necessary.
>   		 */
> -		placement.num_busy_placement = real_num_busy;
> -		ret = ttm_bo_validate(bo, &placement, &ctx);
> +		placement->num_busy_placement = real_num_busy;
> +		ret = ttm_bo_validate(bo, placement, &ctx);
>   		if (ret)
>   			return i915_ttm_err_to_gem(ret);
>   	}
> @@ -668,16 +662,56 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
>   		i915_ttm_adjust_gem_after_move(obj);
>   	}
>   
> -	/* Object either has a page vector or is an iomem object */
> -	st = bo->ttm ? i915_ttm_tt_get_st(bo->ttm) : obj->ttm.cached_io_st;
> -	if (IS_ERR(st))
> -		return PTR_ERR(st);
> +	if (!obj->mm.pages) {
> +		/* Object either has a page vector or is an iomem object */
> +		st = bo->ttm ? i915_ttm_tt_get_st(bo->ttm) : obj->ttm.cached_io_st;
> +		if (IS_ERR(st))
> +			return PTR_ERR(st);
>   
> -	__i915_gem_object_set_pages(obj, st, i915_sg_dma_sizes(st->sgl));
> +		__i915_gem_object_set_pages(obj, st, i915_sg_dma_sizes(st->sgl));
> +	}
>   
>   	return ret;
>   }
>   
> +static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
> +{
> +	struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
> +	struct ttm_placement placement;
> +
> +	GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS);
> +
> +	/* Move to the requested placement. */
> +	i915_ttm_placement_from_obj(obj, &requested, busy, &placement);
> +
> +	return __i915_ttm_get_pages(obj, &placement);
> +}
> +
> +static int i915_ttm_migrate(struct drm_i915_gem_object *obj,
> +			    struct intel_memory_region *mr)
> +{
> +	struct ttm_place requested;
> +	struct ttm_placement placement;
> +	int ret;
> +
> +	i915_ttm_place_from_region(mr, &requested, obj->flags);
> +	placement.num_placement = 1;
> +	placement.num_busy_placement = 1;
> +	placement.placement = &requested;
> +	placement.busy_placement = &requested;
> +
> +	ret = __i915_ttm_get_pages(obj, &placement);
> +	if (ret)
> +		return ret;
> +
> +	if (obj->mm.region != mr) {
> +		intel_memory_region_put(obj->mm.region);
> +		obj->mm.region = intel_memory_region_get(mr);

i915_gem_object_{init,release}_memory_region?
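
Something like the below is what I'd picture — only a rough sketch, assuming
those helpers can be used to swap regions on an already-initialized object
(they should also keep the region's object list in sync, if I read them right):

	if (obj->mm.region != mr) {
		i915_gem_object_release_memory_region(obj);
		i915_gem_object_init_memory_region(obj, mr);
	}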

> +	}
> +
> +	return 0;
> +}
> +
>   static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
>   			       struct sg_table *st)
>   {
> @@ -814,6 +848,7 @@ static const struct drm_i915_gem_object_ops i915_gem_ttm_obj_ops = {
>   	.truncate = i915_ttm_purge,
>   	.adjust_lru = i915_ttm_adjust_lru,
>   	.delayed_free = i915_ttm_delayed_free,
> +	.migrate = i915_ttm_migrate,
>   	.mmap_offset = i915_ttm_mmap_offset,
>   	.mmap_ops = &vm_ops_ttm,
>   };
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> index 1070d3afdce7..e9f7a8d9f0e2 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
> @@ -290,3 +290,22 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>   	i915_gem_object_put(obj);
>   	return ret;
>   }
> +
> +/**
> + * i915_gem_object_wait_migration - Sync an accelerated migration operation
> + * @obj: The migrating object.
> + * @flags: waiting flags. Currently supports only I915_WAIT_INTERRUPTIBLE.
> + *
> + * Wait for any pending async migration operation on the object,
> + * whether it's explicitly (i915_gem_object_migrate()) or implicitly
> + * (swapin, initial clearing) initiated.
> + *
> + * Return: 0 if successful, -ERESTARTSYS if a signal was hit during waiting.
> + */
> +int i915_gem_object_wait_migration(struct drm_i915_gem_object *obj,
> +				   unsigned int flags)
> +{
> +	might_sleep();
> +	/* NOP for now. */
> +	return 0;
> +}
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-24 18:31   ` [Intel-gfx] " Thomas Hellström
@ 2021-06-25 16:02     ` Ruhl, Michael J
  -1 siblings, 0 replies; 34+ messages in thread
From: Ruhl, Michael J @ 2021-06-25 16:02 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel; +Cc: Auld, Matthew

>-----Original Message-----
>From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
>Thomas Hellström
>Sent: Thursday, June 24, 2021 2:31 PM
>To: intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>; Auld, Matthew
><matthew.auld@intel.com>
>Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
>
>Until we support p2p dma or as a complement to that, migrate data
>to system memory at dma-buf map time if possible.
>
>Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>---
> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
>diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>index 616c3a2f1baf..a52f885bc09a 100644
>--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>@@ -25,7 +25,14 @@ static struct sg_table *i915_gem_map_dma_buf(struct
>dma_buf_attachment *attachme
> 	struct scatterlist *src, *dst;
> 	int ret, i;
>
>-	ret = i915_gem_object_pin_pages_unlocked(obj);
>+	ret = i915_gem_object_lock_interruptible(obj, NULL);

Hmm, I believe in most cases that the caller should be holding the
lock (object dma-resv) on this object already.

I know for the dynamic version of dma-buf, there is a check to make
sure that the lock is held when called.

I think you will run into some issues if you try to get it here as well.
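
(For reference, the check I mean is roughly this one in dma_buf_map_attachment() —
quoting from memory, so treat it as a sketch rather than the exact upstream code:

	if (dma_buf_attachment_is_dynamic(attach))
		dma_resv_assert_held(attach->dmabuf->resv);

and taking the same lock again from the exporter's map callback would then
deadlock.)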

Mike

>+	if (ret)
>+		return ERR_PTR(ret);
>+
>+	ret = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
>+	if (!ret)
>+		ret = i915_gem_object_pin_pages(obj);
>+	i915_gem_object_unlock(obj);
> 	if (ret)
> 		goto err;
>
>--
>2.31.1


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-25 16:02     ` [Intel-gfx] " Ruhl, Michael J
@ 2021-06-25 16:17       ` Thomas Hellström
  -1 siblings, 0 replies; 34+ messages in thread
From: Thomas Hellström @ 2021-06-25 16:17 UTC (permalink / raw)
  To: Ruhl, Michael J, intel-gfx, dri-devel; +Cc: Auld, Matthew

Hi, Michael,

thanks for looking at this.

On 6/25/21 6:02 PM, Ruhl, Michael J wrote:
>> -----Original Message-----
>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
>> Thomas Hellström
>> Sent: Thursday, June 24, 2021 2:31 PM
>> To: intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>; Auld, Matthew
>> <matthew.auld@intel.com>
>> Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
>>
>> Until we support p2p dma or as a complement to that, migrate data
>> to system memory at dma-buf map time if possible.
>>
>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> ---
>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> index 616c3a2f1baf..a52f885bc09a 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> @@ -25,7 +25,14 @@ static struct sg_table *i915_gem_map_dma_buf(struct
>> dma_buf_attachment *attachme
>> 	struct scatterlist *src, *dst;
>> 	int ret, i;
>>
>> -	ret = i915_gem_object_pin_pages_unlocked(obj);
>> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
> Hmm, I believe in most cases that the caller should be holding the
> lock (object dma-resv) on this object already.

Yes, I agree. In particular for other instances of our own driver, at
least since the dma_resv introduction.

But I also think that's a pre-existing bug, since 
i915_gem_object_pin_pages_unlocked() will also take the lock.

I think we need to initially make the exporter dynamic-capable to
resolve this, and drop the locking here completely, as the dma-buf docs say
that we're then guaranteed to get called with the object lock held.

I figure if we make the exporter dynamic, we need to migrate already at 
dma_buf_pin time so we don't pin the object in the wrong location.
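
Roughly what I have in mind for that pin() callback — an untested sketch only,
where i915_gem_dmabuf_pin() is a made-up name and I'm assuming the dma-buf core
guarantees the object lock is held when it's called:

static int i915_gem_dmabuf_pin(struct dma_buf_attachment *attach)
{
	struct drm_i915_gem_object *obj = dma_buf_to_obj(attach->dmabuf);
	int ret;

	assert_object_held(obj);

	/* Make sure the exported pages end up in system memory before pinning. */
	ret = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
	if (ret)
		return ret;

	return i915_gem_object_pin_pages(obj);
}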

/Thomas


>
> I know for the dynamic version of dma-buf, there is a check to make
> sure that the lock is held when called.
>
> I think you will run into some issues if you try to get it here as well.
>
> Mike
>
>> +	if (ret)
>> +		return ERR_PTR(ret);
>> +
>> +	ret = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
>> +	if (!ret)
>> +		ret = i915_gem_object_pin_pages(obj);
>> +	i915_gem_object_unlock(obj);
>> 	if (ret)
>> 		goto err;
>>
>> --
>> 2.31.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-25 16:17       ` [Intel-gfx] " Thomas Hellström
@ 2021-06-25 17:38         ` Ruhl, Michael J
  -1 siblings, 0 replies; 34+ messages in thread
From: Ruhl, Michael J @ 2021-06-25 17:38 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel; +Cc: Auld, Matthew

>-----Original Message-----
>From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>Sent: Friday, June 25, 2021 12:18 PM
>To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>Cc: Auld, Matthew <matthew.auld@intel.com>
>Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>time
>
>Hi, Michael,
>
>thanks for looking at this.
>
>On 6/25/21 6:02 PM, Ruhl, Michael J wrote:
>>> -----Original Message-----
>>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
>>> Thomas Hellström
>>> Sent: Thursday, June 24, 2021 2:31 PM
>>> To: intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>; Auld,
>Matthew
>>> <matthew.auld@intel.com>
>>> Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>time
>>>
>>> Until we support p2p dma or as a complement to that, migrate data
>>> to system memory at dma-buf map time if possible.
>>>
>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> ---
>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>> index 616c3a2f1baf..a52f885bc09a 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>> @@ -25,7 +25,14 @@ static struct sg_table
>*i915_gem_map_dma_buf(struct
>>> dma_buf_attachment *attachme
>>> 	struct scatterlist *src, *dst;
>>> 	int ret, i;
>>>
>>> -	ret = i915_gem_object_pin_pages_unlocked(obj);
>>> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
>> Hmm, I believe in most cases that the caller should be holding the
>> lock (object dma-resv) on this object already.
>
>Yes, I agree, In particular for other instances of our own driver,  at
>least since the dma_resv introduction.
>
>But I also think that's a pre-existing bug, since
>i915_gem_object_pin_pages_unlocked() will also take the lock.

Ouch yes.  Missed that.

>I Think we need to initially make the exporter dynamic-capable to
>resolve this, and drop the locking here completely, as dma-buf docs says
>that we're then guaranteed to get called with the object lock held.
>
>I figure if we make the exporter dynamic, we need to migrate already at
>dma_buf_pin time so we don't pin the object in the wrong location.

Making the exporter dynamic (ops->pin is available) is optional, but a
dynamic importer (ops->move_notify) is required.

With that in mind, it would seem that there are three possible combinations
for the migrate to be attempted:

1) in the ops->pin function (export_dynamic != import_dynamic), during attach
2) in the ops->pin function (export_dynamic and !CONFIG_DMABUF_MOVE_NOTIFY), during mapping
3) and possibly in ops->map_dma_buf (export_dynamic and CONFIG_DMABUF_MOVE_NOTIFY)

Since one possibility has to be in the mapping function, it seems that if we
can figure out the locking, the migrate should probably be available here.

Mike


>/Thomas
>
>
>>
>> I know for the dynamic version of dma-buf, there is a check to make
>> sure that the lock is held when called.
>>
>> I think you will run into some issues if you try to get it here as well.
>>
>> Mike
>>
>>> +	if (ret)
>>> +		return ERR_PTR(ret);
>>> +
>>> +	ret = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
>>> +	if (!ret)
>>> +		ret = i915_gem_object_pin_pages(obj);
>>> +	i915_gem_object_unlock(obj);
>>> 	if (ret)
>>> 		goto err;
>>>
>>> --
>>> 2.31.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-25 17:38         ` [Intel-gfx] " Ruhl, Michael J
@ 2021-06-25 17:52           ` Thomas Hellström
  -1 siblings, 0 replies; 34+ messages in thread
From: Thomas Hellström @ 2021-06-25 17:52 UTC (permalink / raw)
  To: Ruhl, Michael J, intel-gfx, dri-devel; +Cc: Auld, Matthew


On 6/25/21 7:38 PM, Ruhl, Michael J wrote:
>> -----Original Message-----
>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Sent: Friday, June 25, 2021 12:18 PM
>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>> Cc: Auld, Matthew <matthew.auld@intel.com>
>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>> time
>>
>> Hi, Michael,
>>
>> thanks for looking at this.
>>
>> On 6/25/21 6:02 PM, Ruhl, Michael J wrote:
>>>> -----Original Message-----
>>>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
>>>> Thomas Hellström
>>>> Sent: Thursday, June 24, 2021 2:31 PM
>>>> To: intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>; Auld,
>> Matthew
>>>> <matthew.auld@intel.com>
>>>> Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>> time
>>>> Until we support p2p dma or as a complement to that, migrate data
>>>> to system memory at dma-buf map time if possible.
>>>>
>>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>> ---
>>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
>>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>> index 616c3a2f1baf..a52f885bc09a 100644
>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>> @@ -25,7 +25,14 @@ static struct sg_table
>> *i915_gem_map_dma_buf(struct
>>>> dma_buf_attachment *attachme
>>>> 	struct scatterlist *src, *dst;
>>>> 	int ret, i;
>>>>
>>>> -	ret = i915_gem_object_pin_pages_unlocked(obj);
>>>> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
>>> Hmm, I believe in most cases that the caller should be holding the
>>> lock (object dma-resv) on this object already.
>> Yes, I agree, In particular for other instances of our own driver,  at
>> least since the dma_resv introduction.
>>
>> But I also think that's a pre-existing bug, since
>> i915_gem_object_pin_pages_unlocked() will also take the lock.
> Ouch yes.  Missed that.
>
>> I Think we need to initially make the exporter dynamic-capable to
>> resolve this, and drop the locking here completely, as dma-buf docs says
>> that we're then guaranteed to get called with the object lock held.
>>
>> I figure if we make the exporter dynamic, we need to migrate already at
>> dma_buf_pin time so we don't pin the object in the wrong location.
> The exporter as dynamic  (ops->pin is available) is optional, but importer
> dynamic (ops->move_notify) is required.
>
> With that in mind, it would seem that there are three possible combinations
> for the migrate to be attempted:
>
> 1) in the ops->pin function (export_dynamic != import_dynamic, during attach)
> 2) in the ops->pin function (export_dynamic and !CONFIG_DMABUF_MOVE_NOTIFY) during mapping
> 3) and possibly in ops->map_dma_buf (exort_dynamic iand CONFIG_DMABUF_MOVE_NOTIFY)
>
> Since one possibility has to be in the mapping function, it seems that if we
> can figure out the locking, that the migrate should probably be available here.
>
> Mike

So perhaps, just to initially fix the bug, we could implement NOP
pin() and unpin() callbacks, drop the locking in map_attach() and
replace it with an assert_object_held();
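
Something along these lines — again just a sketch with made-up names for the
NOP callbacks, and assuming the dma-buf core then calls our map callback with
the reservation lock already held (so the migrate stays in the map path for
now, unlike what I suggested earlier):

static int i915_gem_dmabuf_pin(struct dma_buf_attachment *attach)
{
	/* We pin at map time and never evict for now, so nothing to do here. */
	return 0;
}

static void i915_gem_dmabuf_unpin(struct dma_buf_attachment *attach)
{
}

and in i915_gem_map_dma_buf(), instead of taking the lock ourselves:

	assert_object_held(obj);

	ret = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
	if (!ret)
		ret = i915_gem_object_pin_pages(obj);
	if (ret)
		goto err;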

/Thomas



^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-25 17:52           ` [Intel-gfx] " Thomas Hellström
@ 2021-06-25 17:57             ` Ruhl, Michael J
  -1 siblings, 0 replies; 34+ messages in thread
From: Ruhl, Michael J @ 2021-06-25 17:57 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel; +Cc: Auld, Matthew

>-----Original Message-----
>From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>Sent: Friday, June 25, 2021 1:52 PM
>To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>Cc: Auld, Matthew <matthew.auld@intel.com>
>Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>time
>
>
>On 6/25/21 7:38 PM, Ruhl, Michael J wrote:
>>> -----Original Message-----
>>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> Sent: Friday, June 25, 2021 12:18 PM
>>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>> Cc: Auld, Matthew <matthew.auld@intel.com>
>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>map
>>> time
>>>
>>> Hi, Michael,
>>>
>>> thanks for looking at this.
>>>
>>> On 6/25/21 6:02 PM, Ruhl, Michael J wrote:
>>>>> -----Original Message-----
>>>>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf
>Of
>>>>> Thomas Hellström
>>>>> Sent: Thursday, June 24, 2021 2:31 PM
>>>>> To: intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>; Auld,
>>> Matthew
>>>>> <matthew.auld@intel.com>
>>>>> Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>>> time
>>>>> Until we support p2p dma or as a complement to that, migrate data
>>>>> to system memory at dma-buf map time if possible.
>>>>>
>>>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>> ---
>>>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
>>>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>> index 616c3a2f1baf..a52f885bc09a 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>> @@ -25,7 +25,14 @@ static struct sg_table
>>> *i915_gem_map_dma_buf(struct
>>>>> dma_buf_attachment *attachme
>>>>> 	struct scatterlist *src, *dst;
>>>>> 	int ret, i;
>>>>>
>>>>> -	ret = i915_gem_object_pin_pages_unlocked(obj);
>>>>> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
>>>> Hmm, I believe in most cases that the caller should be holding the
>>>> lock (object dma-resv) on this object already.
>>> Yes, I agree, In particular for other instances of our own driver,  at
>>> least since the dma_resv introduction.
>>>
>>> But I also think that's a pre-existing bug, since
>>> i915_gem_object_pin_pages_unlocked() will also take the lock.
>> Ouch yes.  Missed that.
>>
>>> I Think we need to initially make the exporter dynamic-capable to
>>> resolve this, and drop the locking here completely, as dma-buf docs says
>>> that we're then guaranteed to get called with the object lock held.
>>>
>>> I figure if we make the exporter dynamic, we need to migrate already at
>>> dma_buf_pin time so we don't pin the object in the wrong location.
>> The exporter as dynamic  (ops->pin is available) is optional, but importer
>> dynamic (ops->move_notify) is required.
>>
>> With that in mind, it would seem that there are three possible combinations
>> for the migrate to be attempted:
>>
>> 1) in the ops->pin function (export_dynamic != import_dynamic, during
>attach)
>> 2) in the ops->pin function (export_dynamic and
>!CONFIG_DMABUF_MOVE_NOTIFY) during mapping
>> 3) and possibly in ops->map_dma_buf (exort_dynamic iand
>CONFIG_DMABUF_MOVE_NOTIFY)
>>
>> Since one possibility has to be in the mapping function, it seems that if we
>> can figure out the locking, that the migrate should probably be available
>here.
>>
>> Mike
>
>So perhaps just to initially fix the bug, we could just implement NOP
>pin() and unpin() callbacks and drop the locking in map_attach() and
>replace it with an assert_object_held();

That is the sticky part of the move notify API.

If you do the attach_dynamic you have to have an ops with move_notify.

(https://elixir.bootlin.com/linux/v5.13-rc7/source/drivers/dma-buf/dma-buf.c#L730)

If you don't have that, i.e. just the pin interface, the attach will be
rejected, and you will not get the callbacks.

So I think that the only thing we can do for now is to drop the locking and add the

assert_object_held();

M

>/Thomas
>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-25 17:57             ` [Intel-gfx] " Ruhl, Michael J
@ 2021-06-25 18:49               ` Thomas Hellström
  -1 siblings, 0 replies; 34+ messages in thread
From: Thomas Hellström @ 2021-06-25 18:49 UTC (permalink / raw)
  To: Ruhl, Michael J, intel-gfx, dri-devel; +Cc: Auld, Matthew

Hi, Mike,

On 6/25/21 7:57 PM, Ruhl, Michael J wrote:
>> -----Original Message-----
>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Sent: Friday, June 25, 2021 1:52 PM
>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>> Cc: Auld, Matthew <matthew.auld@intel.com>
>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>> time
>>
>>
>> On 6/25/21 7:38 PM, Ruhl, Michael J wrote:
>>>> -----Original Message-----
>>>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>> Sent: Friday, June 25, 2021 12:18 PM
>>>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>>>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>> Cc: Auld, Matthew <matthew.auld@intel.com>
>>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>> map
>>>> time
>>>>
>>>> Hi, Michael,
>>>>
>>>> thanks for looking at this.
>>>>
>>>> On 6/25/21 6:02 PM, Ruhl, Michael J wrote:
>>>>>> -----Original Message-----
>>>>>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf
>> Of
>>>>>> Thomas Hellström
>>>>>> Sent: Thursday, June 24, 2021 2:31 PM
>>>>>> To: intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>; Auld,
>>>> Matthew
>>>>>> <matthew.auld@intel.com>
>>>>>> Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>>>> time
>>>>>> Until we support p2p dma or as a complement to that, migrate data
>>>>>> to system memory at dma-buf map time if possible.
>>>>>>
>>>>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>>> ---
>>>>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
>>>>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>> index 616c3a2f1baf..a52f885bc09a 100644
>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>> @@ -25,7 +25,14 @@ static struct sg_table
>>>> *i915_gem_map_dma_buf(struct
>>>>>> dma_buf_attachment *attachme
>>>>>> 	struct scatterlist *src, *dst;
>>>>>> 	int ret, i;
>>>>>>
>>>>>> -	ret = i915_gem_object_pin_pages_unlocked(obj);
>>>>>> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
>>>>> Hmm, I believe in most cases that the caller should be holding the
>>>>> lock (object dma-resv) on this object already.
>>>> Yes, I agree, In particular for other instances of our own driver,  at
>>>> least since the dma_resv introduction.
>>>>
>>>> But I also think that's a pre-existing bug, since
>>>> i915_gem_object_pin_pages_unlocked() will also take the lock.
>>> Ouch yes.  Missed that.
>>>
>>>> I Think we need to initially make the exporter dynamic-capable to
>>>> resolve this, and drop the locking here completely, as dma-buf docs says
>>>> that we're then guaranteed to get called with the object lock held.
>>>>
>>>> I figure if we make the exporter dynamic, we need to migrate already at
>>>> dma_buf_pin time so we don't pin the object in the wrong location.
>>> The exporter as dynamic  (ops->pin is available) is optional, but importer
>>> dynamic (ops->move_notify) is required.
>>>
>>> With that in mind, it would seem that there are three possible combinations
>>> for the migrate to be attempted:
>>>
>>> 1) in the ops->pin function (export_dynamic != import_dynamic, during
>> attach)
>>> 2) in the ops->pin function (export_dynamic and
>> !CONFIG_DMABUF_MOVE_NOTIFY) during mapping
>>> 3) and possibly in ops->map_dma_buf (exort_dynamic iand
>> CONFIG_DMABUF_MOVE_NOTIFY)
>>> Since one possibility has to be in the mapping function, it seems that if we
>>> can figure out the locking, that the migrate should probably be available
>> here.
>>> Mike
>> So perhaps just to initially fix the bug, we could just implement NOP
>> pin() and unpin() callbacks and drop the locking in map_attach() and
>> replace it with an assert_object_held();
> That is the sticky part of the move notify API.
>
> If you do the attach_dynamic you have to have an ops with move_notify.
>
> (https://elixir.bootlin.com/linux/v5.13-rc7/source/drivers/dma-buf/dma-buf.c#L730)
>
> If you don't have that, i.e. just the pin interface, the attach will be
> rejected, and you will not get the callbacks.

I understood that the requirement for move_notify only applies if the
*importer* declares itself dynamic. A dynamic exporter could choose whether to
call move_notify() on eviction or to pin and never evict. If the
importer is non-dynamic, the core calls pin() and the only choice is to
pin and never evict.

So if we temporarily choose to pin and never evict for *everything* (as
the current code does now), I think we should be good for now, and then
we can implement all the fancy p2p and move_notify stuff on top of that.
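
Hooking that up should then be little more than adding the (for now trivial)
pin() and unpin() callbacks to our exporter ops — sketch only, with the
existing callbacks elided and field placement from memory:

static const struct dma_buf_ops i915_dmabuf_ops = {
	/* ...existing map/unmap/mmap/vmap callbacks unchanged... */
	.pin = i915_gem_dmabuf_pin,	/* pin and never evict, for now */
	.unpin = i915_gem_dmabuf_unpin,
};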

/Thomas


>
> So I think that the only thing we can do for now is to dop the locking and add the
>
> assert_object_held();
>
> M



>
>> /Thomas
>>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-25 18:49               ` [Intel-gfx] " Thomas Hellström
@ 2021-06-25 19:07                 ` Ruhl, Michael J
  -1 siblings, 0 replies; 34+ messages in thread
From: Ruhl, Michael J @ 2021-06-25 19:07 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel; +Cc: Auld, Matthew

>-----Original Message-----
>From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>Sent: Friday, June 25, 2021 2:50 PM
>To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>Cc: Auld, Matthew <matthew.auld@intel.com>
>Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>time
>
>Hi, Mike,
>
>On 6/25/21 7:57 PM, Ruhl, Michael J wrote:
>>> -----Original Message-----
>>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> Sent: Friday, June 25, 2021 1:52 PM
>>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>> Cc: Auld, Matthew <matthew.auld@intel.com>
>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>map
>>> time
>>>
>>>
>>> On 6/25/21 7:38 PM, Ruhl, Michael J wrote:
>>>>> -----Original Message-----
>>>>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>> Sent: Friday, June 25, 2021 12:18 PM
>>>>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>>>>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>>> Cc: Auld, Matthew <matthew.auld@intel.com>
>>>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>>> map
>>>>> time
>>>>>
>>>>> Hi, Michael,
>>>>>
>>>>> thanks for looking at this.
>>>>>
>>>>> On 6/25/21 6:02 PM, Ruhl, Michael J wrote:
>>>>>>> -----Original Message-----
>>>>>>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On
>Behalf
>>> Of
>>>>>>> Thomas Hellström
>>>>>>> Sent: Thursday, June 24, 2021 2:31 PM
>>>>>>> To: intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>>>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>; Auld,
>>>>> Matthew
>>>>>>> <matthew.auld@intel.com>
>>>>>>> Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>map
>>>>> time
>>>>>>> Until we support p2p dma or as a complement to that, migrate data
>>>>>>> to system memory at dma-buf map time if possible.
>>>>>>>
>>>>>>> Signed-off-by: Thomas Hellström
><thomas.hellstrom@linux.intel.com>
>>>>>>> ---
>>>>>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
>>>>>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>> index 616c3a2f1baf..a52f885bc09a 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>> @@ -25,7 +25,14 @@ static struct sg_table
>>>>> *i915_gem_map_dma_buf(struct
>>>>>>> dma_buf_attachment *attachme
>>>>>>> 	struct scatterlist *src, *dst;
>>>>>>> 	int ret, i;
>>>>>>>
>>>>>>> -	ret = i915_gem_object_pin_pages_unlocked(obj);
>>>>>>> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
>>>>>> Hmm, I believe in most cases that the caller should be holding the
>>>>>> lock (object dma-resv) on this object already.
>>>>> Yes, I agree, In particular for other instances of our own driver,  at
>>>>> least since the dma_resv introduction.
>>>>>
>>>>> But I also think that's a pre-existing bug, since
>>>>> i915_gem_object_pin_pages_unlocked() will also take the lock.
>>>> Ouch yes.  Missed that.
>>>>
>>>>> I think we need to initially make the exporter dynamic-capable to
>>>>> resolve this, and drop the locking here completely, as dma-buf docs says
>>>>> that we're then guaranteed to get called with the object lock held.
>>>>>
>>>>> I figure if we make the exporter dynamic, we need to migrate already at
>>>>> dma_buf_pin time so we don't pin the object in the wrong location.
>>>> The exporter as dynamic  (ops->pin is available) is optional, but importer
>>>> dynamic (ops->move_notify) is required.
>>>>
>>>> With that in mind, it would seem that there are three possible
>combinations
>>>> for the migrate to be attempted:
>>>>
>>>> 1) in the ops->pin function (export_dynamic != import_dynamic, during
>>> attach)
>>>> 2) in the ops->pin function (export_dynamic and
>>> !CONFIG_DMABUF_MOVE_NOTIFY) during mapping
>>>> 3) and possibly in ops->map_dma_buf (export_dynamic and
>>> CONFIG_DMABUF_MOVE_NOTIFY)
>>>> Since one possibility has to be in the mapping function, it seems that if we
>>>> can figure out the locking, that the migrate should probably be available
>>> here.
>>>> Mike
>>> So perhaps just to initially fix the bug, we could just implement NOP
>>> pin() and unpin() callbacks and drop the locking in map_attach() and
>>> replace it with an assert_object_held();
>> That is the sticky part of the move notify API.
>>
>> If you do the attach_dynamic you have to have an ops with move_notify.
>>
>> (https://elixir.bootlin.com/linux/v5.13-rc7/source/drivers/dma-buf/dma-
>buf.c#L730)
>>
>> If you don't have that, i.e. just the pin interface, the attach will be
>> rejected, and you will not get the callbacks.
>
>I understood that as the requirement for move_notify is only if the
>*importer* declares dynamic. A dynamic exporter could choose whether to
>call move_notify() on eviction or to pin and never evict. If the
>importer is non-dynamic, the core calls pin() and the only choice is to
>pin and never evict.
>
>So if we temporarily choose to pin and never evict for *everything*, (as
>the current code does now), I think we should be good for now, and then
>we can implement all fancy p2p and move_notify stuff on top of that.

/sigh.

You are correct.  I was mistakenly placing the pin API (dma_buf_ops) in the
attach_ops. 😐 Must be Friday.

Upon further reflection, I think that your path will work.

However, is doing a pin (with no locking) from the dma_buf_mapping any different
from using the pin API + export_dynamic?

M

>/Thomas
>
>
>>
>> So I think that the only thing we can do for now is to drop the locking and add
>the
>>
>> assert_object_held();
>>
>> M
>
>
>
>>
>>> /Thomas
>>>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-25 19:07                 ` [Intel-gfx] " Ruhl, Michael J
@ 2021-06-25 19:10                   ` Thomas Hellström
  -1 siblings, 0 replies; 34+ messages in thread
From: Thomas Hellström @ 2021-06-25 19:10 UTC (permalink / raw)
  To: Ruhl, Michael J, intel-gfx, dri-devel; +Cc: Auld, Matthew


On 6/25/21 9:07 PM, Ruhl, Michael J wrote:
>> -----Original Message-----
>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Sent: Friday, June 25, 2021 2:50 PM
>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>> Cc: Auld, Matthew <matthew.auld@intel.com>
>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>> time
>>
>> Hi, Mike,
>>
>> On 6/25/21 7:57 PM, Ruhl, Michael J wrote:
>>>> -----Original Message-----
>>>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>> Sent: Friday, June 25, 2021 1:52 PM
>>>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>>>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>> Cc: Auld, Matthew <matthew.auld@intel.com>
>>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>> map
>>>> time
>>>>
>>>>
>>>> On 6/25/21 7:38 PM, Ruhl, Michael J wrote:
>>>>>> -----Original Message-----
>>>>>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>>> Sent: Friday, June 25, 2021 12:18 PM
>>>>>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>>>>>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>>>> Cc: Auld, Matthew <matthew.auld@intel.com>
>>>>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>>>> map
>>>>>> time
>>>>>>
>>>>>> Hi, Michael,
>>>>>>
>>>>>> thanks for looking at this.
>>>>>>
>>>>>> On 6/25/21 6:02 PM, Ruhl, Michael J wrote:
>>>>>>>> -----Original Message-----
>>>>>>>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On
>> Behalf
>>>> Of
>>>>>>>> Thomas Hellström
>>>>>>>> Sent: Thursday, June 24, 2021 2:31 PM
>>>>>>>> To: intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>>>>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>; Auld,
>>>>>> Matthew
>>>>>>>> <matthew.auld@intel.com>
>>>>>>>> Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>> map
>>>>>> time
>>>>>>>> Until we support p2p dma or as a complement to that, migrate data
>>>>>>>> to system memory at dma-buf map time if possible.
>>>>>>>>
>>>>>>>> Signed-off-by: Thomas Hellström
>> <thomas.hellstrom@linux.intel.com>
>>>>>>>> ---
>>>>>>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
>>>>>>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>> index 616c3a2f1baf..a52f885bc09a 100644
>>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>> @@ -25,7 +25,14 @@ static struct sg_table
>>>>>> *i915_gem_map_dma_buf(struct
>>>>>>>> dma_buf_attachment *attachme
>>>>>>>> 	struct scatterlist *src, *dst;
>>>>>>>> 	int ret, i;
>>>>>>>>
>>>>>>>> -	ret = i915_gem_object_pin_pages_unlocked(obj);
>>>>>>>> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
>>>>>>> Hmm, I believe in most cases that the caller should be holding the
>>>>>>> lock (object dma-resv) on this object already.
>>>>>> Yes, I agree, In particular for other instances of our own driver,  at
>>>>>> least since the dma_resv introduction.
>>>>>>
>>>>>> But I also think that's a pre-existing bug, since
>>>>>> i915_gem_object_pin_pages_unlocked() will also take the lock.
>>>>> Ouch yes.  Missed that.
>>>>>
>>>>>> I think we need to initially make the exporter dynamic-capable to
>>>>>> resolve this, and drop the locking here completely, as dma-buf docs says
>>>>>> that we're then guaranteed to get called with the object lock held.
>>>>>>
>>>>>> I figure if we make the exporter dynamic, we need to migrate already at
>>>>>> dma_buf_pin time so we don't pin the object in the wrong location.
>>>>> The exporter as dynamic  (ops->pin is available) is optional, but importer
>>>>> dynamic (ops->move_notify) is required.
>>>>>
>>>>> With that in mind, it would seem that there are three possible
>> combinations
>>>>> for the migrate to be attempted:
>>>>>
>>>>> 1) in the ops->pin function (export_dynamic != import_dynamic, during
>>>> attach)
>>>>> 2) in the ops->pin function (export_dynamic and
>>>> !CONFIG_DMABUF_MOVE_NOTIFY) during mapping
>>>>> 3) and possibly in ops->map_dma_buf (export_dynamic and
>>>> CONFIG_DMABUF_MOVE_NOTIFY)
>>>>> Since one possibility has to be in the mapping function, it seems that if we
>>>>> can figure out the locking, that the migrate should probably be available
>>>> here.
>>>>> Mike
>>>> So perhaps just to initially fix the bug, we could just implement NOP
>>>> pin() and unpin() callbacks and drop the locking in map_attach() and
>>>> replace it with an assert_object_held();
>>> That is the sticky part of the move notify API.
>>>
>>> If you do the attach_dynamic you have to have an ops with move_notify.
>>>
>>> (https://elixir.bootlin.com/linux/v5.13-rc7/source/drivers/dma-buf/dma-
>> buf.c#L730)
>>> If you don't have that, i.e. just the pin interface, the attach will be
>>> rejected, and you will not get the callbacks.
>> I understood that as the requirement for move_notify is only if the
>> *importer* declares dynamic. A dynamic exporter could choose whether to
>> call move_notify() on eviction or to pin and never evict. If the
>> importer is non-dynamic, the core calls pin() and the only choice is to
>> pin and never evict.
>>
>> So if we temporarily choose to pin and never evict for *everything*, (as
>> the current code does now), I think we should be good for now, and then
>> we can implement all fancy p2p and move_notify stuff on top of that.
> /sigh.
>
> You are correct.  I was mistakenly placing the pin API (dma_buf_ops) in the
> attach_ops. 😐 Must be Friday.
>
> Upon further reflection, I think that your path will work.
>
> However, is doing a pin (with no locking) from the dma_buf_mapping any different
> from using the pin API + export_dynamic?
>
> M

Yes, it's different, but only for dynamic importers, which would otherwise
never pin; we could then mistakenly evict the object without having
implemented the move_notify calls. If we pin, we never evict.
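
To make that concrete, the exporter side could look roughly like below
(untested sketch against i915_gem_dmabuf.c; the i915_gem_dmabuf_pin/unpin
names are made up, the helpers are the ones that already exist there):

static int i915_gem_dmabuf_pin(struct dma_buf_attachment *attach)
{
        struct drm_i915_gem_object *obj = dma_buf_to_obj(attach->dmabuf);

        /* The dma-buf core calls pin() with the object's dma_resv held. */
        assert_object_held(obj);

        return i915_gem_object_pin_pages(obj);
}

static void i915_gem_dmabuf_unpin(struct dma_buf_attachment *attach)
{
        struct drm_i915_gem_object *obj = dma_buf_to_obj(attach->dmabuf);

        i915_gem_object_unpin_pages(obj);
}

i915_gem_map_dma_buf() would then drop its own locking and pinning and just
do an assert_object_held(obj), since with a dynamic exporter the core
guarantees the reservation lock is held when map_dma_buf() is called.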

/Thomas



>> /Thomas
>>
>>
>>> So I think that the only thing we can do for now is to drop the locking and add
>> the
>>> assert_object_held();
>>>
>>> M
>>
>>
>>>> /Thomas
>>>>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-25 19:10                   ` [Intel-gfx] " Thomas Hellström
@ 2021-06-25 19:21                     ` Ruhl, Michael J
  -1 siblings, 0 replies; 34+ messages in thread
From: Ruhl, Michael J @ 2021-06-25 19:21 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel; +Cc: Auld, Matthew



>-----Original Message-----
>From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>Sent: Friday, June 25, 2021 3:10 PM
>To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>Cc: Auld, Matthew <matthew.auld@intel.com>
>Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map
>time
>
>
>On 6/25/21 9:07 PM, Ruhl, Michael J wrote:
>>> -----Original Message-----
>>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> Sent: Friday, June 25, 2021 2:50 PM
>>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>> Cc: Auld, Matthew <matthew.auld@intel.com>
>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>map
>>> time
>>>
>>> Hi, Mike,
>>>
>>> On 6/25/21 7:57 PM, Ruhl, Michael J wrote:
>>>>> -----Original Message-----
>>>>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>> Sent: Friday, June 25, 2021 1:52 PM
>>>>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>>>>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>>> Cc: Auld, Matthew <matthew.auld@intel.com>
>>>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>>> map
>>>>> time
>>>>>
>>>>>
>>>>> On 6/25/21 7:38 PM, Ruhl, Michael J wrote:
>>>>>>> -----Original Message-----
>>>>>>> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>>>> Sent: Friday, June 25, 2021 12:18 PM
>>>>>>> To: Ruhl, Michael J <michael.j.ruhl@intel.com>; intel-
>>>>>>> gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>>>>>>> Cc: Auld, Matthew <matthew.auld@intel.com>
>>>>>>> Subject: Re: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-
>buf
>>>>> map
>>>>>>> time
>>>>>>>
>>>>>>> Hi, Michael,
>>>>>>>
>>>>>>> thanks for looking at this.
>>>>>>>
>>>>>>> On 6/25/21 6:02 PM, Ruhl, Michael J wrote:
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On
>>> Behalf
>>>>> Of
>>>>>>>>> Thomas Hellström
>>>>>>>>> Sent: Thursday, June 24, 2021 2:31 PM
>>>>>>>>> To: intel-gfx@lists.freedesktop.org; dri-
>devel@lists.freedesktop.org
>>>>>>>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>; Auld,
>>>>>>> Matthew
>>>>>>>>> <matthew.auld@intel.com>
>>>>>>>>> Subject: [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf
>>> map
>>>>>>> time
>>>>>>>>> Until we support p2p dma or as a complement to that, migrate data
>>>>>>>>> to system memory at dma-buf map time if possible.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Thomas Hellström
>>> <thomas.hellstrom@linux.intel.com>
>>>>>>>>> ---
>>>>>>>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++++++++-
>>>>>>>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>>> index 616c3a2f1baf..a52f885bc09a 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>>>>>>>>> @@ -25,7 +25,14 @@ static struct sg_table
>>>>>>> *i915_gem_map_dma_buf(struct
>>>>>>>>> dma_buf_attachment *attachme
>>>>>>>>> 	struct scatterlist *src, *dst;
>>>>>>>>> 	int ret, i;
>>>>>>>>>
>>>>>>>>> -	ret = i915_gem_object_pin_pages_unlocked(obj);
>>>>>>>>> +	ret = i915_gem_object_lock_interruptible(obj, NULL);
>>>>>>>> Hmm, I believe in most cases that the caller should be holding the
>>>>>>>> lock (object dma-resv) on this object already.
>>>>>>> Yes, I agree, In particular for other instances of our own driver,  at
>>>>>>> least since the dma_resv introduction.
>>>>>>>
>>>>>>> But I also think that's a pre-existing bug, since
>>>>>>> i915_gem_object_pin_pages_unlocked() will also take the lock.
>>>>>> Ouch yes.  Missed that.
>>>>>>
>>>>>>> I think we need to initially make the exporter dynamic-capable to
>>>>>>> resolve this, and drop the locking here completely, as dma-buf docs
>says
>>>>>>> that we're then guaranteed to get called with the object lock held.
>>>>>>>
>>>>>>> I figure if we make the exporter dynamic, we need to migrate already
>at
>>>>>>> dma_buf_pin time so we don't pin the object in the wrong location.
>>>>>> The exporter as dynamic  (ops->pin is available) is optional, but
>importer
>>>>>> dynamic (ops->move_notify) is required.
>>>>>>
>>>>>> With that in mind, it would seem that there are three possible
>>> combinations
>>>>>> for the migrate to be attempted:
>>>>>>
>>>>>> 1) in the ops->pin function (export_dynamic != import_dynamic,
>during
>>>>> attach)
>>>>>> 2) in the ops->pin function (export_dynamic and
>>>>> !CONFIG_DMABUF_MOVE_NOTIFY) during mapping
>>>>>> 3) and possibly in ops->map_dma_buf (export_dynamic and
>>>>> CONFIG_DMABUF_MOVE_NOTIFY)
>>>>>> Since one possibility has to be in the mapping function, it seems that if
>we
>>>>>> can figure out the locking, that the migrate should probably be
>available
>>>>> here.
>>>>>> Mike
>>>>> So perhaps just to initially fix the bug, we could just implement NOP
>>>>> pin() and unpin() callbacks and drop the locking in map_attach() and
>>>>> replace it with an assert_object_held();
>>>> That is the sticky part of the move notify API.
>>>>
>>>> If you do the attach_dynamic you have to have an ops with move_notify.
>>>>
>>>> (https://elixir.bootlin.com/linux/v5.13-rc7/source/drivers/dma-buf/dma-
>>> buf.c#L730)
>>>> If you don't have that, i.e. just the pin interface, the attach will be
>>>> rejected, and you will not get the callbacks.
>>> I understood that as the requirement for move_notify is only if the
>>> *importer* declares dynamic. A dynamic exporter could choose whether
>to
>>> call move_notify() on eviction or to pin and never evict. If the
>>> importer is non-dynamic, the core calls pin() and the only choice is to
>>> pin and never evict.
>>>
>>> So if we temporarily choose to pin and never evict for *everything*, (as
>>> the current code does now), I think we should be good for now, and then
>>> we can implement all fancy p2p and move_notify stuff on top of that.
>> /sigh.
>>
>> You are correct.  I was mistakenly placing the pin API (dma_buf_ops) in the
>> attach_ops. 😐 Must be Friday.
>>
>> Upon further reflection, I think that your path will work.
>>
>> However, is doing a pin (with no locking) from the dma_buf_mapping any
>different
>> from using the pin API + export_dynamic?
>>
>> M
>
>Yes, it's different, but only for dynamic importers, which would otherwise
>never pin; we could then mistakenly evict the object without having
>implemented the move_notify calls. If we pin, we never evict.

Ahh. Got it. That is an interesting nuance. I need to remember that
there are things other than i915... 😊

So that would definitely put the migrate code in the pin path.
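
i.e. something like this in the pin callback (rough sketch only; I am
guessing the helper from patch 1/4 ends up looking like
i915_gem_object_migrate(obj, ww, region), so the name/signature below may
be off):

static int i915_gem_dmabuf_pin(struct dma_buf_attachment *attach)
{
        struct drm_i915_gem_object *obj = dma_buf_to_obj(attach->dmabuf);
        int err;

        assert_object_held(obj);

        /* Hypothetical: migrate to system memory before the pages get
         * pinned in place; NULL ww context here for brevity, the real
         * helper may well want one. */
        err = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
        if (err)
                return err;

        return i915_gem_object_pin_pages(obj);
}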

M


>/Thomas
>
>
>
>>> /Thomas
>>>
>>>
>>>> So I think that the only thing we can do for now is to drop the locking and
>add
>>> the
>>>> assert_object_held();
>>>>
>>>> M
>>>
>>>
>>>>> /Thomas
>>>>>

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2021-06-25 19:21 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-24 18:31 [PATCH 0/4] drm/i915/gem: Introduce a migrate interface Thomas Hellström
2021-06-24 18:31 ` [Intel-gfx] " Thomas Hellström
2021-06-24 18:31 ` [PATCH 1/4] drm/i915/gem: Implement object migration Thomas Hellström
2021-06-24 18:31   ` [Intel-gfx] " Thomas Hellström
2021-06-25 10:51   ` Matthew Auld
2021-06-25 10:51     ` [Intel-gfx] " Matthew Auld
2021-06-24 18:31 ` [PATCH 2/4] drm/i915/gem: Introduce a selftest for the gem object migrate functionality Thomas Hellström
2021-06-24 18:31   ` [Intel-gfx] " Thomas Hellström
2021-06-24 18:31 ` [PATCH 3/4] drm/i915/display: Migrate objects to LMEM if possible for display Thomas Hellström
2021-06-24 18:31   ` [Intel-gfx] " Thomas Hellström
2021-06-24 18:31 ` [PATCH 4/4] drm/i915/gem: Migrate to system at dma-buf map time Thomas Hellström
2021-06-24 18:31   ` [Intel-gfx] " Thomas Hellström
2021-06-25 16:02   ` Ruhl, Michael J
2021-06-25 16:02     ` [Intel-gfx] " Ruhl, Michael J
2021-06-25 16:17     ` Thomas Hellström
2021-06-25 16:17       ` [Intel-gfx] " Thomas Hellström
2021-06-25 17:38       ` Ruhl, Michael J
2021-06-25 17:38         ` [Intel-gfx] " Ruhl, Michael J
2021-06-25 17:52         ` Thomas Hellström
2021-06-25 17:52           ` [Intel-gfx] " Thomas Hellström
2021-06-25 17:57           ` Ruhl, Michael J
2021-06-25 17:57             ` [Intel-gfx] " Ruhl, Michael J
2021-06-25 18:49             ` Thomas Hellström
2021-06-25 18:49               ` [Intel-gfx] " Thomas Hellström
2021-06-25 19:07               ` Ruhl, Michael J
2021-06-25 19:07                 ` [Intel-gfx] " Ruhl, Michael J
2021-06-25 19:10                 ` Thomas Hellström
2021-06-25 19:10                   ` [Intel-gfx] " Thomas Hellström
2021-06-25 19:21                   ` Ruhl, Michael J
2021-06-25 19:21                     ` [Intel-gfx] " Ruhl, Michael J
2021-06-24 22:02 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/gem: Introduce a migrate interface Patchwork
2021-06-24 22:04 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-06-24 22:33 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-06-25  6:53 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.