intel-gfx.lists.freedesktop.org archive mirror
* [Intel-gfx] [PATCH v2 0/5] drm/i915/gem: Introduce a migrate interface
@ 2021-06-28  9:09 Thomas Hellström
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 1/5] drm/i915/gem: Implement object migration Thomas Hellström
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Thomas Hellström @ 2021-06-28  9:09 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, matthew.auld

We want to be able to explicitly migrate objects between gem memory
regions, initially for display and dma-buf, but there might be more
use-cases coming up.

Introduce a gem migrate interface, add a selftest and use it for
display fb pinning and dma-buf mapping.

This series should make accelerated desktop work on DG1 with DG1-enabled
OpenGL.
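
For reference, the intended calling convention looks roughly like the
following. This is a minimal sketch based on the helpers added in patch
1 and the display path in patch 3; ww backoff is handled by the loop
macro and error unwinding is trimmed:

	struct i915_gem_ww_ctx ww;
	int err;

	for_i915_gem_ww(&ww, err, true) {
		err = i915_gem_object_lock(obj, &ww);
		if (err)
			continue;

		/* Migrate before pinning; pinned objects can't move. */
		err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_LMEM);
		if (err)
			continue;

		err = i915_gem_object_pin_pages(obj);
	}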

v2:
- Address review comments by Matthew Auld on patch 1/5. More details in
  the patch commit message.
- Address a dma-buf locking issue pointed out by Michael Ruhl, and
  add a selftest to catch that issue moving forward.
- Rebase the dma-buf migration patch on the above-mentioned fix.

Matthew Auld (1):
  drm/i915/gem: Introduce a selftest for the gem object migrate
    functionality

Thomas Hellström (4):
  drm/i915/gem: Implement object migration
  drm/i915/display: Migrate objects to LMEM if possible for display
  drm/i915/gem: Fix same-driver-another-instance dma-buf export
  drm/i915/gem: Migrate to system at dma-buf map time

 drivers/gpu/drm/i915/display/intel_display.c  |   5 +-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  36 ++-
 drivers/gpu/drm/i915/gem/i915_gem_domain.c    |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c      |  21 --
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  97 +++++++
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  12 +-
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   9 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       |  69 +++--
 drivers/gpu/drm/i915/gem/i915_gem_wait.c      |  19 ++
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  83 +++++-
 .../drm/i915/gem/selftests/i915_gem_migrate.c | 237 ++++++++++++++++++
 .../drm/i915/selftests/i915_live_selftests.h  |   1 +
 12 files changed, 546 insertions(+), 45 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c

-- 
2.31.1

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [Intel-gfx] [PATCH v2 1/5] drm/i915/gem: Implement object migration
  2021-06-28  9:09 [Intel-gfx] [PATCH v2 0/5] drm/i915/gem: Introduce a migrate interface Thomas Hellström
@ 2021-06-28  9:09 ` Thomas Hellström
  2021-06-28  9:47   ` Thomas Hellström
                     ` (2 more replies)
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 2/5] drm/i915/gem: Introduce a selftest for the gem object migrate functionality Thomas Hellström
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 9+ messages in thread
From: Thomas Hellström @ 2021-06-28  9:09 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, matthew.auld

Introduce an interface to migrate objects between regions.
This is primarily intended to migrate objects to LMEM for display and
to SYSTEM for dma-buf, but might be reused in one form or another for
performance-based migration.
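
Note that a successful i915_gem_object_migrate() may leave an async copy
in flight; a caller that needs the migration to have fully completed is
expected to follow up with i915_gem_object_wait_migration(), along these
lines (sketch only, error handling trimmed):

	err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
	if (!err)
		err = i915_gem_object_wait_migration(obj,
						     I915_WAIT_INTERRUPTIBLE);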

v2:
- Verify that the memory region given as an id really exists.
  (Reported by Matthew Auld)
- Call i915_gem_object_{init,release}_memory_region() when switching region
  to handle also switching region lists. (Reported by Matthew Auld)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    | 96 +++++++++++++++++++
 drivers/gpu/drm/i915/gem/i915_gem_object.h    | 12 +++
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  9 ++
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       | 69 +++++++++----
 drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 19 ++++
 5 files changed, 188 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 07e8ff9a8aae..52a37619054d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -513,6 +513,102 @@ bool i915_gem_object_has_iomem(const struct drm_i915_gem_object *obj)
 	return obj->mem_flags & I915_BO_FLAG_IOMEM;
 }
 
+/**
+ * i915_gem_object_can_migrate - Whether an object likely can be migrated
+ *
+ * @obj: The object to migrate
+ * @id: The region intended to migrate to
+ *
+ * Check whether the object backend supports migration to the
+ * given region. Note that pinning may affect the ability to migrate.
+ *
+ * Return: true if migration is possible, false otherwise.
+ */
+bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
+				 enum intel_region_id id)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	unsigned int num_allowed = obj->mm.n_placements;
+	struct intel_memory_region *mr;
+	unsigned int i;
+
+	GEM_BUG_ON(id >= INTEL_REGION_UNKNOWN);
+	GEM_BUG_ON(obj->mm.madv != I915_MADV_WILLNEED);
+
+	if (!obj->ops->migrate)
+		return false;
+
+	mr = i915->mm.regions[id];
+	if (!mr)
+		return false;
+
+	if (obj->mm.region == mr)
+		return true;
+
+	if (!i915_gem_object_evictable(obj))
+		return false;
+
+	if (!(obj->flags & I915_BO_ALLOC_USER))
+		return true;
+
+	if (num_allowed == 0)
+		return false;
+
+	for (i = 0; i < num_allowed; ++i) {
+		if (mr == obj->mm.placements[i])
+			return true;
+	}
+
+	return false;
+}
+
+/**
+ * i915_gem_object_migrate - Migrate an object to the desired region id
+ * @obj: The object to migrate.
+ * @ww: An optional struct i915_gem_ww_ctx. If NULL, the backend may
+ * not be successful in evicting other objects to make room for this object.
+ * @id: The region id to migrate to.
+ *
+ * Attempt to migrate the object to the desired memory region. The
+ * object backend must support migration and the object may not be
+ * pinned (explicitly pinned pages or pinned vmas). The object must
+ * be locked.
+ * On successful completion, the object will have pages pointing to
+ * memory in the new region, but an async migration task may not have
+ * completed yet, and to accomplish that, i915_gem_object_wait_migration()
+ * must be called.
+ *
+ * Return: 0 on success. Negative error code on failure. In particular may
+ * return -ENXIO on lack of region space, -EDEADLK for deadlock avoidance
+ * if @ww is set, -EINTR or -ERESTARTSYS if signal pending, and
+ * -EBUSY if the object is pinned.
+ */
+int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
+			    struct i915_gem_ww_ctx *ww,
+			    enum intel_region_id id)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct intel_memory_region *mr;
+
+	GEM_BUG_ON(id >= INTEL_REGION_UNKNOWN);
+	GEM_BUG_ON(obj->mm.madv != I915_MADV_WILLNEED);
+	assert_object_held(obj);
+
+	mr = i915->mm.regions[id];
+	GEM_BUG_ON(!mr);
+
+	if (obj->mm.region == mr)
+		return 0;
+
+	if (!i915_gem_object_evictable(obj))
+		return -EBUSY;
+
+	if (!obj->ops->migrate)
+		return -EOPNOTSUPP;
+
+	return obj->ops->migrate(obj, mr);
+}
+
 void i915_gem_init__objects(struct drm_i915_private *i915)
 {
 	INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index ea3224a480c4..8cbd7a5334e2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -17,6 +17,8 @@
 #include "i915_gem_ww.h"
 #include "i915_vma_types.h"
 
+enum intel_region_id;
+
 /*
  * XXX: There is a prevalence of the assumption that we fit the
  * object's page count inside a 32bit _signed_ variable. Let's document
@@ -597,6 +599,16 @@ bool i915_gem_object_migratable(struct drm_i915_gem_object *obj);
 
 bool i915_gem_object_validates_to_lmem(struct drm_i915_gem_object *obj);
 
+int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
+			    struct i915_gem_ww_ctx *ww,
+			    enum intel_region_id id);
+
+bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
+				 enum intel_region_id id);
+
+int i915_gem_object_wait_migration(struct drm_i915_gem_object *obj,
+				   unsigned int flags);
+
 #ifdef CONFIG_MMU_NOTIFIER
 static inline bool
 i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 441f913c87e6..ef3de2ae9723 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -18,6 +18,7 @@
 
 struct drm_i915_gem_object;
 struct intel_fronbuffer;
+struct intel_memory_region;
 
 /*
  * struct i915_lut_handle tracks the fast lookups from handle to vma used
@@ -77,6 +78,14 @@ struct drm_i915_gem_object_ops {
 	 * delayed_free - Override the default delayed free implementation
 	 */
 	void (*delayed_free)(struct drm_i915_gem_object *obj);
+
+	/**
+	 * migrate - Migrate object to a different region either for
+	 * pinning or for as long as the object lock is held.
+	 */
+	int (*migrate)(struct drm_i915_gem_object *obj,
+		       struct intel_memory_region *mr);
+
 	void (*release)(struct drm_i915_gem_object *obj);
 
 	const struct vm_operations_struct *mmap_ops;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index c39d982c4fa6..8f89185b6507 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -617,7 +617,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
 	return &i915_ttm_bo_driver;
 }
 
-static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
+static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
+				struct ttm_placement *placement)
 {
 	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
 	struct ttm_operation_ctx ctx = {
@@ -625,19 +626,12 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
 		.no_wait_gpu = false,
 	};
 	struct sg_table *st;
-	struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
-	struct ttm_placement placement;
 	int real_num_busy;
 	int ret;
 
-	GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS);
-
-	/* Move to the requested placement. */
-	i915_ttm_placement_from_obj(obj, &requested, busy, &placement);
-
 	/* First try only the requested placement. No eviction. */
-	real_num_busy = fetch_and_zero(&placement.num_busy_placement);
-	ret = ttm_bo_validate(bo, &placement, &ctx);
+	real_num_busy = fetch_and_zero(&placement->num_busy_placement);
+	ret = ttm_bo_validate(bo, placement, &ctx);
 	if (ret) {
 		ret = i915_ttm_err_to_gem(ret);
 		/*
@@ -652,8 +646,8 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
 		 * If the initial attempt fails, allow all accepted placements,
 		 * evicting if necessary.
 		 */
-		placement.num_busy_placement = real_num_busy;
-		ret = ttm_bo_validate(bo, &placement, &ctx);
+		placement->num_busy_placement = real_num_busy;
+		ret = ttm_bo_validate(bo, placement, &ctx);
 		if (ret)
 			return i915_ttm_err_to_gem(ret);
 	}
@@ -668,16 +662,56 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
 		i915_ttm_adjust_gem_after_move(obj);
 	}
 
-	/* Object either has a page vector or is an iomem object */
-	st = bo->ttm ? i915_ttm_tt_get_st(bo->ttm) : obj->ttm.cached_io_st;
-	if (IS_ERR(st))
-		return PTR_ERR(st);
+	if (!obj->mm.pages) {
+		/* Object either has a page vector or is an iomem object */
+		st = bo->ttm ? i915_ttm_tt_get_st(bo->ttm) : obj->ttm.cached_io_st;
+		if (IS_ERR(st))
+			return PTR_ERR(st);
 
-	__i915_gem_object_set_pages(obj, st, i915_sg_dma_sizes(st->sgl));
+		__i915_gem_object_set_pages(obj, st, i915_sg_dma_sizes(st->sgl));
+	}
 
 	return ret;
 }
 
+static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
+{
+	struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
+	struct ttm_placement placement;
+
+	GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS);
+
+	/* Move to the requested placement. */
+	i915_ttm_placement_from_obj(obj, &requested, busy, &placement);
+
+	return __i915_ttm_get_pages(obj, &placement);
+}
+
+static int i915_ttm_migrate(struct drm_i915_gem_object *obj,
+			    struct intel_memory_region *mr)
+{
+	struct ttm_place requested;
+	struct ttm_placement placement;
+	int ret;
+
+	i915_ttm_place_from_region(mr, &requested, obj->flags);
+	placement.num_placement = 1;
+	placement.num_busy_placement = 1;
+	placement.placement = &requested;
+	placement.busy_placement = &requested;
+
+	ret = __i915_ttm_get_pages(obj, &placement);
+	if (ret)
+		return ret;
+
+	if (obj->mm.region != mr) {
+		i915_gem_object_release_memory_region(obj);
+		i915_gem_object_init_memory_region(obj, mr);
+	}
+
+	return 0;
+}
+
 static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
 			       struct sg_table *st)
 {
@@ -814,6 +848,7 @@ static const struct drm_i915_gem_object_ops i915_gem_ttm_obj_ops = {
 	.truncate = i915_ttm_purge,
 	.adjust_lru = i915_ttm_adjust_lru,
 	.delayed_free = i915_ttm_delayed_free,
+	.migrate = i915_ttm_migrate,
 	.mmap_offset = i915_ttm_mmap_offset,
 	.mmap_ops = &vm_ops_ttm,
 };
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index 1070d3afdce7..e9f7a8d9f0e2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -290,3 +290,22 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	i915_gem_object_put(obj);
 	return ret;
 }
+
+/**
+ * i915_gem_object_wait_migrate - Sync an accelerated migration operation
+ * @obj: The migrating object.
+ * @flags: waiting flags. Currently supports only I915_WAIT_INTERRUPTIBLE.
+ *
+ * Wait for any pending async migration operation on the object,
+ * whether it's explicitly (i915_gem_object_migrate()) or implicitly
+ * (swapin, initial clearing) initiated.
+ *
+ * Return: 0 if successful, -ERESTARTSYS if a signal was hit during waiting.
+ */
+int i915_gem_object_wait_migration(struct drm_i915_gem_object *obj,
+				   unsigned int flags)
+{
+	might_sleep();
+	/* NOP for now. */
+	return 0;
+}
-- 
2.31.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [Intel-gfx] [PATCH v2 2/5] drm/i915/gem: Introduce a selftest for the gem object migrate functionality
  2021-06-28  9:09 [Intel-gfx] [PATCH v2 0/5] drm/i915/gem: Introduce a migrate interface Thomas Hellström
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 1/5] drm/i915/gem: Implement object migration Thomas Hellström
@ 2021-06-28  9:09 ` Thomas Hellström
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 3/5] drm/i915/display: Migrate objects to LMEM if possible for display Thomas Hellström
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Thomas Hellström @ 2021-06-28  9:09 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, matthew.auld

From: Matthew Auld <matthew.auld@intel.com>

A selftest for the gem object migrate functionality, slightly adapted
from Matthew's original to the new interface and the new fill-blit
code.

Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |   1 +
 .../drm/i915/gem/selftests/i915_gem_migrate.c | 237 ++++++++++++++++++
 .../drm/i915/selftests/i915_live_selftests.h  |   1 +
 3 files changed, 239 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 52a37619054d..58cc25f38b1c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -649,6 +649,7 @@ static const struct drm_gem_object_funcs i915_gem_object_funcs = {
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/huge_gem_object.c"
 #include "selftests/huge_pages.c"
+#include "selftests/i915_gem_migrate.c"
 #include "selftests/i915_gem_object.c"
 #include "selftests/i915_gem_coherency.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
new file mode 100644
index 000000000000..a437b66f64d9
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
@@ -0,0 +1,237 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020-2021 Intel Corporation
+ */
+
+#include "gt/intel_migrate.h"
+
+static int igt_smem_create_migrate(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct drm_i915_private *i915 = gt->i915;
+	struct drm_i915_gem_object *obj;
+	struct i915_gem_ww_ctx ww;
+	int err = 0;
+
+	/* Switch object backing-store on create */
+	obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	for_i915_gem_ww(&ww, err, true) {
+		err = i915_gem_object_lock(obj, &ww);
+		if (err)
+			continue;
+
+		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
+			err = -EINVAL;
+			continue;
+		}
+
+		err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
+		if (err)
+			continue;
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			continue;
+
+		if (i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM))
+			err = -EINVAL;
+
+		i915_gem_object_unpin_pages(obj);
+	}
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static int igt_lmem_create_migrate(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct drm_i915_private *i915 = gt->i915;
+	struct drm_i915_gem_object *obj;
+	struct i915_gem_ww_ctx ww;
+	int err = 0;
+
+	/* Switch object backing-store on create */
+	obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	for_i915_gem_ww(&ww, err, true) {
+		err = i915_gem_object_lock(obj, &ww);
+		if (err)
+			continue;
+
+		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
+			err = -EINVAL;
+			continue;
+		}
+
+		err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_LMEM);
+		if (err)
+			continue;
+
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			continue;
+
+		if (i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM))
+			err = -EINVAL;
+
+		i915_gem_object_unpin_pages(obj);
+	}
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
+				  struct drm_i915_gem_object *obj)
+{
+	int err;
+
+	err = i915_gem_object_lock(obj, ww);
+	if (err)
+		return err;
+
+	err = i915_gem_object_wait(obj,
+				   I915_WAIT_INTERRUPTIBLE |
+				   I915_WAIT_PRIORITY |
+				   I915_WAIT_ALL,
+				   MAX_SCHEDULE_TIMEOUT);
+	if (err)
+		return err;
+
+	if (i915_gem_object_is_lmem(obj)) {
+		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
+			pr_err("object can't migrate to smem.\n");
+			return -EINVAL;
+		}
+
+		err = i915_gem_object_migrate(obj, ww, INTEL_REGION_SMEM);
+		if (err) {
+			pr_err("Object failed migration to smem\n");
+			if (err)
+				return err;
+		}
+
+		if (i915_gem_object_is_lmem(obj)) {
+			pr_err("object still backed by lmem\n");
+			err = -EINVAL;
+		}
+
+		if (!i915_gem_object_has_struct_page(obj)) {
+			pr_err("object not backed by struct page\n");
+			err = -EINVAL;
+		}
+
+	} else {
+		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
+			pr_err("object can't migrate to lmem.\n");
+			return -EINVAL;
+		}
+
+		err = i915_gem_object_migrate(obj, ww, INTEL_REGION_LMEM);
+		if (err) {
+			pr_err("Object failed migration to lmem\n");
+			if (err)
+				return err;
+		}
+
+		if (i915_gem_object_has_struct_page(obj)) {
+			pr_err("object still backed by struct page\n");
+			err = -EINVAL;
+		}
+
+		if (!i915_gem_object_is_lmem(obj)) {
+			pr_err("object not backed by lmem\n");
+			err = -EINVAL;
+		}
+	}
+
+	return err;
+}
+
+static int igt_lmem_pages_migrate(void *arg)
+{
+	struct intel_gt *gt = arg;
+	struct drm_i915_private *i915 = gt->i915;
+	struct drm_i915_gem_object *obj;
+	struct i915_gem_ww_ctx ww;
+	struct i915_request *rq;
+	int err;
+	int i;
+
+	/* From LMEM to shmem and back again */
+
+	obj = i915_gem_object_create_lmem(i915, SZ_2M, 0);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	err = i915_gem_object_lock(obj, NULL);
+	if (err)
+		goto out_put;
+
+	err = ____i915_gem_object_get_pages(obj);
+	if (err) {
+		i915_gem_object_unlock(obj);
+		goto out_put;
+	}
+
+	err = intel_context_migrate_clear(gt->migrate.context, NULL,
+					  obj->mm.pages->sgl, obj->cache_level,
+					  i915_gem_object_is_lmem(obj),
+					  0, &rq);
+	if (rq) {
+		dma_resv_add_excl_fence(obj->base.resv, &rq->fence);
+		i915_request_put(rq);
+	}
+	i915_gem_object_unlock(obj);
+	if (err)
+		goto out_put;
+
+	for (i = 1; i <= 4; ++i) {
+		for_i915_gem_ww(&ww, err, true) {
+			err = lmem_pages_migrate_one(&ww, obj);
+			if (err)
+				continue;
+
+			err = i915_gem_object_wait_migration(obj, true);
+			if (err)
+				continue;
+
+			err = intel_migrate_clear(&gt->migrate, &ww, NULL,
+						  obj->mm.pages->sgl,
+						  obj->cache_level,
+						  i915_gem_object_is_lmem(obj),
+						  0xdeadbeaf, &rq);
+			if (rq) {
+				dma_resv_add_excl_fence(obj->base.resv,
+							&rq->fence);
+				i915_request_put(rq);
+			}
+		}
+		if (err)
+			break;
+	}
+out_put:
+	i915_gem_object_put(obj);
+
+	return err;
+}
+
+int i915_gem_migrate_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_smem_create_migrate),
+		SUBTEST(igt_lmem_create_migrate),
+		SUBTEST(igt_lmem_pages_migrate),
+	};
+
+	if (!HAS_LMEM(i915))
+		return 0;
+
+	return intel_gt_live_subtests(tests, &i915->gt);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index a68197cf1044..e2fd1b61af71 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -40,6 +40,7 @@ selftest(hugepages, i915_gem_huge_page_live_selftests)
 selftest(gem_contexts, i915_gem_context_live_selftests)
 selftest(gem_execbuf, i915_gem_execbuffer_live_selftests)
 selftest(client, i915_gem_client_blt_live_selftests)
+selftest(gem_migrate, i915_gem_migrate_live_selftests)
 selftest(reset, intel_reset_live_selftests)
 selftest(memory_region, intel_memory_region_live_selftests)
 selftest(hangcheck, intel_hangcheck_live_selftests)
-- 
2.31.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [Intel-gfx] [PATCH v2 3/5] drm/i915/display: Migrate objects to LMEM if possible for display
  2021-06-28  9:09 [Intel-gfx] [PATCH v2 0/5] drm/i915/gem: Introduce a migrate interface Thomas Hellström
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 1/5] drm/i915/gem: Implement object migration Thomas Hellström
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 2/5] drm/i915/gem: Introduce a selftest for the gem object migrate functionality Thomas Hellström
@ 2021-06-28  9:09 ` Thomas Hellström
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 4/5] drm/i915/gem: Fix same-driver-another-instance dma-buf export Thomas Hellström
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 5/5] drm/i915/gem: Migrate to system at dma-buf map time Thomas Hellström
  4 siblings, 0 replies; 9+ messages in thread
From: Thomas Hellström @ 2021-06-28  9:09 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, matthew.auld

Objects intended to be used as display framebuffers must reside in
LMEM on discrete hardware. If they don't, migrate them to LMEM
before pinning.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c |  5 ++++-
 drivers/gpu/drm/i915/gem/i915_gem_domain.c   |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c     | 21 --------------------
 drivers/gpu/drm/i915/gem/i915_gem_object.h   |  2 --
 4 files changed, 5 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index eec6c9e9cda7..026c28c612f0 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -1331,6 +1331,9 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 	ret = i915_gem_object_lock(obj, &ww);
 	if (!ret && phys_cursor)
 		ret = i915_gem_object_attach_phys(obj, alignment);
+	else if (!ret && HAS_LMEM(dev_priv))
+		ret = i915_gem_object_migrate(obj, &ww, INTEL_REGION_LMEM);
+	/* TODO: Do we need to sync when migration becomes async? */
 	if (!ret)
 		ret = i915_gem_object_pin_pages(obj);
 	if (ret)
@@ -11778,7 +11781,7 @@ intel_user_framebuffer_create(struct drm_device *dev,
 
 	/* object is backed with LMEM for discrete */
 	i915 = to_i915(obj->base.dev);
-	if (HAS_LMEM(i915) && !i915_gem_object_validates_to_lmem(obj)) {
+	if (HAS_LMEM(i915) && !i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
 		/* object is "remote", not in local memory */
 		i915_gem_object_put(obj);
 		return ERR_PTR(-EREMOTE);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 073822100da7..7d1400b13429 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -375,7 +375,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 	struct i915_vma *vma;
 	int ret;
 
-	/* Frame buffer must be in LMEM (no migration yet) */
+	/* Frame buffer must be in LMEM */
 	if (HAS_LMEM(i915) && !i915_gem_object_is_lmem(obj))
 		return ERR_PTR(-EINVAL);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index 41d5182cd367..be1d122574af 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -23,27 +23,6 @@ i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
 	return io_mapping_map_wc(&obj->mm.region->iomap, offset, size);
 }
 
-/**
- * i915_gem_object_validates_to_lmem - Whether the object is resident in
- * lmem when pages are present.
- * @obj: The object to check.
- *
- * Migratable objects residency may change from under us if the object is
- * not pinned or locked. This function is intended to be used to check whether
- * the object can only reside in lmem when pages are present.
- *
- * Return: Whether the object is always resident in lmem when pages are
- * present.
- */
-bool i915_gem_object_validates_to_lmem(struct drm_i915_gem_object *obj)
-{
-	struct intel_memory_region *mr = READ_ONCE(obj->mm.region);
-
-	return !i915_gem_object_migratable(obj) &&
-		mr && (mr->type == INTEL_MEMORY_LOCAL ||
-		       mr->type == INTEL_MEMORY_STOLEN_LOCAL);
-}
-
 /**
  * i915_gem_object_is_lmem - Whether the object is resident in
  * lmem
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 8cbd7a5334e2..d423d8cac4f2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -597,8 +597,6 @@ bool i915_gem_object_evictable(struct drm_i915_gem_object *obj);
 
 bool i915_gem_object_migratable(struct drm_i915_gem_object *obj);
 
-bool i915_gem_object_validates_to_lmem(struct drm_i915_gem_object *obj);
-
 int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
 			    struct i915_gem_ww_ctx *ww,
 			    enum intel_region_id id);
-- 
2.31.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [Intel-gfx] [PATCH v2 4/5] drm/i915/gem: Fix same-driver-another-instance dma-buf export
  2021-06-28  9:09 [Intel-gfx] [PATCH v2 0/5] drm/i915/gem: Introduce a migrate interface Thomas Hellström
                   ` (2 preceding siblings ...)
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 3/5] drm/i915/display: Migrate objects to LMEM if possible for display Thomas Hellström
@ 2021-06-28  9:09 ` Thomas Hellström
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 5/5] drm/i915/gem: Migrate to system at dma-buf map time Thomas Hellström
  4 siblings, 0 replies; 9+ messages in thread
From: Thomas Hellström @ 2021-06-28  9:09 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, Ruhl, matthew.auld

If our exported dma-bufs are imported by another instance of our driver,
that instance will typically have the imported dma-bufs locked during
map_attachment(). But the exporter also locks the same reservation
object in the map_dma_buf() callback, which leads to recursive locking.
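
Roughly, the recursion looks like this (importer-side call chain
simplified, exact entry points may differ):

  importer: i915_gem_object_lock(import_obj)     /* locks the shared resv */
            ____i915_gem_object_get_pages(import_obj)
              dma_buf_map_attachment()
                exporter: i915_gem_map_dma_buf()
                  i915_gem_object_pin_pages_unlocked(obj)
                    i915_gem_object_lock(obj)    /* same resv -> recursion */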

Add a live selftest to catch this case, and as a workaround until
we fully support dynamic import and export, declare the exporter dynamic
by providing NOP pin() and unpin() functions. This means our map_dma_buf()
callback will *always* get called locked, and by pinning unconditionally
in i915_gem_map_dma_buf() we make sure we don't need to use the
move_notify() functionality which is not yet implemented.

Reported-by: Ruhl, Michael J <michael.j.ruhl@intel.com>
Cc: Ruhl, Michael J <michael.j.ruhl@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    | 31 ++++++-
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  | 81 ++++++++++++++++++-
 2 files changed, 108 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 616c3a2f1baf..1d1eeb167d28 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -12,6 +12,8 @@
 #include "i915_gem_object.h"
 #include "i915_scatterlist.h"
 
+I915_SELFTEST_DECLARE(static bool force_different_devices;)
+
 static struct drm_i915_gem_object *dma_buf_to_obj(struct dma_buf *buf)
 {
 	return to_intel_bo(buf->priv);
@@ -25,7 +27,9 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
 	struct scatterlist *src, *dst;
 	int ret, i;
 
-	ret = i915_gem_object_pin_pages_unlocked(obj);
+	assert_object_held(obj);
+
+	ret = i915_gem_object_pin_pages(obj);
 	if (ret)
 		goto err;
 
@@ -168,6 +172,26 @@ static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direct
 	return err;
 }
 
+/*
+ * As a workaround until we fully support dynamic import and export,
+ * declare the exporter dynamic by providing NOP pin() and unpin() functions.
+ * This means our i915_gem_map_dma_buf() callback will *always* get called
+ * locked, and by pinning unconditionally in i915_gem_map_dma_buf() we make
+ * sure we don't need to use the move_notify() functionality which is
+ * not yet implemented. Typically for the same-driver-another-instance case,
+ * i915_gem_map_dma_buf() will be called at importer attach time and the
+ * mapped sg_list will be cached by the dma-buf core for the
+ * duration of the attachment.
+ */
+static int i915_gem_dmabuf_pin(struct dma_buf_attachment *attach)
+{
+	return 0;
+}
+
+static void i915_gem_dmabuf_unpin(struct dma_buf_attachment *attach)
+{
+}
+
 static const struct dma_buf_ops i915_dmabuf_ops =  {
 	.map_dma_buf = i915_gem_map_dma_buf,
 	.unmap_dma_buf = i915_gem_unmap_dma_buf,
@@ -177,6 +201,8 @@ static const struct dma_buf_ops i915_dmabuf_ops =  {
 	.vunmap = i915_gem_dmabuf_vunmap,
 	.begin_cpu_access = i915_gem_begin_cpu_access,
 	.end_cpu_access = i915_gem_end_cpu_access,
+	.pin = i915_gem_dmabuf_pin,
+	.unpin = i915_gem_dmabuf_unpin,
 };
 
 struct dma_buf *i915_gem_prime_export(struct drm_gem_object *gem_obj, int flags)
@@ -241,7 +267,8 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
 	if (dma_buf->ops == &i915_dmabuf_ops) {
 		obj = dma_buf_to_obj(dma_buf);
 		/* is it from our device? */
-		if (obj->base.dev == dev) {
+		if (obj->base.dev == dev &&
+		    !I915_SELFTEST_ONLY(force_different_devices)) {
 			/*
 			 * Importing dmabuf exported from out own gem increases
 			 * refcount on gem itself instead of f_count of dmabuf.
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index dd74bc09ec88..24735d6c12a2 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -35,7 +35,7 @@ static int igt_dmabuf_export(void *arg)
 static int igt_dmabuf_import_self(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
-	struct drm_i915_gem_object *obj;
+	struct drm_i915_gem_object *obj, *import_obj;
 	struct drm_gem_object *import;
 	struct dma_buf *dmabuf;
 	int err;
@@ -65,14 +65,90 @@ static int igt_dmabuf_import_self(void *arg)
 		err = -EINVAL;
 		goto out_import;
 	}
+	import_obj = to_intel_bo(import);
+
+	i915_gem_object_lock(import_obj, NULL);
+	err = ____i915_gem_object_get_pages(import_obj);
+	i915_gem_object_unlock(import_obj);
+	if (err) {
+		pr_err("Same object dma-buf get_pages failed!\n");
+		goto out_import;
+	}
 
 	err = 0;
 out_import:
-	i915_gem_object_put(to_intel_bo(import));
+	i915_gem_object_put(import_obj);
+out_dmabuf:
+	dma_buf_put(dmabuf);
+out:
+	i915_gem_object_put(obj);
+	return err;
+}
+
+static int igt_dmabuf_import_same_driver(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj, *import_obj;
+	struct drm_gem_object *import;
+	struct dma_buf *dmabuf;
+	int err;
+
+	force_different_devices = true;
+	obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		goto out_ret;
+
+	dmabuf = i915_gem_prime_export(&obj->base, 0);
+	if (IS_ERR(dmabuf)) {
+		pr_err("i915_gem_prime_export failed with err=%d\n",
+		       (int)PTR_ERR(dmabuf));
+		err = PTR_ERR(dmabuf);
+		goto out;
+	}
+
+	import = i915_gem_prime_import(&i915->drm, dmabuf);
+	if (IS_ERR(import)) {
+		pr_err("i915_gem_prime_import failed with err=%d\n",
+		       (int)PTR_ERR(import));
+		err = PTR_ERR(import);
+		goto out_dmabuf;
+	}
+
+	if (import == &obj->base) {
+		pr_err("i915_gem_prime_import reused gem object!\n");
+		err = -EINVAL;
+		goto out_import;
+	}
+
+	import_obj = to_intel_bo(import);
+
+	i915_gem_object_lock(import_obj, NULL);
+	err = ____i915_gem_object_get_pages(import_obj);
+	if (err) {
+		pr_err("Different objects dma-buf get_pages failed!\n");
+		i915_gem_object_unlock(import_obj);
+		goto out_import;
+	}
+
+	/*
+	 * If the exported object is not in system memory, something
+	 * weird is going on. TODO: When p2p is supported, this is no
+	 * longer considered weird.
+	 */
+	if (obj->mm.region != i915->mm.regions[INTEL_REGION_SMEM]) {
+		pr_err("Exported dma-buf is not in system memory\n");
+		err = -EINVAL;
+	}
+	i915_gem_object_unlock(import_obj);
+
+out_import:
+	i915_gem_object_put(import_obj);
 out_dmabuf:
 	dma_buf_put(dmabuf);
 out:
 	i915_gem_object_put(obj);
+out_ret:
+	force_different_devices = false;
 	return err;
 }
 
@@ -286,6 +362,7 @@ int i915_gem_dmabuf_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_dmabuf_export),
+		SUBTEST(igt_dmabuf_import_same_driver),
 	};
 
 	return i915_subtests(tests, i915);
-- 
2.31.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [Intel-gfx] [PATCH v2 5/5] drm/i915/gem: Migrate to system at dma-buf map time
  2021-06-28  9:09 [Intel-gfx] [PATCH v2 0/5] drm/i915/gem: Introduce a migrate interface Thomas Hellström
                   ` (3 preceding siblings ...)
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 4/5] drm/i915/gem: Fix same-driver-another-instance dma-buf export Thomas Hellström
@ 2021-06-28  9:09 ` Thomas Hellström
  4 siblings, 0 replies; 9+ messages in thread
From: Thomas Hellström @ 2021-06-28  9:09 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Thomas Hellström, matthew.auld

Until we support p2p dma, or as a complement to it, migrate data
to system memory at dma-buf map time if possible.

v2:
- Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
  selftest to migrate if we are LMEM capable.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c           | 7 ++++++-
 drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c | 4 +++-
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 1d1eeb167d28..5207aa3af009 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -29,7 +29,12 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
 
 	assert_object_held(obj);
 
-	ret = i915_gem_object_pin_pages(obj);
+	if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM))
+		return ERR_PTR(-EOPNOTSUPP);
+
+	ret = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);
+	if (!ret)
+		ret = i915_gem_object_pin_pages(obj);
 	if (ret)
 		goto err;
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index 24735d6c12a2..b6278dba98d0 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -94,7 +94,9 @@ static int igt_dmabuf_import_same_driver(void *arg)
 	int err;
 
 	force_different_devices = true;
-	obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
+	obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);
+	if (IS_ERR(obj))
+		obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
 	if (IS_ERR(obj))
 		goto out_ret;
 
-- 
2.31.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [Intel-gfx] [PATCH v2 1/5] drm/i915/gem: Implement object migration
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 1/5] drm/i915/gem: Implement object migration Thomas Hellström
@ 2021-06-28  9:47   ` Thomas Hellström
  2021-06-28 11:28   ` kernel test robot
  2021-06-28 12:54   ` kernel test robot
  2 siblings, 0 replies; 9+ messages in thread
From: Thomas Hellström @ 2021-06-28  9:47 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: matthew.auld


On 6/28/21 11:09 AM, Thomas Hellström wrote:
> Introduce an interface to migrate objects between regions.
> This is primarily intended to migrate objects to LMEM for display and
> to SYSTEM for dma-buf, but might be reused in one form or another for
> performance-based migration.
>
> v2:
> - Verify that the memory region given as an id really exists.
>    (Reported by Matthew Auld)
> - Call i915_gem_object_{init,release}_memory_region() when switching region
>    to handle also switching region lists. (Reported by Matthew Auld)
>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>   drivers/gpu/drm/i915/gem/i915_gem_object.c    | 96 +++++++++++++++++++
>   drivers/gpu/drm/i915/gem/i915_gem_object.h    | 12 +++
>   .../gpu/drm/i915/gem/i915_gem_object_types.h  |  9 ++
>   drivers/gpu/drm/i915/gem/i915_gem_ttm.c       | 69 +++++++++----
>   drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 19 ++++
>   5 files changed, 188 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> index 07e8ff9a8aae..52a37619054d 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> @@ -513,6 +513,102 @@ bool i915_gem_object_has_iomem(const struct drm_i915_gem_object *obj)
>   	return obj->mem_flags & I915_BO_FLAG_IOMEM;
>   }
>   
> +/**
> + * i915_gem_object_can_migrate - Whether an object likely can be migrated
> + *
> + * @obj: The object to migrate
> + * @id: The region intended to migrate to
> + *
> + * Check whether the object backend supports migration to the
> + * given region. Note that pinning may affect the ability to migrate.
> + *
> + * Return: true if migration is possible, false otherwise.
> + */
> +bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
> +				 enum intel_region_id id)
> +{
> +	struct drm_i915_private *i915 = to_i915(obj->base.dev);
> +	unsigned int num_allowed = obj->mm.n_placements;
> +	struct intel_memory_region *mr;
> +	unsigned int i;
> +
> +	GEM_BUG_ON(id >= INTEL_REGION_UNKNOWN);
> +	GEM_BUG_ON(obj->mm.madv != I915_MADV_WILLNEED);
> +
> +	if (!obj->ops->migrate)
> +		return false;
> +
> +	mr = i915->mm.regions[id];
> +	if (!mr)
> +		return false;


Hmm. Should probably switch order between these two, otherwise 
can_migrate will always return false on !TTM
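
I.e. something along these lines (untested sketch), so that an object
already resident in the requested region reports true even when the
backend has no migrate hook:

	mr = i915->mm.regions[id];
	if (!mr)
		return false;

	if (obj->mm.region == mr)
		return true;

	if (!obj->ops->migrate)
		return false;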

/Thomas


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Intel-gfx] [PATCH v2 1/5] drm/i915/gem: Implement object migration
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 1/5] drm/i915/gem: Implement object migration Thomas Hellström
  2021-06-28  9:47   ` Thomas Hellström
@ 2021-06-28 11:28   ` kernel test robot
  2021-06-28 12:54   ` kernel test robot
  2 siblings, 0 replies; 9+ messages in thread
From: kernel test robot @ 2021-06-28 11:28 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: Thomas Hellström, kbuild-all, matthew.auld

[-- Attachment #1: Type: text/plain, Size: 2609 bytes --]

Hi "Thomas,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on drm-tip/drm-tip]
[cannot apply to drm-intel/for-linux-next drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next drm/drm-next v5.13 next-20210625]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thomas-Hellstr-m/drm-i915-gem-Introduce-a-migrate-interface/20210628-171204
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: i386-randconfig-a014-20210628 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/e4e5a7f5c031252f26c868a2aa17a031a1558336
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Thomas-Hellstr-m/drm-i915-gem-Introduce-a-migrate-interface/20210628-171204
        git checkout e4e5a7f5c031252f26c868a2aa17a031a1558336
        # save the attached .config to linux build tree
        mkdir build_dir
        make W=1 O=build_dir ARCH=i386 SHELL=/bin/bash drivers/gpu/drm/i915/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/gpu/drm/i915/gem/i915_gem_wait.c:184: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
    * Waits for rendering to the object to be completed
>> drivers/gpu/drm/i915/gem/i915_gem_wait.c:307: warning: expecting prototype for i915_gem_object_wait_migrate(). Prototype was for i915_gem_object_wait_migration() instead
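
The warning just means the kernel-doc header doesn't match the function
name; presumably a one-line rename along these lines would address it:

   - * i915_gem_object_wait_migrate - Sync an accelerated migration operation
   + * i915_gem_object_wait_migration - Sync an accelerated migration operation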


vim +307 drivers/gpu/drm/i915/gem/i915_gem_wait.c

   293	
   294	/**
   295	 * i915_gem_object_wait_migrate - Sync an accelerated migration operation
   296	 * @obj: The migrating object.
   297	 * @flags: waiting flags. Currently supports only I915_WAIT_INTERRUPTIBLE.
   298	 *
   299	 * Wait for any pending async migration operation on the object,
   300	 * whether it's explicitly (i915_gem_object_migrate()) or implicitly
   301	 * (swapin, initial clearing) initiated.
   302	 *
   303	 * Return: 0 if successful, -ERESTARTSYS if a signal was hit during waiting.
   304	 */
   305	int i915_gem_object_wait_migration(struct drm_i915_gem_object *obj,
   306					   unsigned int flags)
 > 307	{

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 44185 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Intel-gfx] [PATCH v2 1/5] drm/i915/gem: Implement object migration
  2021-06-28  9:09 ` [Intel-gfx] [PATCH v2 1/5] drm/i915/gem: Implement object migration Thomas Hellström
  2021-06-28  9:47   ` Thomas Hellström
  2021-06-28 11:28   ` kernel test robot
@ 2021-06-28 12:54   ` kernel test robot
  2 siblings, 0 replies; 9+ messages in thread
From: kernel test robot @ 2021-06-28 12:54 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: clang-built-linux, kbuild-all, matthew.auld, Thomas Hellström

[-- Attachment #1: Type: text/plain, Size: 6844 bytes --]

Hi "Thomas,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on drm-tip/drm-tip]
[cannot apply to drm-intel/for-linux-next drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next drm/drm-next v5.13 next-20210628]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thomas-Hellstr-m/drm-i915-gem-Introduce-a-migrate-interface/20210628-171204
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: x86_64-randconfig-a003-20210628 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 4c92e31dd0f1bd152eda883af20ff7fbcaa14945)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # https://github.com/0day-ci/linux/commit/e4e5a7f5c031252f26c868a2aa17a031a1558336
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Thomas-Hellstr-m/drm-i915-gem-Introduce-a-migrate-interface/20210628-171204
        git checkout e4e5a7f5c031252f26c868a2aa17a031a1558336
        # save the attached .config to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross O=build_dir ARCH=x86_64 SHELL=/bin/bash drivers/gpu/drm/i915/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/gpu/drm/i915/gem/i915_gem_wait.c:184: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
    * Waits for rendering to the object to be completed
>> drivers/gpu/drm/i915/gem/i915_gem_wait.c:307: warning: expecting prototype for i915_gem_object_wait_migrate(). Prototype was for i915_gem_object_wait_migration() instead


vim +307 drivers/gpu/drm/i915/gem/i915_gem_wait.c

   182	
   183	/**
 > 184	 * Waits for rendering to the object to be completed
   185	 * @obj: i915 gem object
   186	 * @flags: how to wait (under a lock, for all rendering or just for writes etc)
   187	 * @timeout: how long to wait
   188	 */
   189	int
   190	i915_gem_object_wait(struct drm_i915_gem_object *obj,
   191			     unsigned int flags,
   192			     long timeout)
   193	{
   194		might_sleep();
   195		GEM_BUG_ON(timeout < 0);
   196	
   197		timeout = i915_gem_object_wait_reservation(obj->base.resv,
   198							   flags, timeout);
   199		return timeout < 0 ? timeout : 0;
   200	}
   201	
   202	static inline unsigned long nsecs_to_jiffies_timeout(const u64 n)
   203	{
   204		/* nsecs_to_jiffies64() does not guard against overflow */
   205		if (NSEC_PER_SEC % HZ &&
   206		    div_u64(n, NSEC_PER_SEC) >= MAX_JIFFY_OFFSET / HZ)
   207			return MAX_JIFFY_OFFSET;
   208	
   209		return min_t(u64, MAX_JIFFY_OFFSET, nsecs_to_jiffies64(n) + 1);
   210	}
   211	
   212	static unsigned long to_wait_timeout(s64 timeout_ns)
   213	{
   214		if (timeout_ns < 0)
   215			return MAX_SCHEDULE_TIMEOUT;
   216	
   217		if (timeout_ns == 0)
   218			return 0;
   219	
   220		return nsecs_to_jiffies_timeout(timeout_ns);
   221	}
   222	
   223	/**
   224	 * i915_gem_wait_ioctl - implements DRM_IOCTL_I915_GEM_WAIT
   225	 * @dev: drm device pointer
   226	 * @data: ioctl data blob
   227	 * @file: drm file pointer
   228	 *
   229	 * Returns 0 if successful, else an error is returned with the remaining time in
   230	 * the timeout parameter.
   231	 *  -ETIME: object is still busy after timeout
   232	 *  -ERESTARTSYS: signal interrupted the wait
   233	 *  -ENONENT: object doesn't exist
   234	 * Also possible, but rare:
   235	 *  -EAGAIN: incomplete, restart syscall
   236	 *  -ENOMEM: damn
   237	 *  -ENODEV: Internal IRQ fail
   238	 *  -E?: The add request failed
   239	 *
   240	 * The wait ioctl with a timeout of 0 reimplements the busy ioctl. With any
   241	 * non-zero timeout parameter the wait ioctl will wait for the given number of
   242	 * nanoseconds on an object becoming unbusy. Since the wait itself does so
   243	 * without holding struct_mutex the object may become re-busied before this
   244	 * function completes. A similar but shorter * race condition exists in the busy
   245	 * ioctl
   246	 */
   247	int
   248	i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
   249	{
   250		struct drm_i915_gem_wait *args = data;
   251		struct drm_i915_gem_object *obj;
   252		ktime_t start;
   253		long ret;
   254	
   255		if (args->flags != 0)
   256			return -EINVAL;
   257	
   258		obj = i915_gem_object_lookup(file, args->bo_handle);
   259		if (!obj)
   260			return -ENOENT;
   261	
   262		start = ktime_get();
   263	
   264		ret = i915_gem_object_wait(obj,
   265					   I915_WAIT_INTERRUPTIBLE |
   266					   I915_WAIT_PRIORITY |
   267					   I915_WAIT_ALL,
   268					   to_wait_timeout(args->timeout_ns));
   269	
   270		if (args->timeout_ns > 0) {
   271			args->timeout_ns -= ktime_to_ns(ktime_sub(ktime_get(), start));
   272			if (args->timeout_ns < 0)
   273				args->timeout_ns = 0;
   274	
   275			/*
   276			 * Apparently ktime isn't accurate enough and occasionally has a
   277			 * bit of mismatch in the jiffies<->nsecs<->ktime loop. So patch
   278			 * things up to make the test happy. We allow up to 1 jiffy.
   279			 *
   280			 * This is a regression from the timespec->ktime conversion.
   281			 */
   282			if (ret == -ETIME && !nsecs_to_jiffies(args->timeout_ns))
   283				args->timeout_ns = 0;
   284	
   285			/* Asked to wait beyond the jiffie/scheduler precision? */
   286			if (ret == -ETIME && args->timeout_ns)
   287				ret = -EAGAIN;
   288		}
   289	
   290		i915_gem_object_put(obj);
   291		return ret;
   292	}
   293	
   294	/**
   295	 * i915_gem_object_wait_migrate - Sync an accelerated migration operation
   296	 * @obj: The migrating object.
   297	 * @flags: waiting flags. Currently supports only I915_WAIT_INTERRUPTIBLE.
   298	 *
   299	 * Wait for any pending async migration operation on the object,
   300	 * whether it's explicitly (i915_gem_object_migrate()) or implicitly
   301	 * (swapin, initial clearing) initiated.
   302	 *
   303	 * Return: 0 if successful, -ERESTARTSYS if a signal was hit during waiting.
   304	 */
   305	int i915_gem_object_wait_migration(struct drm_i915_gem_object *obj,
   306					   unsigned int flags)
 > 307	{

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 49800 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread
