* [Intel-gfx] [PATCH 1/7] drm/i915/gem: Check object_can_migrate from object_migrate
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
@ 2021-07-16 14:14 ` Jason Ekstrand
2021-07-16 14:14 ` [Intel-gfx] [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2) Jason Ekstrand
` (9 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-16 14:14 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Matthew Auld
We don't roll them together entirely because there are still a couple
cases where we want a separate can_migrate check. For instance, the
display code checks that you can migrate a buffer to LMEM before it
accepts it in fb_create. The dma-buf import code also uses it to do an
early check and return a different error code if someone tries to attach
a LMEM-only dma-buf to another driver.
However, no one actually wants to call object_migrate when can_migrate
has failed. The stated intention was to allow self-tests to perform such
unsafe migrations, but none of them actually take advantage of it.
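The effect of the patch can be sketched in plain C. This is an illustrative mock, not the real i915 code: `mock_object`, `can_migrate()` and `do_migrate()` are stand-ins for `drm_i915_gem_object`, `i915_gem_object_can_migrate()` and `i915_gem_object_migrate()`, and the point is only that the separate `-EBUSY`/`-EOPNOTSUPP` checks collapse into one `can_migrate()` gate returning `-EINVAL`:

```c
/* Illustrative sketch only; all names and types are stand-ins for the
 * i915 originals, which take lock contexts and region structures. */
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

struct mock_object {
	int region;             /* current memory region id */
	bool evictable;         /* can the object be unpinned and moved? */
	bool has_migrate_op;    /* does the backend implement migrate? */
	bool region_allowed[4]; /* stand-in for the allowable placement list */
};

/* Stand-in for i915_gem_object_can_migrate(): the destination must be in
 * the object's placement list and the object must be movable at all. */
static bool can_migrate(const struct mock_object *obj, int dst)
{
	return obj->evictable && obj->has_migrate_op &&
	       obj->region_allowed[dst];
}

/* Stand-in for i915_gem_object_migrate() after the patch: one
 * can_migrate() gate replaces the separate evictable/ops checks. */
static int do_migrate(struct mock_object *obj, int dst)
{
	if (obj->region == dst)
		return 0;
	if (!can_migrate(obj, dst))
		return -EINVAL;
	obj->region = dst; /* models obj->ops->migrate(obj, mr) succeeding */
	return 0;
}
```

Callers that previously pre-checked `can_migrate()` themselves (as the deleted selftest hunks below did) can now rely on `do_migrate()` rejecting the request.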
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/gem/i915_gem_object.c | 13 ++-----------
.../gpu/drm/i915/gem/selftests/i915_gem_migrate.c | 15 ---------------
2 files changed, 2 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 9da7b288b7ede..f2244ae09a613 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -584,12 +584,6 @@ bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
* completed yet, and to accomplish that, i915_gem_object_wait_migration()
* must be called.
*
- * This function is a bit more permissive than i915_gem_object_can_migrate()
- * to allow for migrating objects where the caller knows exactly what is
- * happening. For example within selftests. More specifically this
- * function allows migrating I915_BO_ALLOC_USER objects to regions
- * that are not in the list of allowable regions.
- *
* Note: the @ww parameter is not used yet, but included to make sure
* callers put some effort into obtaining a valid ww ctx if one is
* available.
@@ -616,11 +610,8 @@ int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
if (obj->mm.region == mr)
return 0;
- if (!i915_gem_object_evictable(obj))
- return -EBUSY;
-
- if (!obj->ops->migrate)
- return -EOPNOTSUPP;
+ if (!i915_gem_object_can_migrate(obj, id))
+ return -EINVAL;
return obj->ops->migrate(obj, mr);
}
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
index 0b7144d2991ca..28a700f08b49a 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
@@ -61,11 +61,6 @@ static int igt_create_migrate(struct intel_gt *gt, enum intel_region_id src,
if (err)
continue;
- if (!i915_gem_object_can_migrate(obj, dst)) {
- err = -EINVAL;
- continue;
- }
-
err = i915_gem_object_migrate(obj, &ww, dst);
if (err)
continue;
@@ -114,11 +109,6 @@ static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
return err;
if (i915_gem_object_is_lmem(obj)) {
- if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
- pr_err("object can't migrate to smem.\n");
- return -EINVAL;
- }
-
err = i915_gem_object_migrate(obj, ww, INTEL_REGION_SMEM);
if (err) {
pr_err("Object failed migration to smem\n");
@@ -137,11 +127,6 @@ static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
}
} else {
- if (!i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
- pr_err("object can't migrate to lmem.\n");
- return -EINVAL;
- }
-
err = i915_gem_object_migrate(obj, ww, INTEL_REGION_LMEM);
if (err) {
pr_err("Object failed migration to lmem\n");
--
2.31.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* [Intel-gfx] [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2)
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
2021-07-16 14:14 ` [Intel-gfx] [PATCH 1/7] drm/i915/gem: Check object_can_migrate from object_migrate Jason Ekstrand
@ 2021-07-16 14:14 ` Jason Ekstrand
2021-07-16 19:18 ` Matthew Auld
2021-07-19 8:17 ` Matthew Auld
2021-07-16 14:14 ` [Intel-gfx] [PATCH 3/7] drm/i915/gem: Call i915_gem_flush_free_objects() in i915_gem_dumb_create() Jason Ekstrand
` (8 subsequent siblings)
10 siblings, 2 replies; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-16 14:14 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Matthew Auld
Since we don't allow changing the set of regions after creation, we can
make ext_set_placements() build up the region set directly in the
create_ext and assign it to the object later. This is similar to what
we did for contexts with the proto-context, only simpler because there's
no funny object shuffling. This will be used in the next patch to allow
us to de-duplicate a bunch of code. Also, since we know the maximum
number of regions up-front, we can use a fixed-size temporary array for
the regions. This simplifies memory management a bit for this new
delayed approach.
v2 (Matthew Auld):
- Get rid of MAX_N_PLACEMENTS
- Drop kfree(placements) from set_placements()
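The shape of the v2 approach can be sketched as follows. This is a hypothetical mock, not the driver code: `mock_create_ext`, `ext_set_placements()` and `object_set_placements()` stand in for `struct create_ext` and the functions touched by the diff below, with plain `int` region ids instead of `struct intel_memory_region *`. The idea is that extension parsing fills a fixed-size array embedded in the create_ext (bounded by the number of region ids known at compile time), so nothing is allocated until the final assignment to the object:

```c
/* Illustrative sketch only; names, types and error codes are stand-ins
 * for the i915 originals. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define REGION_UNKNOWN 4 /* stand-in for INTEL_REGION_UNKNOWN */

struct mock_create_ext {
	int placements[REGION_UNKNOWN]; /* fixed-size temporary array */
	unsigned int n_placements;
};

/* Parsing the memory-regions extension just fills the embedded array. */
static int ext_set_placements(struct mock_create_ext *ext,
			      const int *regions, unsigned int n)
{
	if (n == 0 || n > REGION_UNKNOWN)
		return -1;
	if (ext->n_placements) /* placements may only be set once */
		return -1;
	memcpy(ext->placements, regions, n * sizeof(*regions));
	ext->n_placements = n;
	return 0;
}

/* Only the final hand-off to the object allocates; the caller owns the
 * returned array, mirroring obj->mm.placements. */
static int *object_set_placements(const struct mock_create_ext *ext)
{
	int *arr = malloc(ext->n_placements * sizeof(*arr));

	if (!arr)
		return NULL;
	memcpy(arr, ext->placements, ext->n_placements * sizeof(*arr));
	return arr;
}
```

Because the temporary array lives in the create_ext, the error paths in the extension parser no longer need a `kfree()`, which is what the v2 notes above refer to.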
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/gem/i915_gem_create.c | 81 ++++++++++++----------
1 file changed, 45 insertions(+), 36 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
index 51f92e4b1a69d..5766749a449c0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
@@ -27,10 +27,13 @@ static u32 object_max_page_size(struct drm_i915_gem_object *obj)
return max_page_size;
}
-static void object_set_placements(struct drm_i915_gem_object *obj,
- struct intel_memory_region **placements,
- unsigned int n_placements)
+static int object_set_placements(struct drm_i915_gem_object *obj,
+ struct intel_memory_region **placements,
+ unsigned int n_placements)
{
+ struct intel_memory_region **arr;
+ unsigned int i;
+
GEM_BUG_ON(!n_placements);
/*
@@ -44,9 +47,20 @@ static void object_set_placements(struct drm_i915_gem_object *obj,
obj->mm.placements = &i915->mm.regions[mr->id];
obj->mm.n_placements = 1;
} else {
- obj->mm.placements = placements;
+ arr = kmalloc_array(n_placements,
+ sizeof(struct intel_memory_region *),
+ GFP_KERNEL);
+ if (!arr)
+ return -ENOMEM;
+
+ for (i = 0; i < n_placements; i++)
+ arr[i] = placements[i];
+
+ obj->mm.placements = arr;
obj->mm.n_placements = n_placements;
}
+
+ return 0;
}
static int i915_gem_publish(struct drm_i915_gem_object *obj,
@@ -148,7 +162,9 @@ i915_gem_dumb_create(struct drm_file *file,
return -ENOMEM;
mr = intel_memory_region_by_type(to_i915(dev), mem_type);
- object_set_placements(obj, &mr, 1);
+ ret = object_set_placements(obj, &mr, 1);
+ if (ret)
+ goto object_free;
ret = i915_gem_setup(obj, args->size);
if (ret)
@@ -184,7 +200,9 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
return -ENOMEM;
mr = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM);
- object_set_placements(obj, &mr, 1);
+ ret = object_set_placements(obj, &mr, 1);
+ if (ret)
+ goto object_free;
ret = i915_gem_setup(obj, args->size);
if (ret)
@@ -199,7 +217,8 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
struct create_ext {
struct drm_i915_private *i915;
- struct drm_i915_gem_object *vanilla_object;
+ struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
+ unsigned int n_placements;
};
static void repr_placements(char *buf, size_t size,
@@ -230,8 +249,7 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
struct drm_i915_private *i915 = ext_data->i915;
struct drm_i915_gem_memory_class_instance __user *uregions =
u64_to_user_ptr(args->regions);
- struct drm_i915_gem_object *obj = ext_data->vanilla_object;
- struct intel_memory_region **placements;
+ struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
u32 mask;
int i, ret = 0;
@@ -245,6 +263,8 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
ret = -EINVAL;
}
+ BUILD_BUG_ON(ARRAY_SIZE(i915->mm.regions) != ARRAY_SIZE(placements));
+ BUILD_BUG_ON(ARRAY_SIZE(ext_data->placements) != ARRAY_SIZE(placements));
if (args->num_regions > ARRAY_SIZE(i915->mm.regions)) {
drm_dbg(&i915->drm, "num_regions is too large\n");
ret = -EINVAL;
@@ -253,21 +273,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
if (ret)
return ret;
- placements = kmalloc_array(args->num_regions,
- sizeof(struct intel_memory_region *),
- GFP_KERNEL);
- if (!placements)
- return -ENOMEM;
-
mask = 0;
for (i = 0; i < args->num_regions; i++) {
struct drm_i915_gem_memory_class_instance region;
struct intel_memory_region *mr;
- if (copy_from_user(&region, uregions, sizeof(region))) {
- ret = -EFAULT;
- goto out_free;
- }
+ if (copy_from_user(&region, uregions, sizeof(region)))
+ return -EFAULT;
mr = intel_memory_region_lookup(i915,
region.memory_class,
@@ -293,14 +305,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
++uregions;
}
- if (obj->mm.placements) {
+ if (ext_data->n_placements) {
ret = -EINVAL;
goto out_dump;
}
- object_set_placements(obj, placements, args->num_regions);
- if (args->num_regions == 1)
- kfree(placements);
+ for (i = 0; i < args->num_regions; i++)
+ ext_data->placements[i] = placements[i];
return 0;
@@ -308,11 +319,11 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
if (1) {
char buf[256];
- if (obj->mm.placements) {
+ if (ext_data->n_placements) {
repr_placements(buf,
sizeof(buf),
- obj->mm.placements,
- obj->mm.n_placements);
+ ext_data->placements,
+ ext_data->n_placements);
drm_dbg(&i915->drm,
"Placements were already set in previous EXT. Existing placements: %s\n",
buf);
@@ -322,8 +333,6 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
drm_dbg(&i915->drm, "New placements(so far validated): %s\n", buf);
}
-out_free:
- kfree(placements);
return ret;
}
@@ -358,7 +367,6 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data,
struct drm_i915_private *i915 = to_i915(dev);
struct drm_i915_gem_create_ext *args = data;
struct create_ext ext_data = { .i915 = i915 };
- struct intel_memory_region **placements_ext;
struct drm_i915_gem_object *obj;
int ret;
@@ -371,21 +379,22 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data,
if (!obj)
return -ENOMEM;
- ext_data.vanilla_object = obj;
ret = i915_user_extensions(u64_to_user_ptr(args->extensions),
create_extensions,
ARRAY_SIZE(create_extensions),
&ext_data);
- placements_ext = obj->mm.placements;
if (ret)
goto object_free;
- if (!placements_ext) {
- struct intel_memory_region *mr =
+ if (!ext_data.n_placements) {
+ ext_data.placements[0] =
intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM);
-
- object_set_placements(obj, &mr, 1);
+ ext_data.n_placements = 1;
}
+ ret = object_set_placements(obj, ext_data.placements,
+ ext_data.n_placements);
+ if (ret)
+ goto object_free;
ret = i915_gem_setup(obj, args->size);
if (ret)
@@ -395,7 +404,7 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data,
object_free:
if (obj->mm.n_placements > 1)
- kfree(placements_ext);
+ kfree(obj->mm.placements);
i915_gem_object_free(obj);
return ret;
}
--
2.31.1
* Re: [Intel-gfx] [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2)
2021-07-16 14:14 ` [Intel-gfx] [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2) Jason Ekstrand
@ 2021-07-16 19:18 ` Matthew Auld
2021-07-19 8:17 ` Matthew Auld
1 sibling, 0 replies; 25+ messages in thread
From: Matthew Auld @ 2021-07-16 19:18 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: Intel Graphics Development, Matthew Auld, ML dri-devel
On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
>
> Since we don't allow changing the set of regions after creation, we can
> make ext_set_placements() build up the region set directly in the
> create_ext and assign it to the object later. This is similar to what
> we did for contexts with the proto-context, only simpler because there's
> no funny object shuffling. This will be used in the next patch to allow
> us to de-duplicate a bunch of code. Also, since we know the maximum
> number of regions up-front, we can use a fixed-size temporary array for
> the regions. This simplifies memory management a bit for this new
> delayed approach.
>
> v2 (Matthew Auld):
> - Get rid of MAX_N_PLACEMENTS
> - Drop kfree(placements) from set_placements()
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Matthew Auld <matthew.auld@intel.com>
If CI is happy,
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
* Re: [Intel-gfx] [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2)
2021-07-16 14:14 ` [Intel-gfx] [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2) Jason Ekstrand
2021-07-16 19:18 ` Matthew Auld
@ 2021-07-19 8:17 ` Matthew Auld
2021-07-20 22:06 ` Jason Ekstrand
1 sibling, 1 reply; 25+ messages in thread
From: Matthew Auld @ 2021-07-19 8:17 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: Intel Graphics Development, Matthew Auld, ML dri-devel
On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
>
> Since we don't allow changing the set of regions after creation, we can
> make ext_set_placements() build up the region set directly in the
> create_ext and assign it to the object later. This is similar to what
> we did for contexts with the proto-context only simpler because there's
> no funny object shuffling. This will be used in the next patch to allow
> us to de-duplicate a bunch of code. Also, since we know the maximum
> number of regions up-front, we can use a fixed-size temporary array for
> the regions. This simplifies memory management a bit for this new
> delayed approach.
>
> v2 (Matthew Auld):
> - Get rid of MAX_N_PLACEMENTS
> - Drop kfree(placements) from set_placements()
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Matthew Auld <matthew.auld@intel.com>
> ---
> drivers/gpu/drm/i915/gem/i915_gem_create.c | 81 ++++++++++++----------
> 1 file changed, 45 insertions(+), 36 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> index 51f92e4b1a69d..5766749a449c0 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> @@ -27,10 +27,13 @@ static u32 object_max_page_size(struct drm_i915_gem_object *obj)
> return max_page_size;
> }
>
> -static void object_set_placements(struct drm_i915_gem_object *obj,
> - struct intel_memory_region **placements,
> - unsigned int n_placements)
> +static int object_set_placements(struct drm_i915_gem_object *obj,
> + struct intel_memory_region **placements,
> + unsigned int n_placements)
> {
> + struct intel_memory_region **arr;
> + unsigned int i;
> +
> GEM_BUG_ON(!n_placements);
>
> /*
> @@ -44,9 +47,20 @@ static void object_set_placements(struct drm_i915_gem_object *obj,
> obj->mm.placements = &i915->mm.regions[mr->id];
> obj->mm.n_placements = 1;
> } else {
> - obj->mm.placements = placements;
> + arr = kmalloc_array(n_placements,
> + sizeof(struct intel_memory_region *),
> + GFP_KERNEL);
> + if (!arr)
> + return -ENOMEM;
> +
> + for (i = 0; i < n_placements; i++)
> + arr[i] = placements[i];
> +
> + obj->mm.placements = arr;
> obj->mm.n_placements = n_placements;
> }
> +
> + return 0;
> }
>
> static int i915_gem_publish(struct drm_i915_gem_object *obj,
> @@ -148,7 +162,9 @@ i915_gem_dumb_create(struct drm_file *file,
> return -ENOMEM;
>
> mr = intel_memory_region_by_type(to_i915(dev), mem_type);
> - object_set_placements(obj, &mr, 1);
> + ret = object_set_placements(obj, &mr, 1);
> + if (ret)
> + goto object_free;
>
> ret = i915_gem_setup(obj, args->size);
> if (ret)
> @@ -184,7 +200,9 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
> return -ENOMEM;
>
> mr = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM);
> - object_set_placements(obj, &mr, 1);
> + ret = object_set_placements(obj, &mr, 1);
> + if (ret)
> + goto object_free;
>
> ret = i915_gem_setup(obj, args->size);
> if (ret)
> @@ -199,7 +217,8 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
>
> struct create_ext {
> struct drm_i915_private *i915;
> - struct drm_i915_gem_object *vanilla_object;
> + struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
> + unsigned int n_placements;
> };
>
> static void repr_placements(char *buf, size_t size,
> @@ -230,8 +249,7 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> struct drm_i915_private *i915 = ext_data->i915;
> struct drm_i915_gem_memory_class_instance __user *uregions =
> u64_to_user_ptr(args->regions);
> - struct drm_i915_gem_object *obj = ext_data->vanilla_object;
> - struct intel_memory_region **placements;
> + struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
> u32 mask;
> int i, ret = 0;
>
> @@ -245,6 +263,8 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> ret = -EINVAL;
> }
>
> + BUILD_BUG_ON(ARRAY_SIZE(i915->mm.regions) != ARRAY_SIZE(placements));
> + BUILD_BUG_ON(ARRAY_SIZE(ext_data->placements) != ARRAY_SIZE(placements));
> if (args->num_regions > ARRAY_SIZE(i915->mm.regions)) {
> drm_dbg(&i915->drm, "num_regions is too large\n");
> ret = -EINVAL;
> @@ -253,21 +273,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> if (ret)
> return ret;
>
> - placements = kmalloc_array(args->num_regions,
> - sizeof(struct intel_memory_region *),
> - GFP_KERNEL);
> - if (!placements)
> - return -ENOMEM;
> -
> mask = 0;
> for (i = 0; i < args->num_regions; i++) {
> struct drm_i915_gem_memory_class_instance region;
> struct intel_memory_region *mr;
>
> - if (copy_from_user(&region, uregions, sizeof(region))) {
> - ret = -EFAULT;
> - goto out_free;
> - }
> + if (copy_from_user(&region, uregions, sizeof(region)))
> + return -EFAULT;
>
> mr = intel_memory_region_lookup(i915,
> region.memory_class,
> @@ -293,14 +305,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> ++uregions;
> }
>
> - if (obj->mm.placements) {
> + if (ext_data->n_placements) {
> ret = -EINVAL;
> goto out_dump;
> }
>
> - object_set_placements(obj, placements, args->num_regions);
> - if (args->num_regions == 1)
> - kfree(placements);
> + for (i = 0; i < args->num_regions; i++)
> + ext_data->placements[i] = placements[i];
I guess here we forget to set the ext_data->n_placements, which would
explain the CI failure.
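The bug Matthew is pointing at can be illustrated with a small mock (hypothetical names, not the actual follow-up fix): the quoted copy loop stores the regions into `ext_data->placements[]` but never records the count, so a later `if (!ext_data.n_placements)` check in the ioctl would silently fall back to the system-memory default.

```c
/* Illustrative only; mock_ext stands in for struct create_ext. */
#include <assert.h>

struct mock_ext {
	int placements[4];
	unsigned int n_placements;
};

static void copy_placements(struct mock_ext *ext, const int *regions,
			    unsigned int num_regions)
{
	unsigned int i;

	for (i = 0; i < num_regions; i++)
		ext->placements[i] = regions[i];
	/* The assignment the review flagged as missing: */
	ext->n_placements = num_regions;
}
```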
* Re: [Intel-gfx] [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2)
2021-07-19 8:17 ` Matthew Auld
@ 2021-07-20 22:06 ` Jason Ekstrand
2021-07-21 8:31 ` Matthew Auld
0 siblings, 1 reply; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-20 22:06 UTC (permalink / raw)
To: Matthew Auld; +Cc: Intel Graphics Development, Matthew Auld, ML dri-devel
On Mon, Jul 19, 2021 at 3:18 AM Matthew Auld
<matthew.william.auld@gmail.com> wrote:
>
> On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
> >
> > Since we don't allow changing the set of regions after creation, we can
> > make ext_set_placements() build up the region set directly in the
> > create_ext and assign it to the object later. This is similar to what
> > we did for contexts with the proto-context, only simpler because there's
> > no funny object shuffling. This will be used in the next patch to allow
> > us to de-duplicate a bunch of code. Also, since we know the maximum
> > number of regions up-front, we can use a fixed-size temporary array for
> > the regions. This simplifies memory management a bit for this new
> > delayed approach.
> >
> > v2 (Matthew Auld):
> > - Get rid of MAX_N_PLACEMENTS
> > - Drop kfree(placements) from set_placements()
> >
> > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > ---
> > drivers/gpu/drm/i915/gem/i915_gem_create.c | 81 ++++++++++++----------
> > 1 file changed, 45 insertions(+), 36 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > index 51f92e4b1a69d..5766749a449c0 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > @@ -27,10 +27,13 @@ static u32 object_max_page_size(struct drm_i915_gem_object *obj)
> > return max_page_size;
> > }
> >
> > -static void object_set_placements(struct drm_i915_gem_object *obj,
> > - struct intel_memory_region **placements,
> > - unsigned int n_placements)
> > +static int object_set_placements(struct drm_i915_gem_object *obj,
> > + struct intel_memory_region **placements,
> > + unsigned int n_placements)
> > {
> > + struct intel_memory_region **arr;
> > + unsigned int i;
> > +
> > GEM_BUG_ON(!n_placements);
> >
> > /*
> > @@ -44,9 +47,20 @@ static void object_set_placements(struct drm_i915_gem_object *obj,
> > obj->mm.placements = &i915->mm.regions[mr->id];
> > obj->mm.n_placements = 1;
> > } else {
> > - obj->mm.placements = placements;
> > + arr = kmalloc_array(n_placements,
> > + sizeof(struct intel_memory_region *),
> > + GFP_KERNEL);
> > + if (!arr)
> > + return -ENOMEM;
> > +
> > + for (i = 0; i < n_placements; i++)
> > + arr[i] = placements[i];
> > +
> > + obj->mm.placements = arr;
> > obj->mm.n_placements = n_placements;
> > }
> > +
> > + return 0;
> > }
> >
> > static int i915_gem_publish(struct drm_i915_gem_object *obj,
> > @@ -148,7 +162,9 @@ i915_gem_dumb_create(struct drm_file *file,
> > return -ENOMEM;
> >
> > mr = intel_memory_region_by_type(to_i915(dev), mem_type);
> > - object_set_placements(obj, &mr, 1);
> > + ret = object_set_placements(obj, &mr, 1);
> > + if (ret)
> > + goto object_free;
> >
> > ret = i915_gem_setup(obj, args->size);
> > if (ret)
> > @@ -184,7 +200,9 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
> > return -ENOMEM;
> >
> > mr = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM);
> > - object_set_placements(obj, &mr, 1);
> > + ret = object_set_placements(obj, &mr, 1);
> > + if (ret)
> > + goto object_free;
> >
> > ret = i915_gem_setup(obj, args->size);
> > if (ret)
> > @@ -199,7 +217,8 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
> >
> > struct create_ext {
> > struct drm_i915_private *i915;
> > - struct drm_i915_gem_object *vanilla_object;
> > + struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
> > + unsigned int n_placements;
> > };
> >
> > static void repr_placements(char *buf, size_t size,
> > @@ -230,8 +249,7 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > struct drm_i915_private *i915 = ext_data->i915;
> > struct drm_i915_gem_memory_class_instance __user *uregions =
> > u64_to_user_ptr(args->regions);
> > - struct drm_i915_gem_object *obj = ext_data->vanilla_object;
> > - struct intel_memory_region **placements;
> > + struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
> > u32 mask;
> > int i, ret = 0;
> >
> > @@ -245,6 +263,8 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > ret = -EINVAL;
> > }
> >
> > + BUILD_BUG_ON(ARRAY_SIZE(i915->mm.regions) != ARRAY_SIZE(placements));
> > + BUILD_BUG_ON(ARRAY_SIZE(ext_data->placements) != ARRAY_SIZE(placements));
> > if (args->num_regions > ARRAY_SIZE(i915->mm.regions)) {
> > drm_dbg(&i915->drm, "num_regions is too large\n");
> > ret = -EINVAL;
> > @@ -253,21 +273,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > if (ret)
> > return ret;
> >
> > - placements = kmalloc_array(args->num_regions,
> > - sizeof(struct intel_memory_region *),
> > - GFP_KERNEL);
> > - if (!placements)
> > - return -ENOMEM;
> > -
> > mask = 0;
> > for (i = 0; i < args->num_regions; i++) {
> > struct drm_i915_gem_memory_class_instance region;
> > struct intel_memory_region *mr;
> >
> > - if (copy_from_user(&region, uregions, sizeof(region))) {
> > - ret = -EFAULT;
> > - goto out_free;
> > - }
> > + if (copy_from_user(&region, uregions, sizeof(region)))
> > + return -EFAULT;
> >
> > mr = intel_memory_region_lookup(i915,
> > region.memory_class,
> > @@ -293,14 +305,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > ++uregions;
> > }
> >
> > - if (obj->mm.placements) {
> > + if (ext_data->n_placements) {
> > ret = -EINVAL;
> > goto out_dump;
> > }
> >
> > - object_set_placements(obj, placements, args->num_regions);
> > - if (args->num_regions == 1)
> > - kfree(placements);
> > + for (i = 0; i < args->num_regions; i++)
> > + ext_data->placements[i] = placements[i];
>
> I guess here we forget to set the ext_data->n_placements, which would
> explain the CI failure.
What CI failure are you referring to?
* Re: [Intel-gfx] [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2)
2021-07-20 22:06 ` Jason Ekstrand
@ 2021-07-21 8:31 ` Matthew Auld
2021-07-21 18:22 ` Jason Ekstrand
0 siblings, 1 reply; 25+ messages in thread
From: Matthew Auld @ 2021-07-21 8:31 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: Intel Graphics Development, Matthew Auld, ML dri-devel
On Tue, 20 Jul 2021 at 23:07, Jason Ekstrand <jason@jlekstrand.net> wrote:
>
> On Mon, Jul 19, 2021 at 3:18 AM Matthew Auld
> <matthew.william.auld@gmail.com> wrote:
> >
> > On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
> > >
> > > Since we don't allow changing the set of regions after creation, we can
> > > make ext_set_placements() build up the region set directly in the
> > > create_ext and assign it to the object later. This is similar to what
> > > we did for contexts with the proto-context, only simpler because there's
> > > no funny object shuffling. This will be used in the next patch to allow
> > > us to de-duplicate a bunch of code. Also, since we know the maximum
> > > number of regions up-front, we can use a fixed-size temporary array for
> > > the regions. This simplifies memory management a bit for this new
> > > delayed approach.
> > >
> > > v2 (Matthew Auld):
> > > - Get rid of MAX_N_PLACEMENTS
> > > - Drop kfree(placements) from set_placements()
> > >
> > > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > > Cc: Matthew Auld <matthew.auld@intel.com>
> > > ---
> > > drivers/gpu/drm/i915/gem/i915_gem_create.c | 81 ++++++++++++----------
> > > 1 file changed, 45 insertions(+), 36 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > > index 51f92e4b1a69d..5766749a449c0 100644
> > > --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > > @@ -27,10 +27,13 @@ static u32 object_max_page_size(struct drm_i915_gem_object *obj)
> > > return max_page_size;
> > > }
> > >
> > > -static void object_set_placements(struct drm_i915_gem_object *obj,
> > > - struct intel_memory_region **placements,
> > > - unsigned int n_placements)
> > > +static int object_set_placements(struct drm_i915_gem_object *obj,
> > > + struct intel_memory_region **placements,
> > > + unsigned int n_placements)
> > > {
> > > + struct intel_memory_region **arr;
> > > + unsigned int i;
> > > +
> > > GEM_BUG_ON(!n_placements);
> > >
> > > /*
> > > @@ -44,9 +47,20 @@ static void object_set_placements(struct drm_i915_gem_object *obj,
> > > obj->mm.placements = &i915->mm.regions[mr->id];
> > > obj->mm.n_placements = 1;
> > > } else {
> > > - obj->mm.placements = placements;
> > > + arr = kmalloc_array(n_placements,
> > > + sizeof(struct intel_memory_region *),
> > > + GFP_KERNEL);
> > > + if (!arr)
> > > + return -ENOMEM;
> > > +
> > > + for (i = 0; i < n_placements; i++)
> > > + arr[i] = placements[i];
> > > +
> > > + obj->mm.placements = arr;
> > > obj->mm.n_placements = n_placements;
> > > }
> > > +
> > > + return 0;
> > > }
> > >
> > > static int i915_gem_publish(struct drm_i915_gem_object *obj,
> > > @@ -148,7 +162,9 @@ i915_gem_dumb_create(struct drm_file *file,
> > > return -ENOMEM;
> > >
> > > mr = intel_memory_region_by_type(to_i915(dev), mem_type);
> > > - object_set_placements(obj, &mr, 1);
> > > + ret = object_set_placements(obj, &mr, 1);
> > > + if (ret)
> > > + goto object_free;
> > >
> > > ret = i915_gem_setup(obj, args->size);
> > > if (ret)
> > > @@ -184,7 +200,9 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
> > > return -ENOMEM;
> > >
> > > mr = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM);
> > > - object_set_placements(obj, &mr, 1);
> > > + ret = object_set_placements(obj, &mr, 1);
> > > + if (ret)
> > > + goto object_free;
> > >
> > > ret = i915_gem_setup(obj, args->size);
> > > if (ret)
> > > @@ -199,7 +217,8 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
> > >
> > > struct create_ext {
> > > struct drm_i915_private *i915;
> > > - struct drm_i915_gem_object *vanilla_object;
> > > + struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
> > > + unsigned int n_placements;
> > > };
> > >
> > > static void repr_placements(char *buf, size_t size,
> > > @@ -230,8 +249,7 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > > struct drm_i915_private *i915 = ext_data->i915;
> > > struct drm_i915_gem_memory_class_instance __user *uregions =
> > > u64_to_user_ptr(args->regions);
> > > - struct drm_i915_gem_object *obj = ext_data->vanilla_object;
> > > - struct intel_memory_region **placements;
> > > + struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
> > > u32 mask;
> > > int i, ret = 0;
> > >
> > > @@ -245,6 +263,8 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > > ret = -EINVAL;
> > > }
> > >
> > > + BUILD_BUG_ON(ARRAY_SIZE(i915->mm.regions) != ARRAY_SIZE(placements));
> > > + BUILD_BUG_ON(ARRAY_SIZE(ext_data->placements) != ARRAY_SIZE(placements));
> > > if (args->num_regions > ARRAY_SIZE(i915->mm.regions)) {
> > > drm_dbg(&i915->drm, "num_regions is too large\n");
> > > ret = -EINVAL;
> > > @@ -253,21 +273,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > > if (ret)
> > > return ret;
> > >
> > > - placements = kmalloc_array(args->num_regions,
> > > - sizeof(struct intel_memory_region *),
> > > - GFP_KERNEL);
> > > - if (!placements)
> > > - return -ENOMEM;
> > > -
> > > mask = 0;
> > > for (i = 0; i < args->num_regions; i++) {
> > > struct drm_i915_gem_memory_class_instance region;
> > > struct intel_memory_region *mr;
> > >
> > > - if (copy_from_user(&region, uregions, sizeof(region))) {
> > > - ret = -EFAULT;
> > > - goto out_free;
> > > - }
> > > + if (copy_from_user(&region, uregions, sizeof(region)))
> > > + return -EFAULT;
> > >
> > > mr = intel_memory_region_lookup(i915,
> > > region.memory_class,
> > > @@ -293,14 +305,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > > ++uregions;
> > > }
> > >
> > > - if (obj->mm.placements) {
> > > + if (ext_data->n_placements) {
> > > ret = -EINVAL;
> > > goto out_dump;
> > > }
> > >
> > > - object_set_placements(obj, placements, args->num_regions);
> > > - if (args->num_regions == 1)
> > > - kfree(placements);
> > > + for (i = 0; i < args->num_regions; i++)
> > > + ext_data->placements[i] = placements[i];
> >
> > I guess here we forget to set the ext_data->n_placements, which would
> > explain the CI failure.
>
> What CI failure are you referring to?
Pre-merge results for this series:
igt@gem_create@create-ext-placement-sanity-check:
shard-skl: PASS -> FAIL +1 similar issue
shard-apl: NOTRUN -> FAIL
shard-glk: PASS -> FAIL
shard-iclb: PASS -> FAIL
shard-kbl: PASS -> FAIL
shard-tglb: NOTRUN -> FAIL
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [Intel-gfx] [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2)
2021-07-21 8:31 ` Matthew Auld
@ 2021-07-21 18:22 ` Jason Ekstrand
0 siblings, 0 replies; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-21 18:22 UTC (permalink / raw)
To: Matthew Auld; +Cc: Intel Graphics Development, Matthew Auld, ML dri-devel
On Wed, Jul 21, 2021 at 3:32 AM Matthew Auld
<matthew.william.auld@gmail.com> wrote:
>
> On Tue, 20 Jul 2021 at 23:07, Jason Ekstrand <jason@jlekstrand.net> wrote:
> >
> > On Mon, Jul 19, 2021 at 3:18 AM Matthew Auld
> > <matthew.william.auld@gmail.com> wrote:
> > >
> > > On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
> > > >
> > > > Since we don't allow changing the set of regions after creation, we can
> > > > make ext_set_placements() build up the region set directly in the
> > > > create_ext and assign it to the object later. This is similar to what
> > > > we did for contexts with the proto-context only simpler because there's
> > > > no funny object shuffling. This will be used in the next patch to allow
> > > > us to de-duplicate a bunch of code. Also, since we know the maximum
> > > > number of regions up-front, we can use a fixed-size temporary array for
> > > > the regions. This simplifies memory management a bit for this new
> > > > delayed approach.
> > > >
> > > > v2 (Matthew Auld):
> > > > - Get rid of MAX_N_PLACEMENTS
> > > > - Drop kfree(placements) from set_placements()
> > > >
> > > > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > > > Cc: Matthew Auld <matthew.auld@intel.com>
> > > > ---
> > > > drivers/gpu/drm/i915/gem/i915_gem_create.c | 81 ++++++++++++----------
> > > > 1 file changed, 45 insertions(+), 36 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > > > index 51f92e4b1a69d..5766749a449c0 100644
> > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > > > @@ -27,10 +27,13 @@ static u32 object_max_page_size(struct drm_i915_gem_object *obj)
> > > > return max_page_size;
> > > > }
> > > >
> > > > -static void object_set_placements(struct drm_i915_gem_object *obj,
> > > > - struct intel_memory_region **placements,
> > > > - unsigned int n_placements)
> > > > +static int object_set_placements(struct drm_i915_gem_object *obj,
> > > > + struct intel_memory_region **placements,
> > > > + unsigned int n_placements)
> > > > {
> > > > + struct intel_memory_region **arr;
> > > > + unsigned int i;
> > > > +
> > > > GEM_BUG_ON(!n_placements);
> > > >
> > > > /*
> > > > @@ -44,9 +47,20 @@ static void object_set_placements(struct drm_i915_gem_object *obj,
> > > > obj->mm.placements = &i915->mm.regions[mr->id];
> > > > obj->mm.n_placements = 1;
> > > > } else {
> > > > - obj->mm.placements = placements;
> > > > + arr = kmalloc_array(n_placements,
> > > > + sizeof(struct intel_memory_region *),
> > > > + GFP_KERNEL);
> > > > + if (!arr)
> > > > + return -ENOMEM;
> > > > +
> > > > + for (i = 0; i < n_placements; i++)
> > > > + arr[i] = placements[i];
> > > > +
> > > > + obj->mm.placements = arr;
> > > > obj->mm.n_placements = n_placements;
> > > > }
> > > > +
> > > > + return 0;
> > > > }
> > > >
> > > > static int i915_gem_publish(struct drm_i915_gem_object *obj,
> > > > @@ -148,7 +162,9 @@ i915_gem_dumb_create(struct drm_file *file,
> > > > return -ENOMEM;
> > > >
> > > > mr = intel_memory_region_by_type(to_i915(dev), mem_type);
> > > > - object_set_placements(obj, &mr, 1);
> > > > + ret = object_set_placements(obj, &mr, 1);
> > > > + if (ret)
> > > > + goto object_free;
> > > >
> > > > ret = i915_gem_setup(obj, args->size);
> > > > if (ret)
> > > > @@ -184,7 +200,9 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
> > > > return -ENOMEM;
> > > >
> > > > mr = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM);
> > > > - object_set_placements(obj, &mr, 1);
> > > > + ret = object_set_placements(obj, &mr, 1);
> > > > + if (ret)
> > > > + goto object_free;
> > > >
> > > > ret = i915_gem_setup(obj, args->size);
> > > > if (ret)
> > > > @@ -199,7 +217,8 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
> > > >
> > > > struct create_ext {
> > > > struct drm_i915_private *i915;
> > > > - struct drm_i915_gem_object *vanilla_object;
> > > > + struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
> > > > + unsigned int n_placements;
> > > > };
> > > >
> > > > static void repr_placements(char *buf, size_t size,
> > > > @@ -230,8 +249,7 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > > > struct drm_i915_private *i915 = ext_data->i915;
> > > > struct drm_i915_gem_memory_class_instance __user *uregions =
> > > > u64_to_user_ptr(args->regions);
> > > > - struct drm_i915_gem_object *obj = ext_data->vanilla_object;
> > > > - struct intel_memory_region **placements;
> > > > + struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
> > > > u32 mask;
> > > > int i, ret = 0;
> > > >
> > > > @@ -245,6 +263,8 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > > > ret = -EINVAL;
> > > > }
> > > >
> > > > + BUILD_BUG_ON(ARRAY_SIZE(i915->mm.regions) != ARRAY_SIZE(placements));
> > > > + BUILD_BUG_ON(ARRAY_SIZE(ext_data->placements) != ARRAY_SIZE(placements));
> > > > if (args->num_regions > ARRAY_SIZE(i915->mm.regions)) {
> > > > drm_dbg(&i915->drm, "num_regions is too large\n");
> > > > ret = -EINVAL;
> > > > @@ -253,21 +273,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > > > if (ret)
> > > > return ret;
> > > >
> > > > - placements = kmalloc_array(args->num_regions,
> > > > - sizeof(struct intel_memory_region *),
> > > > - GFP_KERNEL);
> > > > - if (!placements)
> > > > - return -ENOMEM;
> > > > -
> > > > mask = 0;
> > > > for (i = 0; i < args->num_regions; i++) {
> > > > struct drm_i915_gem_memory_class_instance region;
> > > > struct intel_memory_region *mr;
> > > >
> > > > - if (copy_from_user(&region, uregions, sizeof(region))) {
> > > > - ret = -EFAULT;
> > > > - goto out_free;
> > > > - }
> > > > + if (copy_from_user(&region, uregions, sizeof(region)))
> > > > + return -EFAULT;
> > > >
> > > > mr = intel_memory_region_lookup(i915,
> > > > region.memory_class,
> > > > @@ -293,14 +305,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
> > > > ++uregions;
> > > > }
> > > >
> > > > - if (obj->mm.placements) {
> > > > + if (ext_data->n_placements) {
> > > > ret = -EINVAL;
> > > > goto out_dump;
> > > > }
> > > >
> > > > - object_set_placements(obj, placements, args->num_regions);
> > > > - if (args->num_regions == 1)
> > > > - kfree(placements);
> > > > + for (i = 0; i < args->num_regions; i++)
> > > > + ext_data->placements[i] = placements[i];
> > >
> > > I guess here we forget to set the ext_data->n_placements, which would
> > > explain the CI failure.
> >
> > What CI failure are you referring to?
>
> Pre-merge results for this series:
>
> igt@gem_create@create-ext-placement-sanity-check:
>
> shard-skl: PASS -> FAIL +1 similar issue
> shard-apl: NOTRUN -> FAIL
> shard-glk: PASS -> FAIL
> shard-iclb: PASS -> FAIL
> shard-kbl: PASS -> FAIL
> shard-tglb: NOTRUN -> FAIL
Yup. That was it. Thanks! Not sure why I didn't notice those fails....
--Jason
* [Intel-gfx] [PATCH 3/7] drm/i915/gem: Call i915_gem_flush_free_objects() in i915_gem_dumb_create()
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
2021-07-16 14:14 ` [Intel-gfx] [PATCH 1/7] drm/i915/gem: Check object_can_migrate from object_migrate Jason Ekstrand
2021-07-16 14:14 ` [Intel-gfx] [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2) Jason Ekstrand
@ 2021-07-16 14:14 ` Jason Ekstrand
2021-07-16 19:19 ` Matthew Auld
2021-07-16 14:14 ` [Intel-gfx] [PATCH 4/7] drm/i915/gem: Unify user object creation (v2) Jason Ekstrand
` (7 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-16 14:14 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Matthew Auld
This doesn't really fix anything serious since the chances of a client
creating and destroying a mass of dumb BOs are pretty low. However, it
is called by the other two create IOCTLs to garbage collect old objects.
Call it here too for consistency.
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/gem/i915_gem_create.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
index 5766749a449c0..1b370914587c0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
@@ -151,6 +151,8 @@ i915_gem_dumb_create(struct drm_file *file,
if (args->pitch < args->width)
return -EINVAL;
+ i915_gem_flush_free_objects(i915);
+
args->size = mul_u32_u32(args->pitch, args->height);
mem_type = INTEL_MEMORY_SYSTEM;
--
2.31.1
* [Intel-gfx] [PATCH 4/7] drm/i915/gem: Unify user object creation (v2)
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
` (2 preceding siblings ...)
2021-07-16 14:14 ` [Intel-gfx] [PATCH 3/7] drm/i915/gem: Call i915_gem_flush_free_objects() in i915_gem_dumb_create() Jason Ekstrand
@ 2021-07-16 14:14 ` Jason Ekstrand
2021-07-16 19:21 ` Matthew Auld
2021-07-16 14:14 ` [Intel-gfx] [PATCH 5/7] drm/i915/gem/ttm: Respect the objection region in placement_from_obj Jason Ekstrand
` (6 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-16 14:14 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Matthew Auld
Instead of hand-rolling the same three calls in each function, pull them
into an i915_gem_object_create_user helper. Apart from re-ordering of
the placements array ENOMEM check, there should be no functional change.
v2 (Matthew Auld):
- Add the call to i915_gem_flush_free_objects() from
i915_gem_dumb_create() in a separate patch
- Move i915_gem_object_alloc() below the simple error checks
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/gem/i915_gem_create.c | 108 ++++++++-------------
1 file changed, 43 insertions(+), 65 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
index 1b370914587c0..039e4f3b39c79 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
@@ -11,13 +11,14 @@
#include "i915_trace.h"
#include "i915_user_extensions.h"
-static u32 object_max_page_size(struct drm_i915_gem_object *obj)
+static u32 object_max_page_size(struct intel_memory_region **placements,
+ unsigned int n_placements)
{
u32 max_page_size = 0;
int i;
- for (i = 0; i < obj->mm.n_placements; i++) {
- struct intel_memory_region *mr = obj->mm.placements[i];
+ for (i = 0; i < n_placements; i++) {
+ struct intel_memory_region *mr = placements[i];
GEM_BUG_ON(!is_power_of_2(mr->min_page_size));
max_page_size = max_t(u32, max_page_size, mr->min_page_size);
@@ -81,22 +82,35 @@ static int i915_gem_publish(struct drm_i915_gem_object *obj,
return 0;
}
-static int
-i915_gem_setup(struct drm_i915_gem_object *obj, u64 size)
+static struct drm_i915_gem_object *
+i915_gem_object_create_user(struct drm_i915_private *i915, u64 size,
+ struct intel_memory_region **placements,
+ unsigned int n_placements)
{
- struct intel_memory_region *mr = obj->mm.placements[0];
+ struct intel_memory_region *mr = placements[0];
+ struct drm_i915_gem_object *obj;
unsigned int flags;
int ret;
- size = round_up(size, object_max_page_size(obj));
+ i915_gem_flush_free_objects(i915);
+
+ size = round_up(size, object_max_page_size(placements, n_placements));
if (size == 0)
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
/* For most of the ABI (e.g. mmap) we think in system pages */
GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
if (i915_gem_object_size_2big(size))
- return -E2BIG;
+ return ERR_PTR(-E2BIG);
+
+ obj = i915_gem_object_alloc();
+ if (!obj)
+ return ERR_PTR(-ENOMEM);
+
+ ret = object_set_placements(obj, placements, n_placements);
+ if (ret)
+ goto object_free;
/*
* I915_BO_ALLOC_USER will make sure the object is cleared before
@@ -106,12 +120,18 @@ i915_gem_setup(struct drm_i915_gem_object *obj, u64 size)
ret = mr->ops->init_object(mr, obj, size, 0, flags);
if (ret)
- return ret;
+ goto object_free;
GEM_BUG_ON(size != obj->base.size);
trace_i915_gem_object_create(obj);
- return 0;
+ return obj;
+
+object_free:
+ if (obj->mm.n_placements > 1)
+ kfree(obj->mm.placements);
+ i915_gem_object_free(obj);
+ return ERR_PTR(ret);
}
int
@@ -124,7 +144,6 @@ i915_gem_dumb_create(struct drm_file *file,
enum intel_memory_type mem_type;
int cpp = DIV_ROUND_UP(args->bpp, 8);
u32 format;
- int ret;
switch (cpp) {
case 1:
@@ -151,32 +170,19 @@ i915_gem_dumb_create(struct drm_file *file,
if (args->pitch < args->width)
return -EINVAL;
- i915_gem_flush_free_objects(i915);
-
args->size = mul_u32_u32(args->pitch, args->height);
mem_type = INTEL_MEMORY_SYSTEM;
if (HAS_LMEM(to_i915(dev)))
mem_type = INTEL_MEMORY_LOCAL;
- obj = i915_gem_object_alloc();
- if (!obj)
- return -ENOMEM;
-
mr = intel_memory_region_by_type(to_i915(dev), mem_type);
- ret = object_set_placements(obj, &mr, 1);
- if (ret)
- goto object_free;
- ret = i915_gem_setup(obj, args->size);
- if (ret)
- goto object_free;
+ obj = i915_gem_object_create_user(to_i915(dev), args->size, &mr, 1);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
return i915_gem_publish(obj, file, &args->size, &args->handle);
-
-object_free:
- i915_gem_object_free(obj);
- return ret;
}
/**
@@ -193,28 +199,14 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_i915_gem_create *args = data;
struct drm_i915_gem_object *obj;
struct intel_memory_region *mr;
- int ret;
-
- i915_gem_flush_free_objects(i915);
-
- obj = i915_gem_object_alloc();
- if (!obj)
- return -ENOMEM;
mr = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM);
- ret = object_set_placements(obj, &mr, 1);
- if (ret)
- goto object_free;
- ret = i915_gem_setup(obj, args->size);
- if (ret)
- goto object_free;
+ obj = i915_gem_object_create_user(i915, args->size, &mr, 1);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
return i915_gem_publish(obj, file, &args->size, &args->handle);
-
-object_free:
- i915_gem_object_free(obj);
- return ret;
}
struct create_ext {
@@ -375,38 +367,24 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data,
if (args->flags)
return -EINVAL;
- i915_gem_flush_free_objects(i915);
-
- obj = i915_gem_object_alloc();
- if (!obj)
- return -ENOMEM;
-
ret = i915_user_extensions(u64_to_user_ptr(args->extensions),
create_extensions,
ARRAY_SIZE(create_extensions),
&ext_data);
if (ret)
- goto object_free;
+ return ret;
if (!ext_data.n_placements) {
ext_data.placements[0] =
intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM);
ext_data.n_placements = 1;
}
- ret = object_set_placements(obj, ext_data.placements,
- ext_data.n_placements);
- if (ret)
- goto object_free;
- ret = i915_gem_setup(obj, args->size);
- if (ret)
- goto object_free;
+ obj = i915_gem_object_create_user(i915, args->size,
+ ext_data.placements,
+ ext_data.n_placements);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
return i915_gem_publish(obj, file, &args->size, &args->handle);
-
-object_free:
- if (obj->mm.n_placements > 1)
- kfree(obj->mm.placements);
- i915_gem_object_free(obj);
- return ret;
}
--
2.31.1
* Re: [Intel-gfx] [PATCH 4/7] drm/i915/gem: Unify user object creation (v2)
2021-07-16 14:14 ` [Intel-gfx] [PATCH 4/7] drm/i915/gem: Unify user object creation (v2) Jason Ekstrand
@ 2021-07-16 19:21 ` Matthew Auld
2021-07-19 8:12 ` Matthew Auld
0 siblings, 1 reply; 25+ messages in thread
From: Matthew Auld @ 2021-07-16 19:21 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: Intel Graphics Development, Matthew Auld, ML dri-devel
On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
>
> Instead of hand-rolling the same three calls in each function, pull them
> into an i915_gem_object_create_user helper. Apart from re-ordering of
> the placements array ENOMEM check, there should be no functional change.
>
> v2 (Matthew Auld):
> - Add the call to i915_gem_flush_free_objects() from
> i915_gem_dumb_create() in a separate patch
> - Move i915_gem_object_alloc() below the simple error checks
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Matthew Auld <matthew.auld@intel.com>
If CI is happy,
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
* Re: [Intel-gfx] [PATCH 4/7] drm/i915/gem: Unify user object creation (v2)
2021-07-16 19:21 ` Matthew Auld
@ 2021-07-19 8:12 ` Matthew Auld
0 siblings, 0 replies; 25+ messages in thread
From: Matthew Auld @ 2021-07-19 8:12 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: Intel Graphics Development, Matthew Auld, ML dri-devel
On Fri, 16 Jul 2021 at 20:21, Matthew Auld
<matthew.william.auld@gmail.com> wrote:
>
> On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
> >
> > Instead of hand-rolling the same three calls in each function, pull them
> > into an i915_gem_object_create_user helper. Apart from re-ordering of
> > the placements array ENOMEM check, there should be no functional change.
> >
> > v2 (Matthew Auld):
> > - Add the call to i915_gem_flush_free_objects() from
> > i915_gem_dumb_create() in a separate patch
> > - Move i915_gem_object_alloc() below the simple error checks
> >
> > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > Cc: Matthew Auld <matthew.auld@intel.com>
>
> If CI is happy,
> Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Might be good to also update the mman selftests to use this new helper.
* [Intel-gfx] [PATCH 5/7] drm/i915/gem/ttm: Respect the objection region in placement_from_obj
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
` (3 preceding siblings ...)
2021-07-16 14:14 ` [Intel-gfx] [PATCH 4/7] drm/i915/gem: Unify user object creation (v2) Jason Ekstrand
@ 2021-07-16 14:14 ` Jason Ekstrand
2021-07-16 19:23 ` Matthew Auld
2021-07-16 14:14 ` [Intel-gfx] [PATCH 6/7] drm/i915/gem: Correct the locking and pin pattern for dma-buf (v6) Jason Ekstrand
` (5 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-16 14:14 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Thomas Hellström, Matthew Auld
Whenever we had a user object (n_placements > 0), we were ignoring
obj->mm.region and always putting obj->placements[0] as the requested
region. For LMEM+SMEM objects, this was causing them to get shoved into
LMEM on every i915_ttm_get_pages() even when SMEM was requested by, say,
i915_gem_object_migrate().
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 6589411396d3f..8eeb73c7c401c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -150,8 +150,7 @@ i915_ttm_placement_from_obj(const struct drm_i915_gem_object *obj,
unsigned int i;
placement->num_placement = 1;
- i915_ttm_place_from_region(num_allowed ? obj->mm.placements[0] :
- obj->mm.region, requested, flags);
+ i915_ttm_place_from_region(obj->mm.region, requested, flags);
/* Cache this on object? */
placement->num_busy_placement = num_allowed;
--
2.31.1
* Re: [Intel-gfx] [PATCH 5/7] drm/i915/gem/ttm: Respect the objection region in placement_from_obj
2021-07-16 14:14 ` [Intel-gfx] [PATCH 5/7] drm/i915/gem/ttm: Respect the objection region in placement_from_obj Jason Ekstrand
@ 2021-07-16 19:23 ` Matthew Auld
0 siblings, 0 replies; 25+ messages in thread
From: Matthew Auld @ 2021-07-16 19:23 UTC (permalink / raw)
To: Jason Ekstrand
Cc: Thomas Hellström, Intel Graphics Development, Matthew Auld,
ML dri-devel
On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
>
> Whenever we had a user object (n_placements > 0), we were ignoring
> obj->mm.region and always putting obj->placements[0] as the requested
> region. For LMEM+SMEM objects, this was causing them to get shoved into
> LMEM on every i915_ttm_get_pages() even when SMEM was requested by, say,
> i915_gem_object_migrate().
>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
AFAIK makes sense, just a question of properly understanding that
weird migration issue first.
Assuming CI is happy,
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
> ---
> drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> index 6589411396d3f..8eeb73c7c401c 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> @@ -150,8 +150,7 @@ i915_ttm_placement_from_obj(const struct drm_i915_gem_object *obj,
> unsigned int i;
>
> placement->num_placement = 1;
> - i915_ttm_place_from_region(num_allowed ? obj->mm.placements[0] :
> - obj->mm.region, requested, flags);
> + i915_ttm_place_from_region(obj->mm.region, requested, flags);
>
> /* Cache this on object? */
> placement->num_busy_placement = num_allowed;
> --
> 2.31.1
>
* [Intel-gfx] [PATCH 6/7] drm/i915/gem: Correct the locking and pin pattern for dma-buf (v6)
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
` (4 preceding siblings ...)
2021-07-16 14:14 ` [Intel-gfx] [PATCH 5/7] drm/i915/gem/ttm: Respect the objection region in placement_from_obj Jason Ekstrand
@ 2021-07-16 14:14 ` Jason Ekstrand
2021-07-20 9:07 ` Matthew Auld
2021-07-16 14:14 ` [Intel-gfx] [PATCH 7/7] drm/i915/gem: Migrate to system at dma-buf attach time (v6) Jason Ekstrand
` (4 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-16 14:14 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Thomas Hellström
From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
If our exported dma-bufs are imported by another instance of our driver,
that instance will typically have the imported dma-bufs locked during
dma_buf_map_attachment(). But the exporter also locks the same reservation
object in the map_dma_buf() callback, which leads to recursive locking.
So taking the lock inside _pin_pages_unlocked() is incorrect.
Additionally, the current pinning code path is contrary to the defined
way that pinning should occur.
Remove the explicit pin/unpin from the map/unmap functions and move them
to the attach/detach allowing correct locking to occur, and to match
the static dma-buf drm_prime pattern.
Add a live selftest to exercise both dynamic and non-dynamic
exports.
v2:
- Extend the selftest with a fake dynamic importer.
- Provide real pin and unpin callbacks to not abuse the interface.
v3: (ruhl)
- Remove the dynamic export support and move the pinning into the
attach/detach path.
v4: (ruhl)
- Put pages does not need to assert on the dma-resv
v5: (jason)
- Lock around dma_buf_unmap_attachment() when emulating a dynamic
importer in the subtests.
- Use pin_pages_unlocked
v6: (jason)
- Use dma_buf_attach instead of dma_buf_attach_dynamic in the selftests
Reported-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
---
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 43 ++++++--
.../drm/i915/gem/selftests/i915_gem_dmabuf.c | 103 +++++++++++++++++-
2 files changed, 132 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 616c3a2f1baf0..9a655f69a0671 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -12,6 +12,8 @@
#include "i915_gem_object.h"
#include "i915_scatterlist.h"
+I915_SELFTEST_DECLARE(static bool force_different_devices;)
+
static struct drm_i915_gem_object *dma_buf_to_obj(struct dma_buf *buf)
{
return to_intel_bo(buf->priv);
@@ -25,15 +27,11 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
struct scatterlist *src, *dst;
int ret, i;
- ret = i915_gem_object_pin_pages_unlocked(obj);
- if (ret)
- goto err;
-
/* Copy sg so that we make an independent mapping */
st = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
if (st == NULL) {
ret = -ENOMEM;
- goto err_unpin_pages;
+ goto err;
}
ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
@@ -58,8 +56,6 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
sg_free_table(st);
err_free:
kfree(st);
-err_unpin_pages:
- i915_gem_object_unpin_pages(obj);
err:
return ERR_PTR(ret);
}
@@ -68,13 +64,9 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
struct sg_table *sg,
enum dma_data_direction dir)
{
- struct drm_i915_gem_object *obj = dma_buf_to_obj(attachment->dmabuf);
-
dma_unmap_sgtable(attachment->dev, sg, dir, DMA_ATTR_SKIP_CPU_SYNC);
sg_free_table(sg);
kfree(sg);
-
- i915_gem_object_unpin_pages(obj);
}
static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
@@ -168,7 +160,31 @@ static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direct
return err;
}
+/**
+ * i915_gem_dmabuf_attach - Do any extra attach work necessary
+ * @dmabuf: imported dma-buf
+ * @attach: new attach to do work on
+ *
+ */
+static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attach)
+{
+ struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
+
+ return i915_gem_object_pin_pages_unlocked(obj);
+}
+
+static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attach)
+{
+ struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
+
+ i915_gem_object_unpin_pages(obj);
+}
+
static const struct dma_buf_ops i915_dmabuf_ops = {
+ .attach = i915_gem_dmabuf_attach,
+ .detach = i915_gem_dmabuf_detach,
.map_dma_buf = i915_gem_map_dma_buf,
.unmap_dma_buf = i915_gem_unmap_dma_buf,
.release = drm_gem_dmabuf_release,
@@ -204,6 +220,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
struct sg_table *pages;
unsigned int sg_page_sizes;
+ assert_object_held(obj);
+
pages = dma_buf_map_attachment(obj->base.import_attach,
DMA_BIDIRECTIONAL);
if (IS_ERR(pages))
@@ -241,7 +259,8 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
if (dma_buf->ops == &i915_dmabuf_ops) {
obj = dma_buf_to_obj(dma_buf);
/* is it from our device? */
- if (obj->base.dev == dev) {
+ if (obj->base.dev == dev &&
+ !I915_SELFTEST_ONLY(force_different_devices)) {
/*
* Importing dmabuf exported from out own gem increases
* refcount on gem itself instead of f_count of dmabuf.
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index dd74bc09ec88d..4451bbb4917e4 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -35,7 +35,7 @@ static int igt_dmabuf_export(void *arg)
static int igt_dmabuf_import_self(void *arg)
{
struct drm_i915_private *i915 = arg;
- struct drm_i915_gem_object *obj;
+ struct drm_i915_gem_object *obj, *import_obj;
struct drm_gem_object *import;
struct dma_buf *dmabuf;
int err;
@@ -65,14 +65,112 @@ static int igt_dmabuf_import_self(void *arg)
err = -EINVAL;
goto out_import;
}
+ import_obj = to_intel_bo(import);
+
+ i915_gem_object_lock(import_obj, NULL);
+ err = ____i915_gem_object_get_pages(import_obj);
+ i915_gem_object_unlock(import_obj);
+ if (err) {
+ pr_err("Same object dma-buf get_pages failed!\n");
+ goto out_import;
+ }
err = 0;
out_import:
- i915_gem_object_put(to_intel_bo(import));
+ i915_gem_object_put(import_obj);
+out_dmabuf:
+ dma_buf_put(dmabuf);
+out:
+ i915_gem_object_put(obj);
+ return err;
+}
+
+static int igt_dmabuf_import_same_driver(void *arg)
+{
+ struct drm_i915_private *i915 = arg;
+ struct drm_i915_gem_object *obj, *import_obj;
+ struct drm_gem_object *import;
+ struct dma_buf *dmabuf;
+ struct dma_buf_attachment *import_attach;
+ struct sg_table *st;
+ long timeout;
+ int err;
+
+ force_different_devices = true;
+ obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
+ if (IS_ERR(obj))
+ goto out_ret;
+
+ dmabuf = i915_gem_prime_export(&obj->base, 0);
+ if (IS_ERR(dmabuf)) {
+ pr_err("i915_gem_prime_export failed with err=%d\n",
+ (int)PTR_ERR(dmabuf));
+ err = PTR_ERR(dmabuf);
+ goto out;
+ }
+
+ import = i915_gem_prime_import(&i915->drm, dmabuf);
+ if (IS_ERR(import)) {
+ pr_err("i915_gem_prime_import failed with err=%d\n",
+ (int)PTR_ERR(import));
+ err = PTR_ERR(import);
+ goto out_dmabuf;
+ }
+
+ if (import == &obj->base) {
+ pr_err("i915_gem_prime_import reused gem object!\n");
+ err = -EINVAL;
+ goto out_import;
+ }
+
+ import_obj = to_intel_bo(import);
+
+ i915_gem_object_lock(import_obj, NULL);
+ err = ____i915_gem_object_get_pages(import_obj);
+ if (err) {
+ pr_err("Different objects dma-buf get_pages failed!\n");
+ i915_gem_object_unlock(import_obj);
+ goto out_import;
+ }
+
+ /*
+ * If the exported object is not in system memory, something
+ * weird is going on. TODO: once p2p is supported, this will no
+ * longer be considered weird.
+ */
+ if (obj->mm.region != i915->mm.regions[INTEL_REGION_SMEM]) {
+ pr_err("Exported dma-buf is not in system memory\n");
+ err = -EINVAL;
+ }
+
+ i915_gem_object_unlock(import_obj);
+
+ /* Now try to fake an importer */
+ import_attach = dma_buf_attach(dmabuf, obj->base.dev->dev);
+ if (IS_ERR(import_attach))
+ goto out_import;
+
+ st = dma_buf_map_attachment(import_attach, DMA_BIDIRECTIONAL);
+ if (IS_ERR(st))
+ goto out_detach;
+
+ timeout = dma_resv_wait_timeout(dmabuf->resv, false, true, 5 * HZ);
+ if (!timeout) {
+ pr_err("dmabuf wait for exclusive fence timed out.\n");
+ timeout = -ETIME;
+ }
+ err = timeout > 0 ? 0 : timeout;
+ dma_buf_unmap_attachment(import_attach, st, DMA_BIDIRECTIONAL);
+out_detach:
+ dma_buf_detach(dmabuf, import_attach);
+out_import:
+ i915_gem_object_put(import_obj);
out_dmabuf:
dma_buf_put(dmabuf);
out:
i915_gem_object_put(obj);
+out_ret:
+ force_different_devices = false;
return err;
}
@@ -286,6 +384,7 @@ int i915_gem_dmabuf_live_selftests(struct drm_i915_private *i915)
{
static const struct i915_subtest tests[] = {
SUBTEST(igt_dmabuf_export),
+ SUBTEST(igt_dmabuf_import_same_driver),
};
return i915_subtests(tests, i915);
--
2.31.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* Re: [Intel-gfx] [PATCH 6/7] drm/i915/gem: Correct the locking and pin pattern for dma-buf (v6)
2021-07-16 14:14 ` [Intel-gfx] [PATCH 6/7] drm/i915/gem: Correct the locking and pin pattern for dma-buf (v6) Jason Ekstrand
@ 2021-07-20 9:07 ` Matthew Auld
2021-07-20 21:55 ` Jason Ekstrand
0 siblings, 1 reply; 25+ messages in thread
From: Matthew Auld @ 2021-07-20 9:07 UTC (permalink / raw)
To: Jason Ekstrand
Cc: Thomas Hellström, Intel Graphics Development, ML dri-devel
On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
>
> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>
> If our exported dma-bufs are imported by another instance of our driver,
> that instance will typically have the imported dma-bufs locked during
> dma_buf_map_attachment(). But the exporter also locks the same reservation
> object in the map_dma_buf() callback, which leads to recursive locking.
>
> So taking the lock inside _pin_pages_unlocked() is incorrect.
>
> Additionally, the current pinning code path is contrary to the defined
> way that pinning should occur.
>
> Remove the explicit pin/unpin from the map/umap functions and move them
> to the attach/detach allowing correct locking to occur, and to match
> the static dma-buf drm_prime pattern.
>
> Add a live selftest to exercise both dynamic and non-dynamic
> exports.
>
> v2:
> - Extend the selftest with a fake dynamic importer.
> - Provide real pin and unpin callbacks to not abuse the interface.
> v3: (ruhl)
> - Remove the dynamic export support and move the pinning into the
> attach/detach path.
> v4: (ruhl)
> - Put pages does not need to assert on the dma-resv
> v5: (jason)
> - Lock around dma_buf_unmap_attachment() when emulating a dynamic
> importer in the subtests.
> - Use pin_pages_unlocked
> v6: (jason)
> - Use dma_buf_attach instead of dma_buf_attach_dynamic in the selftests
>
> Reported-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
> ---
> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 43 ++++++--
> .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 103 +++++++++++++++++-
> 2 files changed, 132 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> index 616c3a2f1baf0..9a655f69a0671 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> @@ -12,6 +12,8 @@
> #include "i915_gem_object.h"
> #include "i915_scatterlist.h"
>
> +I915_SELFTEST_DECLARE(static bool force_different_devices;)
> +
> static struct drm_i915_gem_object *dma_buf_to_obj(struct dma_buf *buf)
> {
> return to_intel_bo(buf->priv);
> @@ -25,15 +27,11 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
> struct scatterlist *src, *dst;
> int ret, i;
>
> - ret = i915_gem_object_pin_pages_unlocked(obj);
> - if (ret)
> - goto err;
> -
> /* Copy sg so that we make an independent mapping */
> st = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
> if (st == NULL) {
> ret = -ENOMEM;
> - goto err_unpin_pages;
> + goto err;
> }
>
> ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
> @@ -58,8 +56,6 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
> sg_free_table(st);
> err_free:
> kfree(st);
> -err_unpin_pages:
> - i915_gem_object_unpin_pages(obj);
> err:
> return ERR_PTR(ret);
> }
> @@ -68,13 +64,9 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
> struct sg_table *sg,
> enum dma_data_direction dir)
> {
> - struct drm_i915_gem_object *obj = dma_buf_to_obj(attachment->dmabuf);
> -
> dma_unmap_sgtable(attachment->dev, sg, dir, DMA_ATTR_SKIP_CPU_SYNC);
> sg_free_table(sg);
> kfree(sg);
> -
> - i915_gem_object_unpin_pages(obj);
> }
>
> static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
> @@ -168,7 +160,31 @@ static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direct
> return err;
> }
>
> +/**
> + * i915_gem_dmabuf_attach - Do any extra attach work necessary
> + * @dmabuf: imported dma-buf
> + * @attach: new attach to do work on
> + *
> + */
> +static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
> + struct dma_buf_attachment *attach)
> +{
> + struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> +
> + return i915_gem_object_pin_pages_unlocked(obj);
> +}
> +
> +static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
> + struct dma_buf_attachment *attach)
> +{
> + struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> +
> + i915_gem_object_unpin_pages(obj);
> +}
> +
We don't normally add kernel-doc for static functions? Otherwise
dmabuf_detach() needs matching kernel-doc.
<snip>
> +
> +static int igt_dmabuf_import_same_driver(void *arg)
> +{
> + struct drm_i915_private *i915 = arg;
> + struct drm_i915_gem_object *obj, *import_obj;
> + struct drm_gem_object *import;
> + struct dma_buf *dmabuf;
> + struct dma_buf_attachment *import_attach;
> + struct sg_table *st;
> + long timeout;
> + int err;
> +
> + force_different_devices = true;
> + obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> + if (IS_ERR(obj))
err = PTR_ERR(obj)
<snip>
> + /* Now try a fake an importer */
> + import_attach = dma_buf_attach(dmabuf, obj->base.dev->dev);
> + if (IS_ERR(import_attach))
> + goto out_import;
> +
> + st = dma_buf_map_attachment(import_attach, DMA_BIDIRECTIONAL);
> + if (IS_ERR(st))
> + goto out_detach;
For these two maybe missing err = ?
* Re: [Intel-gfx] [PATCH 6/7] drm/i915/gem: Correct the locking and pin pattern for dma-buf (v6)
2021-07-20 9:07 ` Matthew Auld
@ 2021-07-20 21:55 ` Jason Ekstrand
0 siblings, 0 replies; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-20 21:55 UTC (permalink / raw)
To: Matthew Auld
Cc: Thomas Hellström, Intel Graphics Development, ML dri-devel
On Tue, Jul 20, 2021 at 4:07 AM Matthew Auld
<matthew.william.auld@gmail.com> wrote:
>
> On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
> >
> > From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> >
> > If our exported dma-bufs are imported by another instance of our driver,
> > that instance will typically have the imported dma-bufs locked during
> > dma_buf_map_attachment(). But the exporter also locks the same reservation
> > object in the map_dma_buf() callback, which leads to recursive locking.
> >
> > So taking the lock inside _pin_pages_unlocked() is incorrect.
> >
> > Additionally, the current pinning code path is contrary to the defined
> > way that pinning should occur.
> >
> > Remove the explicit pin/unpin from the map/umap functions and move them
> > to the attach/detach allowing correct locking to occur, and to match
> > the static dma-buf drm_prime pattern.
> >
> > Add a live selftest to exercise both dynamic and non-dynamic
> > exports.
> >
> > v2:
> > - Extend the selftest with a fake dynamic importer.
> > - Provide real pin and unpin callbacks to not abuse the interface.
> > v3: (ruhl)
> > - Remove the dynamic export support and move the pinning into the
> > attach/detach path.
> > v4: (ruhl)
> > - Put pages does not need to assert on the dma-resv
> > v5: (jason)
> > - Lock around dma_buf_unmap_attachment() when emulating a dynamic
> > importer in the subtests.
> > - Use pin_pages_unlocked
> > v6: (jason)
> > - Use dma_buf_attach instead of dma_buf_attach_dynamic in the selftests
> >
> > Reported-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
> > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
> > ---
> > drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 43 ++++++--
> > .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 103 +++++++++++++++++-
> > 2 files changed, 132 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > index 616c3a2f1baf0..9a655f69a0671 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > @@ -12,6 +12,8 @@
> > #include "i915_gem_object.h"
> > #include "i915_scatterlist.h"
> >
> > +I915_SELFTEST_DECLARE(static bool force_different_devices;)
> > +
> > static struct drm_i915_gem_object *dma_buf_to_obj(struct dma_buf *buf)
> > {
> > return to_intel_bo(buf->priv);
> > @@ -25,15 +27,11 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
> > struct scatterlist *src, *dst;
> > int ret, i;
> >
> > - ret = i915_gem_object_pin_pages_unlocked(obj);
> > - if (ret)
> > - goto err;
> > -
> > /* Copy sg so that we make an independent mapping */
> > st = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
> > if (st == NULL) {
> > ret = -ENOMEM;
> > - goto err_unpin_pages;
> > + goto err;
> > }
> >
> > ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
> > @@ -58,8 +56,6 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
> > sg_free_table(st);
> > err_free:
> > kfree(st);
> > -err_unpin_pages:
> > - i915_gem_object_unpin_pages(obj);
> > err:
> > return ERR_PTR(ret);
> > }
> > @@ -68,13 +64,9 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
> > struct sg_table *sg,
> > enum dma_data_direction dir)
> > {
> > - struct drm_i915_gem_object *obj = dma_buf_to_obj(attachment->dmabuf);
> > -
> > dma_unmap_sgtable(attachment->dev, sg, dir, DMA_ATTR_SKIP_CPU_SYNC);
> > sg_free_table(sg);
> > kfree(sg);
> > -
> > - i915_gem_object_unpin_pages(obj);
> > }
> >
> > static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
> > @@ -168,7 +160,31 @@ static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direct
> > return err;
> > }
> >
> > +/**
> > + * i915_gem_dmabuf_attach - Do any extra attach work necessary
> > + * @dmabuf: imported dma-buf
> > + * @attach: new attach to do work on
> > + *
> > + */
> > +static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
> > + struct dma_buf_attachment *attach)
> > +{
> > + struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> > +
> > + return i915_gem_object_pin_pages_unlocked(obj);
> > +}
> > +
> > +static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
> > + struct dma_buf_attachment *attach)
> > +{
> > + struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> > +
> > + i915_gem_object_unpin_pages(obj);
> > +}
> > +
>
> We don't normally add kernel-doc for static functions? Otherwise
> dmabuf_detach() needs matching kernel-doc.
Dropped.
> <snip>
>
> > +
> > +static int igt_dmabuf_import_same_driver(void *arg)
> > +{
> > + struct drm_i915_private *i915 = arg;
> > + struct drm_i915_gem_object *obj, *import_obj;
> > + struct drm_gem_object *import;
> > + struct dma_buf *dmabuf;
> > + struct dma_buf_attachment *import_attach;
> > + struct sg_table *st;
> > + long timeout;
> > + int err;
> > +
> > + force_different_devices = true;
> > + obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> > + if (IS_ERR(obj))
>
> err = PTR_ERR(obj)
Done.
> <snip>
>
> > + /* Now try a fake an importer */
> > + import_attach = dma_buf_attach(dmabuf, obj->base.dev->dev);
> > + if (IS_ERR(import_attach))
> > + goto out_import;
> > +
> > + st = dma_buf_map_attachment(import_attach, DMA_BIDIRECTIONAL);
> > + if (IS_ERR(st))
> > + goto out_detach;
>
> For these two maybe missing err = ?
Yup. Fixed. I also changed the (int)PTR_ERR() in the error prints
like you asked for in the next patch.
--Jason
* [Intel-gfx] [PATCH 7/7] drm/i915/gem: Migrate to system at dma-buf attach time (v6)
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
` (5 preceding siblings ...)
2021-07-16 14:14 ` [Intel-gfx] [PATCH 6/7] drm/i915/gem: Correct the locking and pin pattern for dma-buf (v6) Jason Ekstrand
@ 2021-07-16 14:14 ` Jason Ekstrand
2021-07-20 10:53 ` Matthew Auld
2021-07-17 0:39 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Migrate memory to SMEM when imported cross-device (rev2) Patchwork
` (3 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-16 14:14 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Thomas Hellström
From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Until we support p2p dma, or as a complement to it, migrate data
to system memory at dma-buf attach time if possible.
v2:
- Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
selftest to migrate if we are LMEM capable.
v3:
- Migrate also in the pin() callback.
v4:
- Migrate in attach
v5: (jason)
- Lock around the migration
v6: (jason)
- Move the can_migrate check outside the lock
- Rework the selftests to test more migration conditions. In
particular, SMEM, LMEM, and LMEM+SMEM are all checked.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
---
drivers/gpu/drm/i915/gem/i915_gem_create.c | 2 +-
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 23 ++++-
drivers/gpu/drm/i915/gem/i915_gem_object.h | 4 +
.../drm/i915/gem/selftests/i915_gem_dmabuf.c | 89 ++++++++++++++++++-
4 files changed, 112 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
index 039e4f3b39c79..41c4cd3e1ea01 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
@@ -82,7 +82,7 @@ static int i915_gem_publish(struct drm_i915_gem_object *obj,
return 0;
}
-static struct drm_i915_gem_object *
+struct drm_i915_gem_object *
i915_gem_object_create_user(struct drm_i915_private *i915, u64 size,
struct intel_memory_region **placements,
unsigned int n_placements)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 9a655f69a0671..5d438b95826b9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -170,8 +170,29 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attach)
{
struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
+ struct i915_gem_ww_ctx ww;
+ int err;
+
+ if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM))
+ return -EOPNOTSUPP;
+
+ for_i915_gem_ww(&ww, err, true) {
+ err = i915_gem_object_lock(obj, &ww);
+ if (err)
+ continue;
+
+ err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
+ if (err)
+ continue;
- return i915_gem_object_pin_pages_unlocked(obj);
+ err = i915_gem_object_wait_migration(obj, 0);
+ if (err)
+ continue;
+
+ err = i915_gem_object_pin_pages(obj);
+ }
+
+ return err;
}
static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 8be4fadeee487..fbae53bd46384 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -61,6 +61,10 @@ i915_gem_object_create_shmem(struct drm_i915_private *i915,
struct drm_i915_gem_object *
i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915,
const void *data, resource_size_t size);
+struct drm_i915_gem_object *
+i915_gem_object_create_user(struct drm_i915_private *i915, u64 size,
+ struct intel_memory_region **placements,
+ unsigned int n_placements);
extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops;
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index 4451bbb4917e4..7b7647e7e220a 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -85,9 +85,62 @@ static int igt_dmabuf_import_self(void *arg)
return err;
}
-static int igt_dmabuf_import_same_driver(void *arg)
+static int igt_dmabuf_import_same_driver_lmem(void *arg)
{
struct drm_i915_private *i915 = arg;
+ struct intel_memory_region *lmem = i915->mm.regions[INTEL_REGION_LMEM];
+ struct drm_i915_gem_object *obj;
+ struct drm_gem_object *import;
+ struct dma_buf *dmabuf;
+ int err;
+
+ if (!i915->mm.regions[INTEL_REGION_LMEM])
+ return 0;
+
+ force_different_devices = true;
+
+ obj = i915_gem_object_create_user(i915, PAGE_SIZE, &lmem, 1);
+ if (IS_ERR(obj)) {
+ pr_err("i915_gem_object_create_user failed with err=%d\n",
+ (int)PTR_ERR(dmabuf));
+ err = PTR_ERR(obj);
+ goto out_ret;
+ }
+
+ dmabuf = i915_gem_prime_export(&obj->base, 0);
+ if (IS_ERR(dmabuf)) {
+ pr_err("i915_gem_prime_export failed with err=%d\n",
+ (int)PTR_ERR(dmabuf));
+ err = PTR_ERR(dmabuf);
+ goto out;
+ }
+
+ /* We expect an import of an LMEM-only object to fail with
+ * -EOPNOTSUPP because it can't be migrated to SMEM.
+ */
+ import = i915_gem_prime_import(&i915->drm, dmabuf);
+ if (!IS_ERR(import)) {
+ drm_gem_object_put(import);
+ pr_err("i915_gem_prime_import succeeded when it shouldn't have\n");
+ err = -EINVAL;
+ } else if (PTR_ERR(import) != -EOPNOTSUPP) {
+ pr_err("i915_gem_prime_import failed with the wrong err=%d\n",
+ (int)PTR_ERR(import));
+ err = PTR_ERR(import);
+ }
+
+ dma_buf_put(dmabuf);
+out:
+ i915_gem_object_put(obj);
+out_ret:
+ force_different_devices = false;
+ return err;
+}
+
+static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915,
+ struct intel_memory_region **regions,
+ unsigned int num_regions)
+{
struct drm_i915_gem_object *obj, *import_obj;
struct drm_gem_object *import;
struct dma_buf *dmabuf;
@@ -97,9 +150,15 @@ static int igt_dmabuf_import_same_driver(void *arg)
int err;
force_different_devices = true;
- obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
- if (IS_ERR(obj))
+
+ obj = i915_gem_object_create_user(i915, PAGE_SIZE,
+ regions, num_regions);
+ if (IS_ERR(obj)) {
+ pr_err("i915_gem_object_create_user failed with err=%d\n",
+ (int)PTR_ERR(dmabuf));
+ err = PTR_ERR(obj);
goto out_ret;
+ }
dmabuf = i915_gem_prime_export(&obj->base, 0);
if (IS_ERR(dmabuf)) {
@@ -174,6 +233,26 @@ static int igt_dmabuf_import_same_driver(void *arg)
return err;
}
+static int igt_dmabuf_import_same_driver_smem(void *arg)
+{
+ struct drm_i915_private *i915 = arg;
+ struct intel_memory_region *smem = i915->mm.regions[INTEL_REGION_SMEM];
+ return igt_dmabuf_import_same_driver(i915, &smem, 1);
+}
+
+static int igt_dmabuf_import_same_driver_lmem_smem(void *arg)
+{
+ struct drm_i915_private *i915 = arg;
+ struct intel_memory_region *regions[2];
+
+ if (!i915->mm.regions[INTEL_REGION_LMEM])
+ return 0;
+
+ regions[0] = i915->mm.regions[INTEL_REGION_LMEM];
+ regions[1] = i915->mm.regions[INTEL_REGION_SMEM];
+ return igt_dmabuf_import_same_driver(i915, regions, 2);
+}
+
static int igt_dmabuf_import(void *arg)
{
struct drm_i915_private *i915 = arg;
@@ -384,7 +463,9 @@ int i915_gem_dmabuf_live_selftests(struct drm_i915_private *i915)
{
static const struct i915_subtest tests[] = {
SUBTEST(igt_dmabuf_export),
- SUBTEST(igt_dmabuf_import_same_driver),
+ SUBTEST(igt_dmabuf_import_same_driver_lmem),
+ SUBTEST(igt_dmabuf_import_same_driver_smem),
+ SUBTEST(igt_dmabuf_import_same_driver_lmem_smem),
};
return i915_subtests(tests, i915);
--
2.31.1
* Re: [Intel-gfx] [PATCH 7/7] drm/i915/gem: Migrate to system at dma-buf attach time (v6)
2021-07-16 14:14 ` [Intel-gfx] [PATCH 7/7] drm/i915/gem: Migrate to system at dma-buf attach time (v6) Jason Ekstrand
@ 2021-07-20 10:53 ` Matthew Auld
2021-07-20 21:40 ` Jason Ekstrand
0 siblings, 1 reply; 25+ messages in thread
From: Matthew Auld @ 2021-07-20 10:53 UTC (permalink / raw)
To: Jason Ekstrand
Cc: Thomas Hellström, Intel Graphics Development, ML dri-devel
On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
>
> From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>
> Until we support p2p dma or as a complement to that, migrate data
> to system memory at dma-buf attach time if possible.
>
> v2:
> - Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
> selftest to migrate if we are LMEM capable.
> v3:
> - Migrate also in the pin() callback.
> v4:
> - Migrate in attach
> v5: (jason)
> - Lock around the migration
> v6: (jason)
> - Move the can_migrate check outside the lock
> - Rework the selftests to test more migration conditions. In
> particular, SMEM, LMEM, and LMEM+SMEM are all checked.
>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
> Reported-by: kernel test robot <lkp@intel.com>
> Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
> ---
> drivers/gpu/drm/i915/gem/i915_gem_create.c | 2 +-
> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 23 ++++-
> drivers/gpu/drm/i915/gem/i915_gem_object.h | 4 +
> .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 89 ++++++++++++++++++-
> 4 files changed, 112 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> index 039e4f3b39c79..41c4cd3e1ea01 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> @@ -82,7 +82,7 @@ static int i915_gem_publish(struct drm_i915_gem_object *obj,
> return 0;
> }
>
> -static struct drm_i915_gem_object *
> +struct drm_i915_gem_object *
> i915_gem_object_create_user(struct drm_i915_private *i915, u64 size,
> struct intel_memory_region **placements,
> unsigned int n_placements)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> index 9a655f69a0671..5d438b95826b9 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> @@ -170,8 +170,29 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
> struct dma_buf_attachment *attach)
> {
> struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> + struct i915_gem_ww_ctx ww;
> + int err;
> +
> + if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM))
> + return -EOPNOTSUPP;
> +
> + for_i915_gem_ww(&ww, err, true) {
> + err = i915_gem_object_lock(obj, &ww);
> + if (err)
> + continue;
> +
> + err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
> + if (err)
> + continue;
>
> - return i915_gem_object_pin_pages_unlocked(obj);
> + err = i915_gem_object_wait_migration(obj, 0);
> + if (err)
> + continue;
> +
> + err = i915_gem_object_pin_pages(obj);
> + }
> +
> + return err;
> }
>
> static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> index 8be4fadeee487..fbae53bd46384 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> @@ -61,6 +61,10 @@ i915_gem_object_create_shmem(struct drm_i915_private *i915,
> struct drm_i915_gem_object *
> i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915,
> const void *data, resource_size_t size);
> +struct drm_i915_gem_object *
> +i915_gem_object_create_user(struct drm_i915_private *i915, u64 size,
> + struct intel_memory_region **placements,
> + unsigned int n_placements);
>
> extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops;
>
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> index 4451bbb4917e4..7b7647e7e220a 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> @@ -85,9 +85,62 @@ static int igt_dmabuf_import_self(void *arg)
> return err;
> }
>
> -static int igt_dmabuf_import_same_driver(void *arg)
> +static int igt_dmabuf_import_same_driver_lmem(void *arg)
> {
> struct drm_i915_private *i915 = arg;
> + struct intel_memory_region *lmem = i915->mm.regions[INTEL_REGION_LMEM];
> + struct drm_i915_gem_object *obj;
> + struct drm_gem_object *import;
> + struct dma_buf *dmabuf;
> + int err;
> +
> + if (!i915->mm.regions[INTEL_REGION_LMEM])
!lmem
> + return 0;
> +
> + force_different_devices = true;
> +
> + obj = i915_gem_object_create_user(i915, PAGE_SIZE, &lmem, 1);
> + if (IS_ERR(obj)) {
> + pr_err("i915_gem_object_create_user failed with err=%d\n",
> + (int)PTR_ERR(dmabuf));
PTR_ERR(obj)
> + err = PTR_ERR(obj);
> + goto out_ret;
> + }
> +
> + dmabuf = i915_gem_prime_export(&obj->base, 0);
> + if (IS_ERR(dmabuf)) {
> + pr_err("i915_gem_prime_export failed with err=%d\n",
> + (int)PTR_ERR(dmabuf));
> + err = PTR_ERR(dmabuf);
> + goto out;
> + }
> +
> + /* We expect an import of an LMEM-only object to fail with
> + * -EOPNOTSUPP because it can't be migrated to SMEM.
> + */
/*
* We expect...
*/
> + import = i915_gem_prime_import(&i915->drm, dmabuf);
> + if (!IS_ERR(import)) {
> + drm_gem_object_put(import);
> + pr_err("i915_gem_prime_import succeeded when it shouldn't have\n");
> + err = -EINVAL;
> + } else if (PTR_ERR(import) != -EOPNOTSUPP) {
> + pr_err("i915_gem_prime_import failed with the wrong err=%d\n",
> + (int)PTR_ERR(import));
> + err = PTR_ERR(import);
> + }
> +
> + dma_buf_put(dmabuf);
> +out:
> + i915_gem_object_put(obj);
> +out_ret:
> + force_different_devices = false;
> + return err;
> +}
> +
> +static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915,
> + struct intel_memory_region **regions,
> + unsigned int num_regions)
> +{
> struct drm_i915_gem_object *obj, *import_obj;
> struct drm_gem_object *import;
> struct dma_buf *dmabuf;
> @@ -97,9 +150,15 @@ static int igt_dmabuf_import_same_driver(void *arg)
> int err;
>
> force_different_devices = true;
> - obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> - if (IS_ERR(obj))
> +
> + obj = i915_gem_object_create_user(i915, PAGE_SIZE,
> + regions, num_regions);
> + if (IS_ERR(obj)) {
> + pr_err("i915_gem_object_create_user failed with err=%d\n",
> + (int)PTR_ERR(dmabuf));
PTR_ERR(obj)
> + err = PTR_ERR(obj);
> goto out_ret;
> + }
>
> dmabuf = i915_gem_prime_export(&obj->base, 0);
> if (IS_ERR(dmabuf)) {
> @@ -174,6 +233,26 @@ static int igt_dmabuf_import_same_driver(void *arg)
> return err;
> }
>
> +static int igt_dmabuf_import_same_driver_smem(void *arg)
> +{
> + struct drm_i915_private *i915 = arg;
> + struct intel_memory_region *smem = i915->mm.regions[INTEL_REGION_SMEM];
Newline.
> + return igt_dmabuf_import_same_driver(i915, &smem, 1);
> +}
> +
> +static int igt_dmabuf_import_same_driver_lmem_smem(void *arg)
> +{
> + struct drm_i915_private *i915 = arg;
> + struct intel_memory_region *regions[2];
> +
> + if (!i915->mm.regions[INTEL_REGION_LMEM])
> + return 0;
> +
> + regions[0] = i915->mm.regions[INTEL_REGION_LMEM];
> + regions[1] = i915->mm.regions[INTEL_REGION_SMEM];
> + return igt_dmabuf_import_same_driver(i915, regions, 2);
> +}
> +
> static int igt_dmabuf_import(void *arg)
> {
> struct drm_i915_private *i915 = arg;
> @@ -384,7 +463,9 @@ int i915_gem_dmabuf_live_selftests(struct drm_i915_private *i915)
> {
> static const struct i915_subtest tests[] = {
> SUBTEST(igt_dmabuf_export),
> - SUBTEST(igt_dmabuf_import_same_driver),
> + SUBTEST(igt_dmabuf_import_same_driver_lmem),
> + SUBTEST(igt_dmabuf_import_same_driver_smem),
> + SUBTEST(igt_dmabuf_import_same_driver_lmem_smem),
> };
>
> return i915_subtests(tests, i915);
> --
> 2.31.1
>
* Re: [Intel-gfx] [PATCH 7/7] drm/i915/gem: Migrate to system at dma-buf attach time (v6)
2021-07-20 10:53 ` Matthew Auld
@ 2021-07-20 21:40 ` Jason Ekstrand
0 siblings, 0 replies; 25+ messages in thread
From: Jason Ekstrand @ 2021-07-20 21:40 UTC (permalink / raw)
To: Matthew Auld
Cc: Thomas Hellström, Intel Graphics Development, ML dri-devel
Fixed all the nits below locally. It'll be in the next send.
On Tue, Jul 20, 2021 at 5:53 AM Matthew Auld
<matthew.william.auld@gmail.com> wrote:
>
> On Fri, 16 Jul 2021 at 15:14, Jason Ekstrand <jason@jlekstrand.net> wrote:
> >
> > From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> >
> > Until we support p2p dma or as a complement to that, migrate data
> > to system memory at dma-buf attach time if possible.
> >
> > v2:
> > - Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
> > selftest to migrate if we are LMEM capable.
> > v3:
> > - Migrate also in the pin() callback.
> > v4:
> > - Migrate in attach
> > v5: (jason)
> > - Lock around the migration
> > v6: (jason)
> > - Move the can_migrate check outside the lock
> > - Rework the selftests to test more migration conditions. In
> > particular, SMEM, LMEM, and LMEM+SMEM are all checked.
> >
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
> > Reported-by: kernel test robot <lkp@intel.com>
> > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
> > ---
> > drivers/gpu/drm/i915/gem/i915_gem_create.c | 2 +-
> > drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 23 ++++-
> > drivers/gpu/drm/i915/gem/i915_gem_object.h | 4 +
> > .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 89 ++++++++++++++++++-
> > 4 files changed, 112 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > index 039e4f3b39c79..41c4cd3e1ea01 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> > @@ -82,7 +82,7 @@ static int i915_gem_publish(struct drm_i915_gem_object *obj,
> > return 0;
> > }
> >
> > -static struct drm_i915_gem_object *
> > +struct drm_i915_gem_object *
> > i915_gem_object_create_user(struct drm_i915_private *i915, u64 size,
> > struct intel_memory_region **placements,
> > unsigned int n_placements)
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > index 9a655f69a0671..5d438b95826b9 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > @@ -170,8 +170,29 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
> > struct dma_buf_attachment *attach)
> > {
> > struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> > + struct i915_gem_ww_ctx ww;
> > + int err;
> > +
> > + if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM))
> > + return -EOPNOTSUPP;
> > +
> > + for_i915_gem_ww(&ww, err, true) {
> > + err = i915_gem_object_lock(obj, &ww);
> > + if (err)
> > + continue;
> > +
> > + err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
> > + if (err)
> > + continue;
> >
> > - return i915_gem_object_pin_pages_unlocked(obj);
> > + err = i915_gem_object_wait_migration(obj, 0);
> > + if (err)
> > + continue;
> > +
> > + err = i915_gem_object_pin_pages(obj);
> > + }
> > +
> > + return err;
> > }
> >
> > static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > index 8be4fadeee487..fbae53bd46384 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
> > @@ -61,6 +61,10 @@ i915_gem_object_create_shmem(struct drm_i915_private *i915,
> > struct drm_i915_gem_object *
> > i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915,
> > const void *data, resource_size_t size);
> > +struct drm_i915_gem_object *
> > +i915_gem_object_create_user(struct drm_i915_private *i915, u64 size,
> > + struct intel_memory_region **placements,
> > + unsigned int n_placements);
> >
> > extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops;
> >
> > diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > index 4451bbb4917e4..7b7647e7e220a 100644
> > --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > @@ -85,9 +85,62 @@ static int igt_dmabuf_import_self(void *arg)
> > return err;
> > }
> >
> > -static int igt_dmabuf_import_same_driver(void *arg)
> > +static int igt_dmabuf_import_same_driver_lmem(void *arg)
> > {
> > struct drm_i915_private *i915 = arg;
> > + struct intel_memory_region *lmem = i915->mm.regions[INTEL_REGION_LMEM];
> > + struct drm_i915_gem_object *obj;
> > + struct drm_gem_object *import;
> > + struct dma_buf *dmabuf;
> > + int err;
> > +
> > + if (!i915->mm.regions[INTEL_REGION_LMEM])
>
> !lmem
>
> > + return 0;
> > +
> > + force_different_devices = true;
> > +
> > + obj = i915_gem_object_create_user(i915, PAGE_SIZE, &lmem, 1);
> > + if (IS_ERR(obj)) {
> > + pr_err("i915_gem_object_create_user failed with err=%d\n",
> > + (int)PTR_ERR(dmabuf));
>
> PTR_ERR(obj)
>
> > + err = PTR_ERR(obj);
> > + goto out_ret;
> > + }
> > +
> > + dmabuf = i915_gem_prime_export(&obj->base, 0);
> > + if (IS_ERR(dmabuf)) {
> > + pr_err("i915_gem_prime_export failed with err=%d\n",
> > + (int)PTR_ERR(dmabuf));
> > + err = PTR_ERR(dmabuf);
> > + goto out;
> > + }
> > +
> > + /* We expect an import of an LMEM-only object to fail with
> > + * -EOPNOTSUPP because it can't be migrated to SMEM.
> > + */
>
> /*
> * We expect...
> */
>
> > + import = i915_gem_prime_import(&i915->drm, dmabuf);
> > + if (!IS_ERR(import)) {
> > + drm_gem_object_put(import);
> > + pr_err("i915_gem_prime_import succeeded when it shouldn't have\n");
> > + err = -EINVAL;
> > + } else if (PTR_ERR(import) != -EOPNOTSUPP) {
> > + pr_err("i915_gem_prime_import failed with the wrong err=%d\n",
> > + (int)PTR_ERR(import));
> > + err = PTR_ERR(import);
> > + }
> > +
> > + dma_buf_put(dmabuf);
> > +out:
> > + i915_gem_object_put(obj);
> > +out_ret:
> > + force_different_devices = false;
> > + return err;
> > +}
> > +
> > +static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915,
> > + struct intel_memory_region **regions,
> > + unsigned int num_regions)
> > +{
> > struct drm_i915_gem_object *obj, *import_obj;
> > struct drm_gem_object *import;
> > struct dma_buf *dmabuf;
> > @@ -97,9 +150,15 @@ static int igt_dmabuf_import_same_driver(void *arg)
> > int err;
> >
> > force_different_devices = true;
> > - obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> > - if (IS_ERR(obj))
> > +
> > + obj = i915_gem_object_create_user(i915, PAGE_SIZE,
> > + regions, num_regions);
> > + if (IS_ERR(obj)) {
> > + pr_err("i915_gem_object_create_user failed with err=%d\n",
> > + (int)PTR_ERR(dmabuf));
>
> PTR_ERR(obj)
>
> > + err = PTR_ERR(obj);
> > goto out_ret;
> > + }
> >
> > dmabuf = i915_gem_prime_export(&obj->base, 0);
> > if (IS_ERR(dmabuf)) {
> > @@ -174,6 +233,26 @@ static int igt_dmabuf_import_same_driver(void *arg)
> > return err;
> > }
> >
> > +static int igt_dmabuf_import_same_driver_smem(void *arg)
> > +{
> > + struct drm_i915_private *i915 = arg;
> > + struct intel_memory_region *smem = i915->mm.regions[INTEL_REGION_SMEM];
>
> Newline.
>
> > + return igt_dmabuf_import_same_driver(i915, &smem, 1);
> > +}
> > +
> > +static int igt_dmabuf_import_same_driver_lmem_smem(void *arg)
> > +{
> > + struct drm_i915_private *i915 = arg;
> > + struct intel_memory_region *regions[2];
> > +
> > + if (!i915->mm.regions[INTEL_REGION_LMEM])
> > + return 0;
> > +
> > + regions[0] = i915->mm.regions[INTEL_REGION_LMEM];
> > + regions[1] = i915->mm.regions[INTEL_REGION_SMEM];
> > + return igt_dmabuf_import_same_driver(i915, regions, 2);
> > +}
> > +
> > static int igt_dmabuf_import(void *arg)
> > {
> > struct drm_i915_private *i915 = arg;
> > @@ -384,7 +463,9 @@ int i915_gem_dmabuf_live_selftests(struct drm_i915_private *i915)
> > {
> > static const struct i915_subtest tests[] = {
> > SUBTEST(igt_dmabuf_export),
> > - SUBTEST(igt_dmabuf_import_same_driver),
> > + SUBTEST(igt_dmabuf_import_same_driver_lmem),
> > + SUBTEST(igt_dmabuf_import_same_driver_smem),
> > + SUBTEST(igt_dmabuf_import_same_driver_lmem_smem),
> > };
> >
> > return i915_subtests(tests, i915);
> > --
> > 2.31.1
> >
* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Migrate memory to SMEM when imported cross-device (rev2)
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
` (6 preceding siblings ...)
2021-07-16 14:14 ` [Intel-gfx] [PATCH 7/7] drm/i915/gem: Migrate to system at dma-buf attach time (v6) Jason Ekstrand
@ 2021-07-17 0:39 ` Patchwork
2021-07-17 0:44 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
` (2 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Patchwork @ 2021-07-17 0:39 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Migrate memory to SMEM when imported cross-device (rev2)
URL : https://patchwork.freedesktop.org/series/92617/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
a518ef415105 drm/i915/gem: Check object_can_migrate from object_migrate
bc470df2a6dc drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2)
7cfa880e0e74 drm/i915/gem: Call i915_gem_flush_free_objects() in i915_gem_dumb_create()
80a4a7f5c03b drm/i915/gem: Unify user object creation (v2)
0e4a617a75b8 drm/i915/gem/ttm: Respect the objection region in placement_from_obj
6dc86f72e162 drm/i915/gem: Correct the locking and pin pattern for dma-buf (v6)
22424415b1b8 drm/i915/gem: Migrate to system at dma-buf attach time (v6)
-:189: WARNING:LINE_SPACING: Missing a blank line after declarations
#189: FILE: drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c:240:
+ struct intel_memory_region *smem = i915->mm.regions[INTEL_REGION_SMEM];
+ return igt_dmabuf_import_same_driver(i915, &smem, 1);
total: 0 errors, 1 warnings, 0 checks, 164 lines checked
* [Intel-gfx] ✗ Fi.CI.DOCS: warning for drm/i915: Migrate memory to SMEM when imported cross-device (rev2)
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
` (7 preceding siblings ...)
2021-07-17 0:39 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Migrate memory to SMEM when imported cross-device (rev2) Patchwork
@ 2021-07-17 0:44 ` Patchwork
2021-07-17 1:08 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-07-17 11:01 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
10 siblings, 0 replies; 25+ messages in thread
From: Patchwork @ 2021-07-17 0:44 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Migrate memory to SMEM when imported cross-device (rev2)
URL : https://patchwork.freedesktop.org/series/92617/
State : warning
== Summary ==
$ make htmldocs 2>&1 > /dev/null | grep i915
./drivers/gpu/drm/i915/i915_cmd_parser.c:1436: warning: Excess function parameter 'jump_whitelist' description in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1436: warning: Excess function parameter 'shadow_map' description in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1436: warning: Excess function parameter 'batch_map' description in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1436: warning: Function parameter or member 'trampoline' not described in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1436: warning: Excess function parameter 'jump_whitelist' description in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1436: warning: Excess function parameter 'shadow_map' description in 'intel_engine_cmd_parser'
./drivers/gpu/drm/i915/i915_cmd_parser.c:1436: warning: Excess function parameter 'batch_map' description in 'intel_engine_cmd_parser'
* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Migrate memory to SMEM when imported cross-device (rev2)
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
` (8 preceding siblings ...)
2021-07-17 0:44 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
@ 2021-07-17 1:08 ` Patchwork
2021-07-17 11:01 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
10 siblings, 0 replies; 25+ messages in thread
From: Patchwork @ 2021-07-17 1:08 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Migrate memory to SMEM when imported cross-device (rev2)
URL : https://patchwork.freedesktop.org/series/92617/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_10346 -> Patchwork_20636
====================================================
Summary
-------
**SUCCESS**
No regressions found.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/index.html
Known issues
------------
Here are the changes found in Patchwork_20636 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@amdgpu/amd_basic@semaphore:
- fi-bdw-5557u: NOTRUN -> [SKIP][1] ([fdo#109271]) +27 similar issues
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/fi-bdw-5557u/igt@amdgpu/amd_basic@semaphore.html
* igt@core_hotunplug@unbind-rebind:
- fi-bdw-5557u: NOTRUN -> [WARN][2] ([i915#3718])
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/fi-bdw-5557u/igt@core_hotunplug@unbind-rebind.html
* igt@i915_module_load@reload:
- fi-kbl-soraka: [PASS][3] -> [DMESG-WARN][4] ([i915#1982])
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/fi-kbl-soraka/igt@i915_module_load@reload.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/fi-kbl-soraka/igt@i915_module_load@reload.html
* igt@kms_chamelium@dp-crc-fast:
- fi-bdw-5557u: NOTRUN -> [SKIP][5] ([fdo#109271] / [fdo#111827]) +8 similar issues
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/fi-bdw-5557u/igt@kms_chamelium@dp-crc-fast.html
[fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
[fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
[i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
[i915#3718]: https://gitlab.freedesktop.org/drm/intel/issues/3718
Participating hosts (41 -> 35)
------------------------------
Missing (6): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-bdw-samus fi-tgl-y bat-jsl-1
Build changes
-------------
* Linux: CI_DRM_10346 -> Patchwork_20636
CI-20190529: 20190529
CI_DRM_10346: 6c4e3c031a995e641cc0d9563d21043415fb8d12 @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_6144: bc65ee9ee6593716306448c9fb82c77f284f2148 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
Patchwork_20636: 22424415b1b82d54acae6be81d4ca6b6c3412f8f @ git://anongit.freedesktop.org/gfx-ci/linux
== Linux commits ==
22424415b1b8 drm/i915/gem: Migrate to system at dma-buf attach time (v6)
6dc86f72e162 drm/i915/gem: Correct the locking and pin pattern for dma-buf (v6)
0e4a617a75b8 drm/i915/gem/ttm: Respect the objection region in placement_from_obj
80a4a7f5c03b drm/i915/gem: Unify user object creation (v2)
7cfa880e0e74 drm/i915/gem: Call i915_gem_flush_free_objects() in i915_gem_dumb_create()
bc470df2a6dc drm/i915/gem: Refactor placement setup for i915_gem_object_create* (v2)
a518ef415105 drm/i915/gem: Check object_can_migrate from object_migrate
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/index.html
* [Intel-gfx] ✗ Fi.CI.IGT: failure for drm/i915: Migrate memory to SMEM when imported cross-device (rev2)
2021-07-16 14:14 [Intel-gfx] [PATCH 0/7] drm/i915: Migrate memory to SMEM when imported cross-device (v7) Jason Ekstrand
` (9 preceding siblings ...)
2021-07-17 1:08 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2021-07-17 11:01 ` Patchwork
10 siblings, 0 replies; 25+ messages in thread
From: Patchwork @ 2021-07-17 11:01 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Migrate memory to SMEM when imported cross-device (rev2)
URL : https://patchwork.freedesktop.org/series/92617/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_10346_full -> Patchwork_20636_full
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with Patchwork_20636_full absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in Patchwork_20636_full, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in Patchwork_20636_full:
### IGT changes ###
#### Possible regressions ####
* igt@gem_create@create-ext-placement-sanity-check:
- shard-skl: [PASS][1] -> [FAIL][2] +1 similar issue
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-skl4/igt@gem_create@create-ext-placement-sanity-check.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl10/igt@gem_create@create-ext-placement-sanity-check.html
- shard-apl: NOTRUN -> [FAIL][3]
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl6/igt@gem_create@create-ext-placement-sanity-check.html
- shard-glk: [PASS][4] -> [FAIL][5]
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-glk2/igt@gem_create@create-ext-placement-sanity-check.html
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-glk5/igt@gem_create@create-ext-placement-sanity-check.html
- shard-iclb: [PASS][6] -> [FAIL][7]
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-iclb2/igt@gem_create@create-ext-placement-sanity-check.html
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb2/igt@gem_create@create-ext-placement-sanity-check.html
- shard-kbl: [PASS][8] -> [FAIL][9]
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-kbl1/igt@gem_create@create-ext-placement-sanity-check.html
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-kbl4/igt@gem_create@create-ext-placement-sanity-check.html
- shard-tglb: NOTRUN -> [FAIL][10]
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@gem_create@create-ext-placement-sanity-check.html
* igt@gen9_exec_parse@bb-start-far:
- shard-iclb: NOTRUN -> [SKIP][11]
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@gen9_exec_parse@bb-start-far.html
Known issues
------------
Here are the changes found in Patchwork_20636_full that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@gem_ctx_isolation@preservation-s3@vcs0:
- shard-apl: NOTRUN -> [DMESG-WARN][12] ([i915#180])
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl2/igt@gem_ctx_isolation@preservation-s3@vcs0.html
* igt@gem_ctx_persistence@legacy-engines-mixed:
- shard-snb: NOTRUN -> [SKIP][13] ([fdo#109271] / [i915#1099]) +4 similar issues
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-snb2/igt@gem_ctx_persistence@legacy-engines-mixed.html
* igt@gem_eio@in-flight-contexts-immediate:
- shard-iclb: [PASS][14] -> [TIMEOUT][15] ([i915#3070])
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-iclb2/igt@gem_eio@in-flight-contexts-immediate.html
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb2/igt@gem_eio@in-flight-contexts-immediate.html
* igt@gem_exec_fair@basic-none@vcs0:
- shard-tglb: NOTRUN -> [FAIL][16] ([i915#2842]) +4 similar issues
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@gem_exec_fair@basic-none@vcs0.html
* igt@gem_exec_fair@basic-pace-solo@rcs0:
- shard-tglb: [PASS][17] -> [FAIL][18] ([i915#2842]) +1 similar issue
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-tglb5/igt@gem_exec_fair@basic-pace-solo@rcs0.html
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb2/igt@gem_exec_fair@basic-pace-solo@rcs0.html
* igt@gem_mmap_gtt@cpuset-medium-copy-xy:
- shard-skl: NOTRUN -> [FAIL][19] ([i915#307])
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl5/igt@gem_mmap_gtt@cpuset-medium-copy-xy.html
* igt@gem_pread@exhaustion:
- shard-apl: NOTRUN -> [WARN][20] ([i915#2658])
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl2/igt@gem_pread@exhaustion.html
- shard-snb: NOTRUN -> [WARN][21] ([i915#2658])
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-snb6/igt@gem_pread@exhaustion.html
* igt@gem_pwrite@basic-exhaustion:
- shard-kbl: NOTRUN -> [WARN][22] ([i915#2658])
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-kbl3/igt@gem_pwrite@basic-exhaustion.html
* igt@gem_render_copy@y-tiled-mc-ccs-to-yf-tiled-ccs:
- shard-iclb: NOTRUN -> [SKIP][23] ([i915#768])
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@gem_render_copy@y-tiled-mc-ccs-to-yf-tiled-ccs.html
* igt@gem_workarounds@basic-read-context:
- shard-snb: [PASS][24] -> [TIMEOUT][25] ([i915#2808])
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-snb7/igt@gem_workarounds@basic-read-context.html
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-snb6/igt@gem_workarounds@basic-read-context.html
* igt@gen7_exec_parse@chained-batch:
- shard-iclb: NOTRUN -> [SKIP][26] ([fdo#109289]) +1 similar issue
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@gen7_exec_parse@chained-batch.html
* igt@gen9_exec_parse@batch-invalid-length:
- shard-snb: NOTRUN -> [SKIP][27] ([fdo#109271]) +362 similar issues
[27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-snb5/igt@gen9_exec_parse@batch-invalid-length.html
* igt@i915_pm_lpsp@kms-lpsp:
- shard-skl: NOTRUN -> [SKIP][28] ([fdo#109271]) +110 similar issues
[28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl5/igt@i915_pm_lpsp@kms-lpsp.html
* igt@i915_pm_rpm@modeset-non-lpsp-stress:
- shard-iclb: NOTRUN -> [SKIP][29] ([fdo#110892])
[29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@i915_pm_rpm@modeset-non-lpsp-stress.html
* igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-async-flip:
- shard-skl: NOTRUN -> [FAIL][30] ([i915#3722])
[30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl6/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip:
- shard-apl: NOTRUN -> [SKIP][31] ([fdo#109271] / [i915#3777]) +1 similar issue
[31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl3/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip:
- shard-skl: NOTRUN -> [SKIP][32] ([fdo#109271] / [i915#3777])
[32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl5/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-hflip:
- shard-kbl: NOTRUN -> [SKIP][33] ([fdo#109271] / [i915#3777])
[33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-kbl3/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-hflip.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0:
- shard-iclb: NOTRUN -> [SKIP][34] ([fdo#110723]) +1 similar issue
[34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0.html
* igt@kms_ccs@pipe-b-ccs-on-another-bo-y_tiled_gen12_rc_ccs_cc:
- shard-skl: NOTRUN -> [FAIL][35] ([i915#3678])
[35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl5/igt@kms_ccs@pipe-b-ccs-on-another-bo-y_tiled_gen12_rc_ccs_cc.html
* igt@kms_ccs@pipe-b-crc-primary-basic-yf_tiled_ccs:
- shard-tglb: NOTRUN -> [SKIP][36] ([i915#3689]) +5 similar issues
[36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@kms_ccs@pipe-b-crc-primary-basic-yf_tiled_ccs.html
* igt@kms_chamelium@dp-hpd-storm-disable:
- shard-tglb: NOTRUN -> [SKIP][37] ([fdo#109284] / [fdo#111827]) +1 similar issue
[37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@kms_chamelium@dp-hpd-storm-disable.html
* igt@kms_chamelium@dp-mode-timings:
- shard-apl: NOTRUN -> [SKIP][38] ([fdo#109271] / [fdo#111827]) +22 similar issues
[38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl2/igt@kms_chamelium@dp-mode-timings.html
* igt@kms_chamelium@hdmi-aspect-ratio:
- shard-kbl: NOTRUN -> [SKIP][39] ([fdo#109271] / [fdo#111827]) +1 similar issue
[39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-kbl3/igt@kms_chamelium@hdmi-aspect-ratio.html
* igt@kms_chamelium@vga-edid-read:
- shard-iclb: NOTRUN -> [SKIP][40] ([fdo#109284] / [fdo#111827]) +1 similar issue
[40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@kms_chamelium@vga-edid-read.html
* igt@kms_color@pipe-d-ctm-negative:
- shard-iclb: NOTRUN -> [SKIP][41] ([fdo#109278] / [i915#1149])
[41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@kms_color@pipe-d-ctm-negative.html
* igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes:
- shard-skl: NOTRUN -> [SKIP][42] ([fdo#109271] / [fdo#111827]) +11 similar issues
[42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl5/igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes.html
- shard-snb: NOTRUN -> [SKIP][43] ([fdo#109271] / [fdo#111827]) +21 similar issues
[43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-snb5/igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes.html
* igt@kms_cursor_crc@pipe-a-cursor-32x32-sliding:
- shard-tglb: NOTRUN -> [SKIP][44] ([i915#3319]) +2 similar issues
[44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@kms_cursor_crc@pipe-a-cursor-32x32-sliding.html
* igt@kms_cursor_crc@pipe-c-cursor-256x256-random:
- shard-skl: [PASS][45] -> [FAIL][46] ([i915#3444]) +1 similar issue
[45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-skl10/igt@kms_cursor_crc@pipe-c-cursor-256x256-random.html
[46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl8/igt@kms_cursor_crc@pipe-c-cursor-256x256-random.html
* igt@kms_cursor_crc@pipe-c-cursor-32x32-rapid-movement:
- shard-glk: NOTRUN -> [SKIP][47] ([fdo#109271]) +1 similar issue
[47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-glk6/igt@kms_cursor_crc@pipe-c-cursor-32x32-rapid-movement.html
* igt@kms_cursor_crc@pipe-d-cursor-512x170-offscreen:
- shard-tglb: NOTRUN -> [SKIP][48] ([fdo#109279] / [i915#3359]) +1 similar issue
[48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb1/igt@kms_cursor_crc@pipe-d-cursor-512x170-offscreen.html
* igt@kms_cursor_crc@pipe-d-cursor-dpms:
- shard-iclb: NOTRUN -> [SKIP][49] ([fdo#109278]) +11 similar issues
[49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@kms_cursor_crc@pipe-d-cursor-dpms.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions:
- shard-iclb: NOTRUN -> [SKIP][50] ([fdo#109274] / [fdo#109278])
[50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions.html
* igt@kms_dp_tiled_display@basic-test-pattern:
- shard-tglb: NOTRUN -> [SKIP][51] ([i915#426])
[51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@kms_dp_tiled_display@basic-test-pattern.html
* igt@kms_flip@2x-plain-flip-ts-check-interruptible@bc-hdmi-a1-hdmi-a2:
- shard-glk: [PASS][52] -> [FAIL][53] ([i915#2122]) +1 similar issue
[52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-glk7/igt@kms_flip@2x-plain-flip-ts-check-interruptible@bc-hdmi-a1-hdmi-a2.html
[53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-glk9/igt@kms_flip@2x-plain-flip-ts-check-interruptible@bc-hdmi-a1-hdmi-a2.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible@c-edp1:
- shard-skl: NOTRUN -> [FAIL][54] ([i915#79])
[54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl5/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-edp1.html
* igt@kms_flip@flip-vs-suspend-interruptible@a-dp1:
- shard-apl: [PASS][55] -> [DMESG-WARN][56] ([i915#180])
[55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-apl7/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
[56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl7/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
* igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1:
- shard-skl: [PASS][57] -> [FAIL][58] ([i915#2122]) +1 similar issue
[57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-skl2/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html
[58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl10/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs:
- shard-apl: NOTRUN -> [SKIP][59] ([fdo#109271] / [i915#2672])
[59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-blt:
- shard-kbl: NOTRUN -> [SKIP][60] ([fdo#109271]) +27 similar issues
[60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-kbl3/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff:
- shard-iclb: NOTRUN -> [SKIP][61] ([fdo#109280]) +3 similar issues
[61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-draw-blt:
- shard-tglb: NOTRUN -> [SKIP][62] ([fdo#111825]) +4 similar issues
[62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-draw-blt.html
* igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
- shard-apl: NOTRUN -> [SKIP][63] ([fdo#109271] / [i915#533]) +2 similar issues
[63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl3/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html
* igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b:
- shard-skl: [PASS][64] -> [DMESG-WARN][65] ([i915#1982])
[64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-skl6/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b.html
[65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl7/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b.html
* igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d:
- shard-skl: NOTRUN -> [SKIP][66] ([fdo#109271] / [i915#533]) +1 similar issue
[66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl9/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d.html
* igt@kms_plane_alpha_blend@pipe-a-alpha-7efc:
- shard-apl: NOTRUN -> [FAIL][67] ([fdo#108145] / [i915#265]) +1 similar issue
[67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl6/igt@kms_plane_alpha_blend@pipe-a-alpha-7efc.html
* igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb:
- shard-skl: NOTRUN -> [FAIL][68] ([fdo#108145] / [i915#265]) +2 similar issues
[68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl6/igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb.html
* igt@kms_plane_alpha_blend@pipe-b-coverage-7efc:
- shard-skl: [PASS][69] -> [FAIL][70] ([fdo#108145] / [i915#265]) +1 similar issue
[69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-skl8/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html
[70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl4/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html
* igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb:
- shard-apl: NOTRUN -> [FAIL][71] ([i915#265])
[71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl2/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html
* igt@kms_plane_lowres@pipe-c-tiling-none:
- shard-tglb: NOTRUN -> [SKIP][72] ([i915#3536])
[72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@kms_plane_lowres@pipe-c-tiling-none.html
* igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping:
- shard-apl: NOTRUN -> [SKIP][73] ([fdo#109271] / [i915#2733])
[73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl6/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html
* igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-2:
- shard-apl: NOTRUN -> [SKIP][74] ([fdo#109271] / [i915#658]) +5 similar issues
[74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl2/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-2.html
* igt@kms_psr2_sf@plane-move-sf-dmg-area-0:
- shard-skl: NOTRUN -> [SKIP][75] ([fdo#109271] / [i915#658]) +1 similar issue
[75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl6/igt@kms_psr2_sf@plane-move-sf-dmg-area-0.html
* igt@kms_psr2_sf@plane-move-sf-dmg-area-2:
- shard-kbl: NOTRUN -> [SKIP][76] ([fdo#109271] / [i915#658])
[76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-kbl3/igt@kms_psr2_sf@plane-move-sf-dmg-area-2.html
* igt@kms_psr@psr2_cursor_mmap_gtt:
- shard-iclb: NOTRUN -> [SKIP][77] ([fdo#109441])
[77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@kms_psr@psr2_cursor_mmap_gtt.html
* igt@kms_psr@psr2_primary_page_flip:
- shard-tglb: NOTRUN -> [FAIL][78] ([i915#132] / [i915#3467])
[78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@kms_psr@psr2_primary_page_flip.html
* igt@kms_vrr@flip-basic:
- shard-iclb: NOTRUN -> [SKIP][79] ([fdo#109502])
[79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@kms_vrr@flip-basic.html
* igt@kms_writeback@writeback-check-output:
- shard-apl: NOTRUN -> [SKIP][80] ([fdo#109271] / [i915#2437]) +1 similar issue
[80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl2/igt@kms_writeback@writeback-check-output.html
* igt@kms_writeback@writeback-invalid-parameters:
- shard-tglb: NOTRUN -> [SKIP][81] ([i915#2437])
[81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@kms_writeback@writeback-invalid-parameters.html
* igt@nouveau_crc@pipe-b-ctx-flip-skip-current-frame:
- shard-apl: NOTRUN -> [SKIP][82] ([fdo#109271]) +251 similar issues
[82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl7/igt@nouveau_crc@pipe-b-ctx-flip-skip-current-frame.html
* igt@nouveau_crc@pipe-b-source-rg:
- shard-iclb: NOTRUN -> [SKIP][83] ([i915#2530])
[83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@nouveau_crc@pipe-b-source-rg.html
* igt@nouveau_crc@pipe-d-source-outp-inactive:
- shard-tglb: NOTRUN -> [SKIP][84] ([i915#2530])
[84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@nouveau_crc@pipe-d-source-outp-inactive.html
* igt@prime_nv_test@i915_import_gtt_mmap:
- shard-tglb: NOTRUN -> [SKIP][85] ([fdo#109291]) +1 similar issue
[85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb1/igt@prime_nv_test@i915_import_gtt_mmap.html
* igt@sysfs_clients@pidname:
- shard-skl: NOTRUN -> [SKIP][86] ([fdo#109271] / [i915#2994])
[86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl5/igt@sysfs_clients@pidname.html
* igt@sysfs_clients@recycle-many:
- shard-apl: NOTRUN -> [SKIP][87] ([fdo#109271] / [i915#2994]) +1 similar issue
[87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl2/igt@sysfs_clients@recycle-many.html
#### Possible fixes ####
* igt@gem_exec_fair@basic-none-share@rcs0:
- shard-iclb: [FAIL][88] ([i915#2842]) -> [PASS][89]
[88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-iclb7/igt@gem_exec_fair@basic-none-share@rcs0.html
[89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb3/igt@gem_exec_fair@basic-none-share@rcs0.html
* igt@gem_exec_fair@basic-none-solo@rcs0:
- shard-kbl: [FAIL][90] ([i915#2842]) -> [PASS][91] +4 similar issues
[90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-kbl4/igt@gem_exec_fair@basic-none-solo@rcs0.html
[91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-kbl2/igt@gem_exec_fair@basic-none-solo@rcs0.html
* igt@gem_exec_fair@basic-none@rcs0:
- shard-glk: [FAIL][92] ([i915#2842]) -> [PASS][93]
[92]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-glk2/igt@gem_exec_fair@basic-none@rcs0.html
[93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-glk5/igt@gem_exec_fair@basic-none@rcs0.html
* igt@gem_mmap_gtt@cpuset-big-copy:
- shard-iclb: [FAIL][94] ([i915#307]) -> [PASS][95]
[94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-iclb4/igt@gem_mmap_gtt@cpuset-big-copy.html
[95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@gem_mmap_gtt@cpuset-big-copy.html
* igt@gem_workarounds@suspend-resume-context:
- shard-apl: [DMESG-WARN][96] ([i915#180]) -> [PASS][97]
[96]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-apl1/igt@gem_workarounds@suspend-resume-context.html
[97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl8/igt@gem_workarounds@suspend-resume-context.html
* igt@kms_cursor_crc@pipe-b-cursor-suspend:
- shard-kbl: [DMESG-WARN][98] ([i915#180]) -> [PASS][99]
[98]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-kbl3/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
[99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-kbl3/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
* igt@kms_cursor_crc@pipe-c-cursor-suspend:
- shard-skl: [INCOMPLETE][100] ([i915#300]) -> [PASS][101]
[100]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-skl2/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
[101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl6/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
* igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
- shard-skl: [FAIL][102] ([i915#2346] / [i915#533]) -> [PASS][103]
[102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-skl3/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
[103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl9/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
* igt@kms_flip@flip-vs-panning-interruptible@d-edp1:
- shard-tglb: [INCOMPLETE][104] -> [PASS][105]
[104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-tglb6/igt@kms_flip@flip-vs-panning-interruptible@d-edp1.html
[105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-tglb3/igt@kms_flip@flip-vs-panning-interruptible@d-edp1.html
* igt@kms_flip@flip-vs-suspend-interruptible@c-edp1:
- shard-skl: [INCOMPLETE][106] ([i915#198] / [i915#2910]) -> [PASS][107]
[106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-skl10/igt@kms_flip@flip-vs-suspend-interruptible@c-edp1.html
[107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl5/igt@kms_flip@flip-vs-suspend-interruptible@c-edp1.html
* igt@kms_hdr@bpc-switch-dpms:
- shard-skl: [FAIL][108] ([i915#1188]) -> [PASS][109]
[108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-skl5/igt@kms_hdr@bpc-switch-dpms.html
[109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl1/igt@kms_hdr@bpc-switch-dpms.html
* igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
- shard-iclb: [INCOMPLETE][110] ([i915#1185]) -> [PASS][111]
[110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-iclb3/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
[111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
* igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
- shard-skl: [FAIL][112] ([fdo#108145] / [i915#265]) -> [PASS][113] +2 similar issues
[112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-skl8/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
[113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-skl9/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
#### Warnings ####
* igt@i915_pm_rc6_residency@rc6-idle:
- shard-iclb: [WARN][114] ([i915#1804] / [i915#2684]) -> [WARN][115] ([i915#2684])
[114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-iclb4/igt@i915_pm_rc6_residency@rc6-idle.html
[115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb8/igt@i915_pm_rc6_residency@rc6-idle.html
* igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2:
- shard-iclb: [SKIP][116] ([i915#2920]) -> [SKIP][117] ([i915#658])
[116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-iclb2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2.html
[117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-iclb3/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2.html
* igt@runner@aborted:
- shard-kbl: ([FAIL][118], [FAIL][119], [FAIL][120]) ([i915#1814] / [i915#3002] / [i915#3363]) -> ([FAIL][121], [FAIL][122]) ([i915#3002] / [i915#3363])
[118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-kbl4/igt@runner@aborted.html
[119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-kbl3/igt@runner@aborted.html
[120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-kbl3/igt@runner@aborted.html
[121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-kbl7/igt@runner@aborted.html
[122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-kbl7/igt@runner@aborted.html
- shard-apl: ([FAIL][123], [FAIL][124], [FAIL][125], [FAIL][126]) ([fdo#109271] / [i915#180] / [i915#3002] / [i915#3363]) -> ([FAIL][127], [FAIL][128], [FAIL][129], [FAIL][130]) ([i915#180] / [i915#3002] / [i915#3363])
[123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-apl3/igt@runner@aborted.html
[124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-apl3/igt@runner@aborted.html
[125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-apl3/igt@runner@aborted.html
[126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10346/shard-apl1/igt@runner@aborted.html
[127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl3/igt@runner@aborted.html
[128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl2/igt@runner@aborted.html
[129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl8/igt@runner@aborted.html
[130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/shard-apl7/igt@runner@aborted.html
[fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
[fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
[fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
[fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
[fdo#109279]: https://bugs.freedesktop.org/show_bug.cgi?id=109279
[fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
[fdo#109284]: https://bugs.freedesktop.org/show_bug.cgi?id=109284
[fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
[fdo#109291]: https://bugs.freedesktop.org/show_bug.cgi?id=109291
[fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
[fdo#109502]: https://bugs.freedesktop.org/show_bug.cgi?id=109502
[fdo#110723]: https://bugs.freedesktop.org/show_bug.cgi?id=110723
[fdo#110892]: https://bugs.freedesktop.org/show_bug.cgi?id=110892
[fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
[fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
[i915#1099]: https://gitlab.freedesktop.org/drm/intel/issues/1099
[i915#1149]: https://gitlab.freedesktop.org/drm/intel/issues/1149
[i915#1185]: https://gitlab.freedesktop.org/drm/intel/issues/1185
[i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188
[i915#132]: https://gitlab.freedesktop.org/drm/intel/issues/132
[i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
[i915#1804]: https://gitlab.freedesktop.org/drm/intel/issues/1804
[i915#1814]: https://gitlab.freedesktop.org/drm/intel/issues/1814
[i915#198]: https://gitlab.freedeskto
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20636/index.html