From: Kenneth Graunke <kenneth@whitecape.org>
To: intel-gfx@lists.freedesktop.org, Matthew Auld <matthew.auld@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>,
Jordan Justen <jordan.l.justen@intel.com>,
dri-devel@lists.freedesktop.org, CQ Tang <cq.tang@intel.com>,
Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>,
Jason Ekstrand <jason@jlekstrand.net>,
mesa-dev@lists.freedesktop.org,
Daniel Vetter <daniel.vetter@intel.com>
Subject: Re: [PATCH 6/9] drm/i915/uapi: implement object placement extension
Date: Wed, 28 Apr 2021 10:28:03 -0700
Message-ID: <6803385.KsXFIaTQHz@mizzik>
In-Reply-To: <20210426093901.28937-6-matthew.auld@intel.com>
On Monday, April 26, 2021 2:38:58 AM PDT Matthew Auld wrote:
> Add new extension to support setting an immutable-priority-list of
> potential placements, at creation time.
>
> If we use the normal gem_create or gem_create_ext without the
> extensions/placements then we still get the old behaviour with only
> placing the object in system memory.
>
> v2(Daniel & Jason):
> - Add a bunch of kernel-doc
> - Simplify design for placements extension
>
> Testcase: igt/gem_create/create-ext-placement-sanity-check
> Testcase: igt/gem_create/create-ext-placement-each
> Testcase: igt/gem_create/create-ext-placement-all
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Signed-off-by: CQ Tang <cq.tang@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
> Cc: Jordan Justen <jordan.l.justen@intel.com>
> Cc: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Kenneth Graunke <kenneth@whitecape.org>
> Cc: Jason Ekstrand <jason@jlekstrand.net>
> Cc: Dave Airlie <airlied@gmail.com>
> Cc: dri-devel@lists.freedesktop.org
> Cc: mesa-dev@lists.freedesktop.org
> ---
> drivers/gpu/drm/i915/gem/i915_gem_create.c | 215 ++++++++++++++++--
> drivers/gpu/drm/i915/gem/i915_gem_object.c | 3 +
> .../gpu/drm/i915/gem/i915_gem_object_types.h | 6 +
> .../drm/i915/gem/selftests/i915_gem_mman.c | 26 +++
> drivers/gpu/drm/i915/intel_memory_region.c | 16 ++
> drivers/gpu/drm/i915/intel_memory_region.h | 4 +
> include/uapi/drm/i915_drm.h | 62 +++++
> 7 files changed, 315 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> index 90e9eb6601b5..895f1666a8d3 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
> @@ -4,12 +4,47 @@
> */
>
> #include "gem/i915_gem_ioctls.h"
> +#include "gem/i915_gem_lmem.h"
> #include "gem/i915_gem_region.h"
>
> #include "i915_drv.h"
> #include "i915_trace.h"
> #include "i915_user_extensions.h"
>
> +static u32 object_max_page_size(struct drm_i915_gem_object *obj)
> +{
> + u32 max_page_size = 0;
> + int i;
> +
> + for (i = 0; i < obj->mm.n_placements; i++) {
> + struct intel_memory_region *mr = obj->mm.placements[i];
> +
> + GEM_BUG_ON(!is_power_of_2(mr->min_page_size));
> + max_page_size = max_t(u32, max_page_size, mr->min_page_size);
> + }
> +
> + GEM_BUG_ON(!max_page_size);
> + return max_page_size;
> +}
> +
> +static void object_set_placements(struct drm_i915_gem_object *obj,
> + struct intel_memory_region **placements,
> + unsigned int n_placements)
> +{
> + GEM_BUG_ON(!n_placements);
> +
> + if (n_placements == 1) {
> + struct intel_memory_region *mr = placements[0];
> + struct drm_i915_private *i915 = mr->i915;
> +
> + obj->mm.placements = &i915->mm.regions[mr->id];
> + obj->mm.n_placements = 1;
> + } else {
> + obj->mm.placements = placements;
> + obj->mm.n_placements = n_placements;
> + }
> +}
> +
I found this helper function rather odd at first glance. In the
general case, it simply sets fields based on the parameters... but in
the n == 1 case, it ignores the passed-in array and points at
something else entirely.

On further inspection, this makes sense: normally, we have an array
of multiple placements in priority order, and that array is
(essentially) malloc'd. But if there's only one item, keeping a
malloc'd array of one thing around is pretty silly. We can just point
at the entry in the per-device region table directly. Which means the
callers can kfree their temporary array, and the object destructor
should only free obj->mm.placements in the multi-placement case.
Maybe a comment saying
/*
* For the common case of one memory region, skip storing an
* allocated array and just point at the region directly.
*/
would be helpful?
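To make the ownership rule concrete, here's a standalone userspace sketch of the pattern (plain structs and malloc/free standing in for the i915 types and kmalloc/kfree — none of these names are from the patch):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for intel_memory_region and the GEM object. */
struct region { int id; };

struct object {
	struct region **placements;
	unsigned int n_placements;
};

/* Stand-in for the per-device region table, i915->mm.regions[]. */
static struct region regions[4] = { {0}, {1}, {2}, {3} };
static struct region *region_ptrs[4] = {
	&regions[0], &regions[1], &regions[2], &regions[3],
};

static void object_set_placements(struct object *obj,
				  struct region **placements,
				  unsigned int n)
{
	assert(n);

	if (n == 1) {
		/*
		 * For the common case of one memory region, skip storing an
		 * allocated array and just point at the per-device region
		 * table entry directly.  The caller keeps ownership of (and
		 * may free) its temporary array.
		 */
		obj->placements = &region_ptrs[placements[0]->id];
		obj->n_placements = 1;
	} else {
		/* Multi-placement: the object takes ownership of the array. */
		obj->placements = placements;
		obj->n_placements = n;
	}
}

static void object_destroy(struct object *obj)
{
	/* Only free what we own: the multi-entry array, never the table. */
	if (obj->n_placements > 1)
		free(obj->placements);
}
```

The asymmetry the comment would document is exactly the `n == 1` branch: the object ends up pointing into a long-lived table rather than at caller-allocated memory, so the destructor must not free it.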