From: Chris Wilson <chris@chris-wilson.co.uk>
To: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>,
	intel-gfx@lists.freedesktop.org
Cc: Matthew Auld <matthew.auld@intel.com>
Subject: Re: [PATCH 1/4] drm/i915: Allow i915 to manage the vma offset nodes instead of drm core
Date: Fri, 15 Nov 2019 14:17:19 +0000	[thread overview]
Message-ID: <157382743979.11997.1764940468404555607@skylake-alporthouse-com> (raw)
In-Reply-To: <20191115114549.23716-1-abdiel.janulgue@linux.intel.com>

Quoting Abdiel Janulgue (2019-11-15 11:45:46)
> -static int create_mmap_offset(struct drm_i915_gem_object *obj)
> +void i915_gem_object_release_mmap_offset(struct drm_i915_gem_object *obj)
> +{
> +       struct i915_mmap_offset *mmo;
> +
> +       mutex_lock(&obj->mmo_lock);
> +       list_for_each_entry(mmo, &obj->mmap_offsets, offset) {
> +               /* vma_node_unmap for GTT mmaps handled already in
> +                * __i915_gem_object_release_mmap_gtt
> +                */
> +               if (mmo->mmap_type != I915_MMAP_TYPE_GTT)

Tempted to say always do it, but that would be a waste indeed.

> +                       drm_vma_node_unmap(&mmo->vma_node,
> +                                          obj->base.dev->anon_inode->i_mapping);
> +       }
> +       mutex_unlock(&obj->mmo_lock);
> +}

> +void i915_mmap_offset_destroy(struct i915_mmap_offset *mmo, struct mutex *mutex)
> +{
> +       if (mmo->file)
> +               drm_vma_node_revoke(&mmo->vma_node, mmo->file);

Wait a sec.

The mmo is global, one per object per type. Not one per object per type
per client.

We shouldn't be associated with a single mmo->file. That is enough
address space magnification for a single process to be able to exhaust
the entire global address space...

How's this meant to work?
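
For context, a minimal sketch of the model being implied here: the node is
one per object per type and shared by every client, so access would be
granted and revoked per drm_file through the DRM vma manager rather than by
recording a single owner in mmo->file. The helper names below are
hypothetical; only the drm_vma_node_allow()/drm_vma_node_revoke() pairing is
the real API.

/*
 * Sketch only: the offset node is shared across clients, so track
 * access per drm_file instead of storing a single mmo->file.
 * Helper names here are illustrative, not from the patch.
 */
static int i915_mmo_allow(struct i915_mmap_offset *mmo,
                          struct drm_file *file)
{
        /* grant this client permission to mmap via the shared offset */
        return drm_vma_node_allow(&mmo->vma_node, file);
}

static void i915_mmo_revoke(struct i915_mmap_offset *mmo,
                            struct drm_file *file)
{
        /* drop only this client's grant; other clients keep theirs */
        drm_vma_node_revoke(&mmo->vma_node, file);
}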

> @@ -118,6 +132,11 @@ struct drm_i915_gem_object {
>         unsigned int userfault_count;
>         struct list_head userfault_link;
>  
> +       /* Protects access to mmap offsets */
> +       struct mutex mmo_lock;
> +       struct list_head mmap_offsets;
> +       bool readonly:1;

Go on, steal a bit from flags.
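
A minimal sketch of what stealing a bit could look like, assuming obj->flags
has a spare bit; the flag name and bit position are made up for illustration
and are not taken from the patch or any eventual fix.

/* Illustrative only: reuse a spare bit in obj->flags rather than
 * growing the struct with a separate bitfield. Flag name and bit
 * position are assumptions.
 */
#define I915_BO_READONLY BIT(2)

static inline void
i915_gem_object_set_readonly(struct drm_i915_gem_object *obj)
{
        obj->flags |= I915_BO_READONLY;
}

static inline bool
i915_gem_object_is_readonly(const struct drm_i915_gem_object *obj)
{
        return obj->flags & I915_BO_READONLY;
}
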
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Thread overview: 28+ messages

2019-11-15 11:45 [PATCH 1/4] drm/i915: Allow i915 to manage the vma offset nodes instead of drm core Abdiel Janulgue
2019-11-15 11:45 ` [PATCH 2/4] drm/i915: Introduce DRM_I915_GEM_MMAP_OFFSET Abdiel Janulgue
2019-11-15 13:51   ` Chris Wilson
2019-11-15 14:31   ` Chris Wilson
2019-11-15 11:45 ` [PATCH 3/4] drm/i915: cpu-map based dumb buffers Abdiel Janulgue
2019-11-15 13:54   ` Chris Wilson
2019-11-15 14:26     ` Chris Wilson
2019-11-15 11:45 ` [PATCH 4/4] drm/i915: Add cpu fault handler for mmap_offset Abdiel Janulgue
2019-11-15 13:58   ` Chris Wilson
2019-11-15 13:56 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/4] drm/i915: Allow i915 to manage the vma offset nodes instead of drm core Patchwork
2019-11-15 13:57 ` ✗ Fi.CI.SPARSE: " Patchwork
2019-11-15 14:17 ` Chris Wilson [this message]
2019-11-15 14:23 ` ✗ Fi.CI.BAT: failure for series starting with [1/4] " Patchwork
  -- other postings of [PATCH 1/4], matched on Subject: --
2019-11-19 11:37 [PATCH 1/4] " Abdiel Janulgue
2019-11-14 19:09 Abdiel Janulgue
