From: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: matthew.brost@intel.com, paulo.r.zanoni@intel.com, lionel.g.landwerlin@intel.com, tvrtko.ursulin@intel.com, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, matthew.auld@intel.com, jason@jlekstrand.net, daniel.vetter@intel.com, christian.koenig@amd.com
Subject: Re: [RFC 10/10] drm/i915/vm_bind: Fix vm->vm_bind_mutex and vm->mutex nesting
Date: Wed, 6 Jul 2022 22:56:53 -0700
Message-ID: <20220707055651.GO14039@nvishwa1-DESK>
In-Reply-To: <549c2e3253f847aabcc7366c9d5efa582e51f8e8.camel@linux.intel.com>

On Tue, Jul 05, 2022 at 10:40:56AM +0200, Thomas Hellström wrote:
>On Fri, 2022-07-01 at 15:50 -0700, Niranjana Vishwanathapura wrote:
>> VM_BIND functionality maintains that vm->vm_bind_mutex will never be
>> taken while holding vm->mutex.
>> However, while closing the 'vm', the vma is destroyed while holding
>> vm->mutex. But vma release needs to take vm->vm_bind_mutex in order
>> to delete the vma from the vm_bind_list. To avoid this, destroy the
>> vma outside vm->mutex while closing the 'vm'.
>>
>> Signed-off-by: Niranjana Vishwanathapura
>
>First, when introducing a new feature like this, we should not need to
>end the series with "Fix.." patches like this; rather, whatever needs
>to be fixed should be fixed where the code was introduced.
>

Yah, makes sense.

>Second, by analogy with the Linux kernel CPU mapping, could we instead
>think of the vm_bind_lock as being similar to the mmap_lock, and the
>vm_mutex as being similar to the i_mmap_lock, the former being used for
>VA manipulation and the latter when attaching / removing the backing
>store from the VA?
>
>Then we would not need to take the vm_bind_lock from vma destruction
>since the VA would already have been reclaimed at that point. For vm
>destruction here we'd loop over all relevant vm_bind VAs under the
>vm_bind_lock and call vm_unbind? Would that work?
>

Yah. In fact, in the vm_unbind call, we first do the VA reclaim
(i915_gem_vm_bind_remove()) under the vm_bind_lock and destroy the vma
(i915_vma_destroy()) outside the vm_bind_lock (under the object lock).
The vma destruction in the vm_bind call error path is a bit different,
but I think it can be handled as well.

Yah, as mentioned in the other thread, doing a VA reclaim
(i915_gem_vm_bind_remove()) early during VM destruction under the
vm_bind_lock, as you suggested, would fit in there nicely.

Niranjana

>/Thomas
>
>
>> <niranjana.vishwanathapura@intel.com>
>> ---
>>  drivers/gpu/drm/i915/gt/intel_gtt.c | 23 ++++++++++++++++++-----
>>  1 file changed, 18 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c
>> b/drivers/gpu/drm/i915/gt/intel_gtt.c
>> index 4ab3bda644ff..4f707d0eb3ef 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_gtt.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
>> @@ -109,7 +109,8 @@ int map_pt_dma_locked(struct i915_address_space
>> *vm, struct drm_i915_gem_object
>>  	return 0;
>>  }
>>
>> -static void clear_vm_list(struct list_head *list)
>> +static void clear_vm_list(struct list_head *list,
>> +			  struct list_head *destroy_list)
>>  {
>>  	struct i915_vma *vma, *vn;
>>
>> @@ -138,8 +139,7 @@ static void clear_vm_list(struct list_head *list)
>>  			i915_vm_resv_get(vma->vm);
>>  			vma->vm_ddestroy = true;
>>  		} else {
>> -			i915_vma_destroy_locked(vma);
>> -			i915_gem_object_put(obj);
>> +			list_move_tail(&vma->vm_link, destroy_list);
>>  		}
>>
>>  	}
>> @@ -147,16 +147,29 @@ static void clear_vm_list(struct list_head
>> *list)
>>
>>  static void __i915_vm_close(struct i915_address_space *vm)
>>  {
>> +	struct i915_vma *vma, *vn;
>> +	struct list_head list;
>> +
>> +	INIT_LIST_HEAD(&list);
>> +
>>  	mutex_lock(&vm->mutex);
>>
>> -	clear_vm_list(&vm->bound_list);
>> -	clear_vm_list(&vm->unbound_list);
>> +	clear_vm_list(&vm->bound_list, &list);
>> +	clear_vm_list(&vm->unbound_list, &list);
>>
>>  	/* Check for must-fix unanticipated side-effects */
>>  	GEM_BUG_ON(!list_empty(&vm->bound_list));
>>  	GEM_BUG_ON(!list_empty(&vm->unbound_list));
>>
>>  	mutex_unlock(&vm->mutex);
>> +
>> +	/* Destroy vmas outside vm->mutex */
>> +	list_for_each_entry_safe(vma, vn, &list, vm_link) {
>> +		struct drm_i915_gem_object *obj = vma->obj;
>> +
>> +		i915_vma_destroy(vma);
>> +		i915_gem_object_put(obj);
>> +	}
>>  }
>>
>>  /* lock the vm into the current ww, if we lock one, we lock all */
>
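[Editor's note] The patch above follows a common kernel locking pattern: unlink objects onto a private list while holding the inner lock (vm->mutex), then do the destruction, which may need other locks such as vm_bind_mutex, only after the inner lock is dropped, so the lock-ordering rule "never take vm_bind_mutex inside vm->mutex" is preserved. The sketch below models that pattern in standalone userspace C. It is illustrative only, not the i915 code: pthread_mutex stands in for vm->mutex, a minimal intrusive list replaces the kernel's list helpers, and all names (struct fake_vma, fake_vm_close(), etc.) are hypothetical.

```c
#include <pthread.h>
#include <stddef.h>

/* Minimal stand-in for the kernel's struct list_head. */
struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

static void list_add_tail(struct list_head *e, struct list_head *h)
{
	e->prev = h->prev;
	e->next = h;
	h->prev->next = e;
	h->prev = e;
}

/* Unlink from the current list and append to another (cf. list_move_tail()). */
static void list_move_tail(struct list_head *e, struct list_head *h)
{
	list_del(e);
	list_add_tail(e, h);
}

struct fake_vma {
	struct list_head vm_link;
	int destroyed;
};

struct fake_vm {
	pthread_mutex_t mutex;       /* plays the role of vm->mutex */
	struct list_head bound_list;
};

/*
 * Close the VM: unlink all vmas while holding vm->mutex, but defer the
 * actual destruction (which in the real driver would need locks that
 * must not nest inside vm->mutex) until after the mutex is released.
 */
static void fake_vm_close(struct fake_vm *vm)
{
	struct list_head destroy_list;
	struct list_head *pos, *n;

	list_init(&destroy_list);

	pthread_mutex_lock(&vm->mutex);
	/* Like clear_vm_list(): move entries onto the private list.
	 * 'n' is saved up front so moving 'pos' is safe mid-walk. */
	for (pos = vm->bound_list.next, n = pos->next;
	     pos != &vm->bound_list; pos = n, n = pos->next)
		list_move_tail(pos, &destroy_list);
	pthread_mutex_unlock(&vm->mutex);

	/* Destroy the vmas outside vm->mutex. */
	for (pos = destroy_list.next, n = pos->next;
	     pos != &destroy_list; pos = n, n = pos->next) {
		struct fake_vma *vma = (struct fake_vma *)
			((char *)pos - offsetof(struct fake_vma, vm_link));
		vma->destroyed = 1;    /* stands in for i915_vma_destroy() */
	}
}
```

The point of the detour through destroy_list is purely ordering: every lock acquired during "destruction" happens strictly after vm->mutex is released, so no inverted nesting is possible.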