From: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: chris.p.wilson@intel.com, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, thomas.hellstrom@intel.com,
    Lionel Landwerlin <lionel.g.landwerlin@intel.com>,
    daniel.vetter@intel.com, christian.koenig@amd.com
Subject: Re: [Intel-gfx] [RFC v3 1/3] drm/doc/rfc: VM_BIND feature design document
Date: Thu, 2 Jun 2022 13:24:54 -0700
Message-ID: <20220602202453.GR4461@nvishwa1-DESK> (raw)
In-Reply-To: <20220602162245.GA15751@jons-linux-dev-box>

On Thu, Jun 02, 2022 at 09:22:46AM -0700, Matthew Brost wrote:
>On Thu, Jun 02, 2022 at 08:42:13AM +0300, Lionel Landwerlin wrote:
>> On 02/06/2022 00:18, Matthew Brost wrote:
>> > On Wed, Jun 01, 2022 at 05:25:49PM +0300, Lionel Landwerlin wrote:
>> > > On 17/05/2022 21:32, Niranjana Vishwanathapura wrote:
>> > > > +VM_BIND/UNBIND ioctl will immediately start binding/unbinding the mapping in an
>> > > > +async worker. The binding and unbinding will work like a special GPU engine.
>> > > > +The binding and unbinding operations are serialized and will wait on specified
>> > > > +input fences before the operation and will signal the output fences upon the
>> > > > +completion of the operation. Due to serialization, completion of an operation
>> > > > +will also indicate that all previous operations are also complete.
>> > >
>> > > I guess we should avoid saying "will immediately start binding/unbinding" if
>> > > there are fences involved.
>> > >
>> > > And the fact that it's happening in an async worker seems to imply it's not
>> > > immediate.
>> > >
>> > > I have a question on the behavior of the bind operation when no input fence
>> > > is provided. Let's say I do:
>> > >
>> > > VM_BIND (out_fence=fence1)
>> > > VM_BIND (out_fence=fence2)
>> > > VM_BIND (out_fence=fence3)
>> > >
>> > > In what order are the fences going to be signaled?
>> > > In the order of the VM_BIND ioctls? Or out of order?
>> > >
>> > > Because you wrote "serialized", I assume it's in order.
>> > >
>> > > One thing I didn't realize is that because we only get one "VM_BIND" engine,
>> > > there is a disconnect from the Vulkan specification.
>> > >
>> > > In Vulkan, VM_BIND operations are serialized but per engine.
>> > >
>> > > So you could have something like this:
>> > >
>> > > VM_BIND (engine=rcs0, in_fence=fence1, out_fence=fence2)
>> > > VM_BIND (engine=ccs0, in_fence=fence3, out_fence=fence4)
>> > >
>> > Question - let's say this is done after the above operations:
>> >
>> > EXEC (engine=ccs0, in_fence=NULL, out_fence=NULL)
>> >
>> > Is the exec ordered with respect to the binds (i.e. would fence3 & fence4 be
>> > signaled before the exec starts)?
>> >
>> > Matt
>>
>> Hi Matt,
>>
>> From the Vulkan point of view, everything is serialized within an engine (we
>> map that to a VkQueue).
>>
>> So with:
>>
>> EXEC (engine=ccs0, in_fence=NULL, out_fence=NULL)
>> VM_BIND (engine=ccs0, in_fence=fence3, out_fence=fence4)
>>
>> EXEC completes first, then VM_BIND executes.
>>
>> To be even clearer:
>>
>> EXEC (engine=ccs0, in_fence=fence2, out_fence=NULL)
>> VM_BIND (engine=ccs0, in_fence=fence3, out_fence=fence4)
>>
>> EXEC will wait until fence2 is signaled.
>> Once fence2 is signaled, EXEC proceeds and finishes, and only after it is
>> done does VM_BIND execute.
>>
>> It would be kind of like having the VM_BIND operation be another batch
>> executed from the ringbuffer.
>>
>Yea, this makes sense. I think of VM_BINDs as more or less just another
>version of an EXEC and this fits with that.
>

Note that VM_BIND itself can bind while an EXEC (GPU job) is running
(say, getting binds ready for the next submission). It is up to the user,
though, how to use it.
>In practice I don't think we can share a ring, but we should be able to
>present an engine (again likely a gem context in i915) to the user that
>orders VM_BINDs / EXECs if that is what Vulkan expects, at least I think.
>

I have responded in the other thread on this.

Niranjana

>Hopefully Niranjana + Daniel agree.
>
>Matt
>
>> -Lionel
>>
>> > > fence1 is not signaled
>> > >
>> > > fence3 is signaled
>> > >
>> > > So the second VM_BIND will proceed before the first VM_BIND.
>> > >
>> > > I guess we can deal with that scenario in userspace by doing the wait
>> > > ourselves in one thread per engine.
>> > >
>> > > But then it makes the VM_BIND input fences useless.
>> > >
>> > > Daniel: what do you think? Should we rework this or just deal with wait
>> > > fences in userspace?
>> > >
>> > > Sorry I noticed this late.
>> > >
>> > > -Lionel