From: Gurchetan Singh <gurchetansingh@chromium.org>
To: Chia-I Wu <olvaffe@gmail.com>
Cc: ML dri-devel <dri-devel@lists.freedesktop.org>,
	virtio-dev@lists.oasis-open.org,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: Re: [virtio-dev] [PATCH v1 09/12] drm/virtio: implement context init: allocate an array of fence contexts
Date: Mon, 13 Sep 2021 18:57:38 -0700	[thread overview]
Message-ID: <CAAfnVB=+fpYhnjpbAzAgvYBnH9amRx8Nn0kkSR3PBcqPpjWLbw@mail.gmail.com> (raw)
In-Reply-To: <CAPaKu7SvzWPDAOKVSot9W4M880Hgdjm=RbKVrU=nthOaXh_hEg@mail.gmail.com>

On Mon, Sep 13, 2021 at 11:52 AM Chia-I Wu <olvaffe@gmail.com> wrote:

> .
>
> On Mon, Sep 13, 2021 at 10:48 AM Gurchetan Singh
> <gurchetansingh@chromium.org> wrote:
> >
> >
> >
> > On Fri, Sep 10, 2021 at 12:33 PM Chia-I Wu <olvaffe@gmail.com> wrote:
> >>
> >> On Wed, Sep 8, 2021 at 6:37 PM Gurchetan Singh
> >> <gurchetansingh@chromium.org> wrote:
> >> >
> >> > We don't want fences from different 3D contexts (virgl, gfxstream,
> >> > venus) to be on the same timeline.  With explicit context creation,
> >> > we can specify the number of rings each context wants.
> >> >
> >> > Execbuffer can specify which ring to use.
> >> >
> >> > Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> >> > Acked-by: Lingfeng Yang <lfy@google.com>
> >> > ---
> >> >  drivers/gpu/drm/virtio/virtgpu_drv.h   |  3 +++
> >> >  drivers/gpu/drm/virtio/virtgpu_ioctl.c | 34 ++++++++++++++++++++++++--
> >> >  2 files changed, 35 insertions(+), 2 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> >> > index a5142d60c2fa..cca9ab505deb 100644
> >> > --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> >> > +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> >> > @@ -56,6 +56,7 @@
> >> >  #define STATE_ERR 2
> >> >
> >> >  #define MAX_CAPSET_ID 63
> >> > +#define MAX_RINGS 64
> >> >
> >> >  struct virtio_gpu_object_params {
> >> >         unsigned long size;
> >> > @@ -263,6 +264,8 @@ struct virtio_gpu_fpriv {
> >> >         uint32_t ctx_id;
> >> >         uint32_t context_init;
> >> >         bool context_created;
> >> > +       uint32_t num_rings;
> >> > +       uint64_t base_fence_ctx;
> >> >         struct mutex context_lock;
> >> >  };
> >> >
> >> > diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >> > index f51f3393a194..262f79210283 100644
> >> > --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >> > +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >> > @@ -99,6 +99,11 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
> >> >         int in_fence_fd = exbuf->fence_fd;
> >> >         int out_fence_fd = -1;
> >> >         void *buf;
> >> > +       uint64_t fence_ctx;
> >> > +       uint32_t ring_idx;
> >> > +
> >> > +       fence_ctx = vgdev->fence_drv.context;
> >> > +       ring_idx = 0;
> >> >
> >> >         if (vgdev->has_virgl_3d == false)
> >> >                 return -ENOSYS;
> >> > @@ -106,6 +111,17 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
> >> >         if ((exbuf->flags & ~VIRTGPU_EXECBUF_FLAGS))
> >> >                 return -EINVAL;
> >> >
> >> > +       if ((exbuf->flags & VIRTGPU_EXECBUF_RING_IDX)) {
> >> > +               if (exbuf->ring_idx >= vfpriv->num_rings)
> >> > +                       return -EINVAL;
> >> > +
> >> > +               if (!vfpriv->base_fence_ctx)
> >> > +                       return -EINVAL;
> >> > +
> >> > +               fence_ctx = vfpriv->base_fence_ctx;
> >> > +               ring_idx = exbuf->ring_idx;
> >> > +       }
> >> > +
> >> >         exbuf->fence_fd = -1;
> >> >
> >> >         virtio_gpu_create_context(dev, file);
> >> > @@ -173,7 +189,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
> >> >                         goto out_memdup;
> >> >         }
> >> >
> >> > -       out_fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
> >> > +       out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
> >> >         if(!out_fence) {
> >> >                 ret = -ENOMEM;
> >> >                 goto out_unresv;
> >> > @@ -691,7 +707,7 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
> >> >                 return -EINVAL;
> >> >
> >> >         /* Number of unique parameters supported at this time. */
> >> > -       if (num_params > 1)
> >> > +       if (num_params > 2)
> >> >                 return -EINVAL;
> >> >
> >> >         ctx_set_params = memdup_user(u64_to_user_ptr(args->ctx_set_params),
> >> > @@ -731,6 +747,20 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
> >> >
> >> >                         vfpriv->context_init |= value;
> >> >                         break;
> >> > +               case VIRTGPU_CONTEXT_PARAM_NUM_RINGS:
> >> > +                       if (vfpriv->base_fence_ctx) {
> >> > +                               ret = -EINVAL;
> >> > +                               goto out_unlock;
> >> > +                       }
> >> > +
> >> > +                       if (value > MAX_RINGS) {
> >> > +                               ret = -EINVAL;
> >> > +                               goto out_unlock;
> >> > +                       }
> >> > +
> >> > +                       vfpriv->base_fence_ctx = dma_fence_context_alloc(value);
> >> With multiple fence contexts, we should do something about implicit fencing.
> >>
> >> The classic example is Mesa and X server.  When both use virgl and the
> >> global fence context, no dma_fence_wait is fine.  But when Mesa uses
> >> venus and the ring fence context, dma_fence_wait should be inserted.
> >
> >
> >  If I read your comment correctly, the use case is:
> >
> > context A (venus)
> >
> > sharing a render target with
> >
> > context B (Xserver backed virgl)
> >
> > ?
> >
> > In which function do you envisage dma_fence_wait(...) being inserted?
> > Doesn't implicit synchronization mean there's no fence to share between
> > contexts (only buffer objects)?
>
> Fences can be implicitly shared via reservation objects associated
> with buffer objects.
>
> > It may be possible to wait on the reservation object associated with a
> > buffer object from a different context (userspace can also do
> > DRM_IOCTL_VIRTGPU_WAIT), but I'm not sure if that's what you're looking for.
>
> Right, that's what I am looking for.  Userspace expects implicit
> fencing to work.  While there is work underway to move userspace to
> explicit fencing, it is not there yet in general, and we can't require
> userspace to do explicit fencing or DRM_IOCTL_VIRTGPU_WAIT.


Another option would be to use the upcoming
DMA_BUF_IOCTL_EXPORT_SYNC_FILE + VIRTGPU_EXECBUF_FENCE_FD_IN (which checks
the dma_fence context).
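
Roughly, the userspace side could look like the sketch below.  It assumes
the EXPORT_SYNC_FILE uapi lands as currently proposed and uses the
ring_idx / VIRTGPU_EXECBUF_RING_IDX additions from this series; the helper
name is just for illustration, and error handling plus closing the
exported fd are omitted:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>
#include <drm/virtgpu_drm.h>

/* Illustrative sketch: bridge implicit fences on a shared dma-buf into an
 * explicit in-fence for a virtio-gpu submission. */
static int submit_with_implicit_in_fence(int drm_fd, int dmabuf_fd,
                                         void *cmds, uint32_t cmds_size,
                                         uint32_t ring_idx)
{
        /* Collect the fences already attached to the shared BO. */
        struct dma_buf_export_sync_file exp = {
                .flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_WRITE,
                .fd = -1,
        };
        struct drm_virtgpu_execbuffer exbuf;

        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &exp))
                return -1;

        memset(&exbuf, 0, sizeof(exbuf));
        exbuf.flags = VIRTGPU_EXECBUF_FENCE_FD_IN | VIRTGPU_EXECBUF_RING_IDX;
        exbuf.command = (uintptr_t)cmds;
        exbuf.size = cmds_size;
        exbuf.fence_fd = exp.fd;   /* wait on other contexts' work first */
        exbuf.ring_idx = ring_idx; /* per-context ring from this series */

        return ioctl(drm_fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &exbuf);
}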

Generally, if it only requires virgl changes, userspace changes are fine
since OpenGL drivers implement implicit sync in many ways.  Waiting on the
reservation object in the kernel is fine too though.
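
If we go the kernel route, I imagine a helper along these lines in the
execbuf path (rough sketch only, not in this patch; it assumes the current
dma_resv_wait_timeout(resv, wait_all, intr, timeout) signature, and the
helper name is made up):

/* Rough sketch: before queueing the submission, wait for fences other
 * contexts have left on the shared BOs. */
static int virtio_gpu_wait_implicit_fences(struct virtio_gpu_object_array *objs)
{
        long ret;
        u32 i;

        for (i = 0; i < objs->nents; i++) {
                /* Wait for all fences (readers and writers) on each BO. */
                ret = dma_resv_wait_timeout(objs->objs[i]->resv, true, true,
                                            MAX_SCHEDULE_TIMEOUT);
                if (ret < 0)
                        return ret;
        }
        return 0;
}

virtio_gpu_execbuffer_ioctl() could call it on buflist only when
vfpriv->base_fence_ctx is set, so the existing global-fence-context path
stays untouched.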

Venus doesn't use the NUM_RINGS param yet, though.  Getting all permutations
of context type + display integration working would take some time (this
patchset was mostly tested with Wayland + gfxstream/Android [no implicit
sync]).
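
For reference, the opt-in a driver like venus would eventually do looks
roughly like this (param names and values follow the v1 uapi patches in
this series, so treat the details as tentative; helper name is
illustrative):

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/virtgpu_drm.h>

static int context_init_with_rings(int drm_fd, uint64_t capset_id,
                                   uint64_t num_rings)
{
        struct drm_virtgpu_context_set_param params[] = {
                { .param = VIRTGPU_CONTEXT_PARAM_CAPSET_ID, .value = capset_id },
                /* Allocates num_rings fence contexts in the kernel
                 * (patch 09/12); must be <= MAX_RINGS (64). */
                { .param = VIRTGPU_CONTEXT_PARAM_NUM_RINGS, .value = num_rings },
        };
        struct drm_virtgpu_context_init init = {
                .num_params = 2,
                .ctx_set_params = (uintptr_t)params,
        };

        return ioctl(drm_fd, DRM_IOCTL_VIRTGPU_CONTEXT_INIT, &init);
}

Each execbuf would then set VIRTGPU_EXECBUF_RING_IDX with
exbuf.ring_idx < num_rings to pick one of the fence contexts allocated
above.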

WDYT of someone figuring out virgl/venus interop later, independently of
this patchset?




>
> >
> >>
> >>
> >> > +                       vfpriv->num_rings = value;
> >> > +                       break;
> >> >                 default:
> >> >                         ret = -EINVAL;
> >> >                         goto out_unlock;
> >> > --
> >> > 2.33.0.153.gba50c8fa24-goog
> >> >
>

Thread overview: 45+ messages
2021-09-09  1:37 [PATCH v1 00/12] Context types Gurchetan Singh
2021-09-09  1:37 ` [virtio-dev] " Gurchetan Singh
2021-09-09  1:37 ` [PATCH v1 01/12] virtio-gpu api: multiple context types with explicit initialization Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-09 10:13   ` kernel test robot
2021-09-09 10:13     ` kernel test robot
2021-09-09  1:37 ` [PATCH v1 02/12] drm/virtgpu api: create context init feature Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-09  1:37 ` [PATCH v1 03/12] drm/virtio: implement context init: track valid capabilities in a mask Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-09  1:37 ` [PATCH v1 04/12] drm/virtio: implement context init: probe for feature Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-09  1:37 ` [PATCH v1 05/12] drm/virtio: implement context init: support init ioctl Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-09  1:37 ` [PATCH v1 06/12] drm/virtio: implement context init: track {ring_idx, emit_fence_info} in virtio_gpu_fence Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-09  1:37 ` [PATCH v1 07/12] drm/virtio: implement context init: plumb {base_fence_ctx, ring_idx} to virtio_gpu_fence_alloc Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-09  1:37 ` [PATCH v1 08/12] drm/virtio: implement context init: stop using drv->context when creating fence Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-15  5:53   ` Gerd Hoffmann
2021-09-15  5:53     ` [virtio-dev] " Gerd Hoffmann
2021-09-15 23:39     ` Gurchetan Singh
2021-09-16  5:50       ` Gerd Hoffmann
2021-09-16  5:50         ` Gerd Hoffmann
2021-09-09  1:37 ` [PATCH v1 09/12] drm/virtio: implement context init: allocate an array of fence contexts Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-10 19:33   ` Chia-I Wu
2021-09-10 19:33     ` Chia-I Wu
2021-09-13 17:48     ` Gurchetan Singh
2021-09-13 18:52       ` Chia-I Wu
2021-09-13 18:52         ` Chia-I Wu
2021-09-14  1:57         ` Gurchetan Singh [this message]
2021-09-14 17:52           ` Chia-I Wu
2021-09-14 17:52             ` Chia-I Wu
2021-09-15  1:25             ` Gurchetan Singh
2021-09-16  0:11               ` Chia-I Wu
2021-09-16  0:11                 ` Chia-I Wu
2021-09-17  1:06                 ` Gurchetan Singh
2021-09-09  1:37 ` [PATCH v1 10/12] drm/virtio: implement context init: handle VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-09  1:37 ` [PATCH v1 11/12] drm/virtio: implement context init: add virtio_gpu_fence_event Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
2021-09-09  1:37 ` [PATCH v1 12/12] drm/virtio: implement context init: advertise feature to userspace Gurchetan Singh
2021-09-09  1:37   ` [virtio-dev] " Gurchetan Singh
