From: Chia-I Wu <olvaffe@gmail.com>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: Chad Versace <chadversary@chromium.org>,
	ML dri-devel <dri-devel@lists.freedesktop.org>,
	Gurchetan Singh <gurchetansingh@chromium.org>,
	David Stevens <stevensd@chromium.org>,
	John Bates <jbates@chromium.org>
Subject: Re: [RFC PATCH 0/8] *** Per context fencing ***
Date: Wed, 18 Mar 2020 04:32:24 +0800	[thread overview]
Message-ID: <CAPaKu7S=fmsGDY+txgFBcYDaBE9VaBubtEvVMEWj2yQ_UL04bQ@mail.gmail.com> (raw)
In-Reply-To: <20200316074404.z4xbta6qyrm74oxo@sirius.home.kraxel.org>

On Mon, Mar 16, 2020 at 3:44 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>   Hi,
>
> > >> At virtio level it is pretty simple:  The host completes the SUBMIT_3D
> > >> virtio command when it finished rendering, period.
> > >>
> > >>
> > >> On the guest side we don't need the fence_id.  The completion callback
> > >> gets passed the virtio_gpu_vbuffer, so it can figure which command did
> > >> actually complete without looking at virtio_gpu_ctrl_hdr->fence_id.
> > >>
> > >> On the host side we depend on the fence_id right now, but only because
> > >> that is the way the virgl_renderer_callbacks->write_fence interface is
> > >> designed.  We have to change that anyway for per-context (or whatever)
> > >> fences, so it should not be a problem to drop the fence_id dependency
> > >> too and just pass around an opaque pointer instead.
> >
> > I am still catching up, but IIUC, indeed I don't think the host needs
> > to depend on fence_id.  We should be able to repurpose fence_id.
>
> I'd rather ignore it altogether for FENCE_V2 (or whatever we call the
> feature flag).

That's fine when one assumes each virtqueue has one host GPU timeline.
But when there are multiple timelines (e.g., when multiplexing multiple
contexts over one virtqueue, or with multiple VkQueues), fence_id can be
repurposed as the host timeline id.

>
> > On the other hand, the VIRTIO_GPU_FLAG_FENCE flag is interesting, and
> > it indicates that the vbuf is on the host GPU timeline instead of the
> > host CPU timeline.
>
> Yep, we have to keep that (unless we do command completion on GPU
> timeline unconditionally with FENCE_V2).

I think it will be useful when EXECBUFFER is used for a metadata query
that writes the metadata directly to a guest BO's sg list.  We want such
a query to be on the CPU timeline.

> cheers,
>   Gerd
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Thread overview: 23+ messages
2020-03-10  1:08 [RFC PATCH 0/8] *** Per context fencing *** Gurchetan Singh
2020-03-10  1:08 ` [RFC PATCH 1/8] drm/virtio: use fence_id when processing fences Gurchetan Singh
2020-03-10  1:08 ` [RFC PATCH 2/8] drm/virtio: allocate a fence context for every 3D context Gurchetan Singh
2020-03-10  1:08 ` [RFC PATCH 3/8] drm/virtio: plumb virtio_gpu_fpriv to virtio_gpu_fence_alloc Gurchetan Singh
2020-03-10  1:08 ` [RFC PATCH 4/8] drm/virtio: rename sync_seq and last_seq Gurchetan Singh
2020-03-10  1:08 ` [RFC PATCH 5/8] drm/virtio: track fence_id in virtio_gpu_fence Gurchetan Singh
2020-03-10  1:08 ` [RFC PATCH 6/8] virtio/drm: rework virtio_fence_signaled Gurchetan Singh
2020-03-10  1:08 ` [RFC PATCH 7/8] drm/virtio: check context when signaling Gurchetan Singh
2020-03-10  1:08 ` [RFC PATCH 8/8] drm/virtio: enable per context fencing Gurchetan Singh
2020-03-10  7:43 ` [RFC PATCH 0/8] *** Per context fencing *** Gerd Hoffmann
2020-03-10 22:57   ` Gurchetan Singh
2020-03-11 10:36     ` Gerd Hoffmann
2020-03-11 17:35       ` Chia-I Wu
2020-03-12  9:06         ` Gerd Hoffmann
2020-03-11 23:36       ` Gurchetan Singh
2020-03-12  0:43         ` Chia-I Wu
2020-03-12  9:10           ` Gerd Hoffmann
2020-03-12  9:29         ` Gerd Hoffmann
2020-03-12 23:08           ` Gurchetan Singh
2020-03-13 21:20             ` Chia-I Wu
2020-03-16  7:44               ` Gerd Hoffmann
2020-03-17 20:32                 ` Chia-I Wu [this message]
2020-03-18  6:40                   ` Gerd Hoffmann