From: Daniel Vetter <daniel@ffwll.ch>
To: Dmitry Osipenko <dmitry.osipenko@collabora.com>,
Daniel Stone <daniel@fooishbar.org>
Cc: David Airlie <airlied@linux.ie>,
dri-devel@lists.freedesktop.org,
Gurchetan Singh <gurchetansingh@chromium.org>,
Gerd Hoffmann <kraxel@redhat.com>,
Dmitry Osipenko <digetx@gmail.com>,
Steven Price <steven.price@arm.com>,
Gustavo Padovan <gustavo.padovan@collabora.com>,
Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
virtualization@lists.linux-foundation.org,
Daniel Almeida <daniel.almeida@collabora.com>,
Tomeu Vizoso <tomeu.vizoso@collabora.com>,
Gert Wollny <gert.wollny@collabora.com>,
Emil Velikov <emil.l.velikov@gmail.com>,
linux-kernel@vger.kernel.org,
Thomas Zimmermann <tzimmermann@suse.de>,
Robin Murphy <robin.murphy@arm.com>
Subject: Re: [PATCH v4 10/15] drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks
Date: Wed, 27 Apr 2022 16:50:04 +0200 [thread overview]
Message-ID: <YmlYHNlcmNMfOeyy@phenom.ffwll.local> (raw)
In-Reply-To: <d9e7bec1-fffb-e0c4-8659-ef3ce2c31280@collabora.com>
On Mon, Apr 18, 2022 at 10:18:54PM +0300, Dmitry Osipenko wrote:
> Hello,
>
> On 4/18/22 21:38, Thomas Zimmermann wrote:
> > Hi
> >
> > On 18.04.22 at 00:37, Dmitry Osipenko wrote:
> >> Replace drm_gem_shmem locks with the reservation lock to make GEM
> >> locking more consistent.
> >>
> >> Previously drm_gem_shmem_vmap() and drm_gem_shmem_get_pages() were
> >> protected by separate locks, now it's the same lock, but it doesn't
> >> make any difference for the current GEM SHMEM users. Only the
> >> Panfrost and Lima drivers use vmap(), and they do it in slow code
> >> paths, so there was no practical justification for using a separate
> >> lock in vmap().
> >>
> >> Suggested-by: Daniel Vetter <daniel@ffwll.ch>
> >> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> >> ---
> ...
> >> @@ -310,7 +306,7 @@ static int drm_gem_shmem_vmap_locked(struct
> >> drm_gem_shmem_object *shmem,
> >> } else {
> >> pgprot_t prot = PAGE_KERNEL;
> >> - ret = drm_gem_shmem_get_pages(shmem);
> >> + ret = drm_gem_shmem_get_pages_locked(shmem);
> >> if (ret)
> >> goto err_zero_use;
> >> @@ -360,11 +356,11 @@ int drm_gem_shmem_vmap(struct
> >> drm_gem_shmem_object *shmem,
> >> {
> >> int ret;
> >> - ret = mutex_lock_interruptible(&shmem->vmap_lock);
> >> + ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
> >> if (ret)
> >> return ret;
> >> ret = drm_gem_shmem_vmap_locked(shmem, map);
> >
> > Within drm_gem_shmem_vmap_locked(), there's a call to dma_buf_vmap() for
> > imported pages. If the exporter side also holds/acquires the same
> > reservation lock as our object, the whole thing can deadlock. We cannot
> > move dma_buf_vmap() out of the CS, because we still need to increment
> > the reference counter. I honestly don't know how to easily fix this
> > problem. There's a TODO item about replacing these locks at [1]. As
> > Daniel suggested this patch, we should talk to him about the issue.
> >
> > Best regards
> > Thomas
> >
> > [1]
> > https://www.kernel.org/doc/html/latest/gpu/todo.html#move-buffer-object-locking-to-dma-resv-lock
>
> Indeed, good catch! Perhaps we could simply use a separate lock for the
> vmapping of the *imported* GEMs? The vmap_use_count is used only by
> vmap/vunmap, so it doesn't matter which lock is used by these functions
> in the case of imported GEMs since we only need to protect the
> vmap_use_count.
Apologies for the late reply, I'm flooded.
I discussed this with Daniel Stone last week in a chat, roughly what we
need to do is:
1. Pick a function from shmem helpers.
2. Go through all drivers that call this, and make sure that we acquire
dma_resv_lock in the top level driver entry point for this.
3. Once all driver code paths are converted, add a dma_resv_assert_held()
call to that function to make sure the conversion is complete and correct.
4. Repeat 1-3 until all shmem helper functions are converted over.
5. Ditch the 3 different shmem helper locks.
The trouble is that I forgot that vmap is a thing, so that needs more
work. I think there are two approaches here:
- Do the vmap at import time. This is the trick we used to untangle the
dma_resv_lock issues around dma_buf_attachment_map()
- Change the dma_buf_vmap rules so that callers must hold the dma_resv_lock.
- Maybe also do what you suggest and keep a separate lock for this, but
the fundamental issue is that this doesn't really work - if you share
buffers both ways with two drivers using shmem helpers, then the
ordering of this vmap_count_mutex vs dma_resv_lock is inconsistent and
you can get some nice deadlocks. So not a great approach (and also the
reason why we really need to get everyone to move towards dma_resv_lock
as _the_ buffer object lock, since otherwise we'll never get a
consistent lock nesting hierarchy).
The trouble here is that trying to be clever and doing the conversion just
in shmem helpers won't work, because there are a lot of cases where the
drivers are all kinds of inconsistent with their locking.
Adding Daniel S; also, for questions it'd probably be fastest to chat on irc.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch