From: Rob Clark <robdclark@gmail.com>
To: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: David Airlie <airlied@linux.ie>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Gurchetan Singh <gurchetansingh@chromium.org>,
	Chia-I Wu <olvaffe@gmail.com>, Daniel Vetter <daniel@ffwll.ch>,
	Daniel Almeida <daniel.almeida@collabora.com>,
	Gert Wollny <gert.wollny@collabora.com>,
	Tomeu Vizoso <tomeu.vizoso@collabora.com>,
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Maxime Ripard <mripard@kernel.org>,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Rob Herring <robh@kernel.org>,
	Steven Price <steven.price@arm.com>,
	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"open list:VIRTIO GPU DRIVER" 
	<virtualization@lists.linux-foundation.org>,
	Gustavo Padovan <gustavo.padovan@collabora.com>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	Dmitry Osipenko <digetx@gmail.com>
Subject: Re: [PATCH v2 6/8] drm/shmem-helper: Add generic memory shrinker
Date: Thu, 17 Mar 2022 09:13:00 -0700
Message-ID: <CAF6AEGv2Ob7_Zp3+m-16QExDTM9vYfAkeSuBtjWG7ukHnY73UA@mail.gmail.com>
In-Reply-To: <be3b09ff-08ea-3e13-7d8c-06af6fffbd8f@collabora.com>

On Wed, Mar 16, 2022 at 5:13 PM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
>
> On 3/16/22 23:00, Rob Clark wrote:
> > On Mon, Mar 14, 2022 at 3:44 PM Dmitry Osipenko
> > <dmitry.osipenko@collabora.com> wrote:
> >>
> >> Introduce a common DRM SHMEM shrinker. It allows reducing code
> >> duplication among DRM drivers and also handles the complicated
> >> locking for them. This is an initial version of the shrinker that
> >> covers the basic needs of GPU drivers.
> >>
> >> This patch is based on a couple of ideas borrowed from Rob Clark's
> >> MSM shrinker and Thomas Zimmermann's variant of the SHMEM shrinker.
> >>
> >> GPU drivers that want to use the generic DRM memory shrinker must
> >> support generic GEM reservations.
> >>
> >> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> >> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> >> ---
> >>  drivers/gpu/drm/drm_gem_shmem_helper.c | 194 +++++++++++++++++++++++++
> >>  include/drm/drm_device.h               |   4 +
> >>  include/drm/drm_gem.h                  |  11 ++
> >>  include/drm/drm_gem_shmem_helper.h     |  25 ++++
> >>  4 files changed, 234 insertions(+)
> >>
> >> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> >> index 37009418cd28..35be2ee98f11 100644
> >> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> >> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> >> @@ -139,6 +139,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> >>  {
> >>         struct drm_gem_object *obj = &shmem->base;
> >>
> >> +       /* take out shmem GEM object from the memory shrinker */
> >> +       drm_gem_shmem_madvise(shmem, 0);
> >> +
> >>         WARN_ON(shmem->vmap_use_count);
> >>
> >>         if (obj->import_attach) {
> >> @@ -163,6 +166,42 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> >>  }
> >>  EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
> >>
> >> +static void drm_gem_shmem_update_purgeable_status(struct drm_gem_shmem_object *shmem)
> >> +{
> >> +       struct drm_gem_object *obj = &shmem->base;
> >> +       struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
> >> +       size_t page_count = obj->size >> PAGE_SHIFT;
> >> +
> >> +       if (!gem_shrinker || obj->import_attach || !obj->funcs->purge)
> >> +               return;
> >> +
> >> +       mutex_lock(&shmem->vmap_lock);
> >> +       mutex_lock(&shmem->pages_lock);
> >> +       mutex_lock(&gem_shrinker->lock);
> >> +
> >> +       if (shmem->madv < 0) {
> >> +               list_del_init(&shmem->madv_list);
> >> +               goto unlock;
> >> +       } else if (shmem->madv > 0) {
> >> +               if (!list_empty(&shmem->madv_list))
> >> +                       goto unlock;
> >> +
> >> +               WARN_ON(gem_shrinker->shrinkable_count + page_count < page_count);
> >> +               gem_shrinker->shrinkable_count += page_count;
> >> +
> >> +               list_add_tail(&shmem->madv_list, &gem_shrinker->lru);
> >> +       } else if (!list_empty(&shmem->madv_list)) {
> >> +               list_del_init(&shmem->madv_list);
> >> +
> >> +               WARN_ON(gem_shrinker->shrinkable_count < page_count);
> >> +               gem_shrinker->shrinkable_count -= page_count;
> >> +       }
> >> +unlock:
> >> +       mutex_unlock(&gem_shrinker->lock);
> >> +       mutex_unlock(&shmem->pages_lock);
> >> +       mutex_unlock(&shmem->vmap_lock);
> >> +}
> >> +
> >>  static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> >>  {
> >>         struct drm_gem_object *obj = &shmem->base;
> >> @@ -366,6 +405,8 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> >>         ret = drm_gem_shmem_vmap_locked(shmem, map);
> >>         mutex_unlock(&shmem->vmap_lock);
> >>
> >> +       drm_gem_shmem_update_purgeable_status(shmem);
> >> +
> >>         return ret;
> >>  }
> >>  EXPORT_SYMBOL(drm_gem_shmem_vmap);
> >> @@ -409,6 +450,8 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> >>         mutex_lock(&shmem->vmap_lock);
> >>         drm_gem_shmem_vunmap_locked(shmem, map);
> >>         mutex_unlock(&shmem->vmap_lock);
> >> +
> >> +       drm_gem_shmem_update_purgeable_status(shmem);
> >>  }
> >>  EXPORT_SYMBOL(drm_gem_shmem_vunmap);
> >>
> >> @@ -451,6 +494,8 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
> >>
> >>         mutex_unlock(&shmem->pages_lock);
> >>
> >> +       drm_gem_shmem_update_purgeable_status(shmem);
> >> +
> >>         return (madv >= 0);
> >>  }
> >>  EXPORT_SYMBOL(drm_gem_shmem_madvise);
> >> @@ -763,6 +808,155 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
> >>  }
> >>  EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
> >>
> >> +static struct drm_gem_shmem_shrinker *
> >> +to_drm_shrinker(struct shrinker *shrinker)
> >> +{
> >> +       return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
> >> +}
> >> +
> >> +static unsigned long
> >> +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
> >> +                                    struct shrink_control *sc)
> >> +{
> >> +       struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
> >> +       u64 count = gem_shrinker->shrinkable_count;
> >> +
> >> +       if (count >= SHRINK_EMPTY)
> >> +               return SHRINK_EMPTY - 1;
> >> +
> >> +       return count ?: SHRINK_EMPTY;
> >> +}
> >> +
> >> +static unsigned long
> >> +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
> >> +                                   struct shrink_control *sc)
> >> +{
> >> +       struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
> >> +       struct drm_gem_shmem_object *shmem;
> >> +       struct list_head still_in_list;
> >> +       bool lock_contention = true;
> >> +       struct drm_gem_object *obj;
> >> +       unsigned long freed = 0;
> >> +
> >> +       INIT_LIST_HEAD(&still_in_list);
> >> +
> >> +       mutex_lock(&gem_shrinker->lock);
> >> +
> >> +       while (freed < sc->nr_to_scan) {
> >> +               shmem = list_first_entry_or_null(&gem_shrinker->lru,
> >> +                                                typeof(*shmem), madv_list);
> >> +               if (!shmem)
> >> +                       break;
> >> +
> >> +               obj = &shmem->base;
> >> +               list_move_tail(&shmem->madv_list, &still_in_list);
> >> +
> >> +               /*
> >> +                * If it's in the process of being freed, gem_object->free()
> >> +                * may be blocked on lock waiting to remove it.  So just
> >> +                * skip it.
> >> +                */
> >> +               if (!kref_get_unless_zero(&obj->refcount))
> >> +                       continue;
> >> +
> >> +               mutex_unlock(&gem_shrinker->lock);
> >> +
> >> +               /* prevent racing with job submission code paths */
> >> +               if (!dma_resv_trylock(obj->resv))
> >> +                       goto shrinker_lock;
> >
> > jfwiw, the trylock here in the msm code isn't so much for madvise
> > (it is an error to submit jobs that reference DONTNEED objects), but
> > instead for the case of evicting WILLNEED but inactive objects to
> > swap.  Ie. in the case that we need to move BOs back into memory, we
> > don't want to unpin/evict a buffer that is later on the list for the
> > same job.. the msm shrinker re-uses the same scan loop for both
> > inactive_dontneed (purge) and inactive_willneed (evict)
>
> I don't see the connection between the objects on the shrinker's list
> and the job's BOs. Jobs indeed must not have any objects marked as
> DONTNEED; this case should never happen in practice, but we still need
> to protect against it.

Hmm, let me try to explain with a simple example.. hopefully this makes sense.

Say you have a job with two BOs, A and B..  BO A is not backed with
memory (either it hasn't been used before or it was evicted).
Allocating pages for A triggers the shrinker.  But B is still on the
inactive_willneed list; however, it is already locked (because we don't
want to evict B to obtain backing pages for A).
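
Very roughly, the scenario looks something like this.. a hand-wavy
sketch only, the function and variable names are made up and this is
not the actual msm code:

#include <drm/drm_gem_shmem_helper.h>
#include <linux/dma-resv.h>

/* submit path: B is already resident, so its reservation gets locked */
static int submit_job(struct drm_gem_shmem_object *bo_a,
		      struct drm_gem_shmem_object *bo_b)
{
	dma_resv_lock(bo_b->base.resv, NULL);

	/*
	 * Allocating backing pages for A can hit direct reclaim and
	 * call into the shrinker while B's resv is still held here.
	 */
	return drm_gem_shmem_get_pages(bo_a);
}

/* shrinker scan: decide whether a BO on the willneed list may be
 * evicted; the caller evicts and then dma_resv_unlock()s on success
 */
static bool can_evict(struct drm_gem_shmem_object *shmem)
{
	/*
	 * For B the trylock fails, because the submit path above still
	 * holds its resv -- so B gets skipped instead of being evicted
	 * to provide the backing pages for A in the same job.
	 */
	return dma_resv_trylock(shmem->base.resv);
}

So the trylock is what keeps the shrinker from pulling B out from
under the job that is just about to use it.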

>
> > I suppose using trylock is not technically wrong, and it would be a
> > good idea if the shmem helpers supported eviction as well.  But I
> > think in the madvise/purge case if you lose the trylock then there is
> > something else bad going on.
>
> This trylock is intended to protect the job submission path from
> racing with a madvise ioctl invocation followed by immediate purging
> of BOs while a job is in the process of submission, i.e. it protects
> against a use-after-free.

ahh, ok

> If you lose this trylock, then the shrinker can't use
> dma_resv_test_signaled() reliably anymore and may purge a BO before
> the job has had a chance to add a fence to the BO's reservation.
>
> > Anyways, from the PoV of minimizing lock contention when under memory
> > pressure, this all looks good to me.
>
> Thank you. I may try to add generic eviction support in v3.

eviction is a trickier thing to get right; I wouldn't blame you for
splitting that out into its own patchset ;-)

You'd probably also want to make eviction opt-in for drivers using the
shmem helpers.
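
Something roughly like this, maybe.. a purely hypothetical sketch, the
helper and the 'evictable' field are made up and not an existing
shmem-helper API:

#include <drm/drm_gem_shmem_helper.h>

/*
 * Hypothetical opt-in: a driver that can cope with its BOs losing
 * their backing pages would call this after creating the object, and
 * the shrinker scan would only consider objects with the flag set.
 */
static void drm_gem_shmem_set_evictable(struct drm_gem_shmem_object *shmem,
					bool evictable)
{
	mutex_lock(&shmem->pages_lock);
	shmem->evictable = evictable;	/* made-up field, doesn't exist today */
	mutex_unlock(&shmem->pages_lock);
}

That way drivers that don't know how to handle losing backing pages
behind their back never see any difference in behaviour.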

BR,
-R

