* [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

This series:

  1. Makes minor fixes for drm_gem_lru and Panfrost
  2. Refactors older code
  3. Adds a common drm-shmem memory shrinker
  4. Enables the shrinker for the VirtIO-GPU driver
  5. Switches the Panfrost driver to the common shrinker

Changelog:

v10: - Rebased on a recent linux-next.

    - Added Rob's ack to the MSM "Prevent blocking within shrinker loop" patch.

    - Added Steven's ack/r-b/t-b for the Panfrost patches.

    - Fixed missing export of the new drm_gem_object_evict() function.

    - Added Fixes tags to the first two patches, which make minor fixes,
      for consistency.

v9: - Replaced struct drm_gem_shmem_shrinker with drm_gem_shmem and
      moved it to drm_device, as suggested by Thomas Zimmermann.

    - Replaced drm_gem_shmem_shrinker_register() with drmm_gem_shmem_init(),
      as suggested by Thomas Zimmermann.

    - Moved the evict() callback to drm_gem_object_funcs and added a common
      drm_gem_object_evict() helper, as suggested by Thomas Zimmermann.

    - The shmem object is now evictable by default, as suggested by
      Thomas Zimmermann. Dropped the set_evictable/purgeable() functions
      as well; drivers decide whether a BO is evictable within their
      madvise IOCTL.

    - Added patches that convert the drm-shmem code to use drm_WARN_ON() and
      drm_dbg_kms(), as requested by Thomas Zimmermann.

    - Turned the drm_gem_shmem_object booleans into 1-bit bit fields, as
      suggested by Thomas Zimmermann.

    - Switched to using drm_dev->unique for the shmem shrinker name. Drivers
      don't need to specify the name explicitly anymore.

    - Re-added dma_resv_test_signaled(), which was missing in v8, and fixed
      its argument to DMA_RESV_USAGE_READ. See the comment on
      dma_resv_usage_rw().

    - Added a new fix for the Panfrost driver that silences a lockdep
      warning caused by the shrinker. Both the old Panfrost shrinker and
      the new shmem shrinker are affected.

v8: - Rebased on top of a recent linux-next that now has the dma-buf locking
      convention patches merged; their absence was blocking the shmem
      shrinker before.

    - Shmem shrinker now uses new drm_gem_lru helper.

    - Dropped Steven Price's t-b from the Panfrost patch because the code
      changed significantly since v6 and should be re-tested.

v7: - dma-buf locking convention

v6: https://lore.kernel.org/dri-devel/20220526235040.678984-1-dmitry.osipenko@collabora.com/

Related patches:

Mesa: https://gitlab.freedesktop.org/digetx/mesa/-/commits/virgl-madvise
igt:  https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/virtio-madvise
      https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/panfrost-madvise

The Mesa and IGT patches will be sent out once the kernel part lands.

Dmitry Osipenko (11):
  drm/msm/gem: Prevent blocking within shrinker loop
  drm/panfrost: Don't sync rpm suspension after mmu flushing
  drm/gem: Add evict() callback to drm_gem_object_funcs
  drm/shmem: Put booleans in the end of struct drm_gem_shmem_object
  drm/shmem: Switch to use drm_* debug helpers
  drm/shmem-helper: Don't use vmap_use_count for dma-bufs
  drm/shmem-helper: Switch to reservation lock
  drm/shmem-helper: Add memory shrinker
  drm/gem: Add drm_gem_pin_unlocked()
  drm/virtio: Support memory shrinking
  drm/panfrost: Switch to generic memory shrinker

 drivers/gpu/drm/drm_gem.c                     |  54 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 646 +++++++++++++-----
 drivers/gpu/drm/lima/lima_gem.c               |   8 +-
 drivers/gpu/drm/msm/msm_gem_shrinker.c        |   8 +-
 drivers/gpu/drm/panfrost/Makefile             |   1 -
 drivers/gpu/drm/panfrost/panfrost_device.h    |   4 -
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  34 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c       |  30 +-
 drivers/gpu/drm/panfrost/panfrost_gem.h       |   9 -
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  | 122 ----
 drivers/gpu/drm/panfrost/panfrost_job.c       |  18 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c       |  21 +-
 drivers/gpu/drm/virtio/virtgpu_drv.h          |  18 +-
 drivers/gpu/drm/virtio/virtgpu_gem.c          |  52 ++
 drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  37 +
 drivers/gpu/drm/virtio/virtgpu_kms.c          |   8 +
 drivers/gpu/drm/virtio/virtgpu_object.c       | 132 +++-
 drivers/gpu/drm/virtio/virtgpu_plane.c        |  22 +-
 drivers/gpu/drm/virtio/virtgpu_vq.c           |  40 ++
 include/drm/drm_device.h                      |  10 +-
 include/drm/drm_gem.h                         |  19 +-
 include/drm/drm_gem_shmem_helper.h            | 112 +--
 include/uapi/drm/virtgpu_drm.h                |  14 +
 23 files changed, 1010 insertions(+), 409 deletions(-)
 delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c

-- 
2.38.1



* [PATCH v10 01/11] drm/msm/gem: Prevent blocking within shrinker loop
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

Consider this scenario:

1. APP1 continuously creates lots of small GEMs
2. APP2 triggers `drop_caches`
3. The shrinker starts to evict APP1's GEMs, while APP1 produces new
   purgeable GEMs
4. msm_gem_shrinker_scan() returns a non-zero number of freed pages
   and causes the shrinker to try to shrink more
5. msm_gem_shrinker_scan() returns a non-zero number of freed pages again,
   goto 4
6. APP2 is blocked in `drop_caches` until APP1 stops producing
   purgeable GEMs

To prevent this blocking scenario, check the number of remaining pages
that the GPU shrinker couldn't release due to GEM locking contention or
shrinking rejection. If there are no pages left to shrink, then there is
no need to free up more pages and the shrinker may break out of the loop.
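
In scan-callback terms, the loop now terminates like this (an
illustrative sketch using the names from the msm callback below; the
real change is in the diff):

    unsigned long remaining = 0;
    unsigned long freed;

    freed = drm_gem_lru_scan(lru, sc->nr_to_scan, &remaining, shrink);

    /*
     * "remaining" accumulated the pages that couldn't be released,
     * e.g. due to GEM lock contention. If nothing shrinkable remains,
     * stop the reclaim loop instead of spinning on it.
     */
    return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;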

This problem was found during shrinker/madvise IOCTL testing of the
virtio-gpu driver. The MSM driver is affected in the same way.

Reviewed-by: Rob Clark <robdclark@gmail.com>
Fixes: b352ba54a820 ("drm/msm/gem: Convert to using drm_gem_lru")
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem.c              | 9 +++++++--
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 8 ++++++--
 include/drm/drm_gem.h                  | 4 +++-
 3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 59a0bb5ebd85..c6bca5ac6e0f 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1388,10 +1388,13 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
  *
  * @lru: The LRU to scan
  * @nr_to_scan: The number of pages to try to reclaim
+ * @remaining: The number of pages that could not be reclaimed
  * @shrink: Callback to try to shrink/reclaim the object.
  */
 unsigned long
-drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
+drm_gem_lru_scan(struct drm_gem_lru *lru,
+		 unsigned int nr_to_scan,
+		 unsigned long *remaining,
 		 bool (*shrink)(struct drm_gem_object *obj))
 {
 	struct drm_gem_lru still_in_lru;
@@ -1430,8 +1433,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
 		 * hit shrinker in response to trying to get backing pages
 		 * for this obj (ie. while it's lock is already held)
 		 */
-		if (!dma_resv_trylock(obj->resv))
+		if (!dma_resv_trylock(obj->resv)) {
+			*remaining += obj->size >> PAGE_SHIFT;
 			goto tail;
+		}
 
 		if (shrink(obj)) {
 			freed += obj->size >> PAGE_SHIFT;
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 051bdbc093cf..b7c1242014ec 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -116,12 +116,14 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 	};
 	long nr = sc->nr_to_scan;
 	unsigned long freed = 0;
+	unsigned long remaining = 0;
 
 	for (unsigned i = 0; (nr > 0) && (i < ARRAY_SIZE(stages)); i++) {
 		if (!stages[i].cond)
 			continue;
 		stages[i].freed =
-			drm_gem_lru_scan(stages[i].lru, nr, stages[i].shrink);
+			drm_gem_lru_scan(stages[i].lru, nr, &remaining,
+					 stages[i].shrink);
 		nr -= stages[i].freed;
 		freed += stages[i].freed;
 	}
@@ -132,7 +134,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 				     stages[3].freed);
 	}
 
-	return (freed > 0) ? freed : SHRINK_STOP;
+	return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
 }
 
 #ifdef CONFIG_DEBUG_FS
@@ -182,10 +184,12 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 		NULL,
 	};
 	unsigned idx, unmapped = 0;
+	unsigned long remaining = 0;
 
 	for (idx = 0; lrus[idx] && unmapped < vmap_shrink_limit; idx++) {
 		unmapped += drm_gem_lru_scan(lrus[idx],
 					     vmap_shrink_limit - unmapped,
+					     &remaining,
 					     vmap_shrink);
 	}
 
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 772a4adf5287..f1f00fc2dba6 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -476,7 +476,9 @@ int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
 void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock);
 void drm_gem_lru_remove(struct drm_gem_object *obj);
 void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj);
-unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
+unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
+			       unsigned int nr_to_scan,
+			       unsigned long *remaining,
 			       bool (*shrink)(struct drm_gem_object *obj));
 
 #endif /* __DRM_GEM_H__ */
-- 
2.38.1



* [PATCH v10 02/11] drm/panfrost: Don't sync rpm suspension after mmu flushing
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

Lockdep warns about a potential circular locking dependency between
devfreq and fs_reclaim, caused by the immediate device suspension when a
mapping is released by the shrinker. Fix it by doing the suspension
asynchronously: pm_runtime_put_autosuspend() queues the suspend instead
of potentially executing it synchronously in the shrinker's reclaim
context.

Reviewed-by: Steven Price <steven.price@arm.com>
Fixes: ec7eba47da86 ("drm/panfrost: Rework page table flushing and runtime PM interaction")
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 4e83a1891f3e..666a5e53fe19 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -282,7 +282,7 @@ static void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
 	if (pm_runtime_active(pfdev->dev))
 		mmu_hw_do_operation(pfdev, mmu, iova, size, AS_COMMAND_FLUSH_PT);
 
-	pm_runtime_put_sync_autosuspend(pfdev->dev);
+	pm_runtime_put_autosuspend(pfdev->dev);
 }
 
 static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
-- 
2.38.1



* [PATCH v10 03/11] drm/gem: Add evict() callback to drm_gem_object_funcs
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

Add a new common evict() callback to drm_gem_object_funcs and a
corresponding drm_gem_object_evict() helper. This is a first step
towards providing a common GEM-shrinker API for DRM drivers.
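
For illustration, a driver would hook this up roughly as follows (the
"mydrv" names are made-up placeholders, not part of this patch):

    /* called by drm_gem_object_evict() with obj->resv held */
    static bool mydrv_gem_evict(struct drm_gem_object *obj)
    {
            return mydrv_release_backing_pages(obj); /* hypothetical */
    }

    static const struct drm_gem_object_funcs mydrv_gem_funcs = {
            .free  = mydrv_gem_free, /* hypothetical */
            .evict = mydrv_gem_evict,
    };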

Suggested-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem.c | 16 ++++++++++++++++
 include/drm/drm_gem.h     | 12 ++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index c6bca5ac6e0f..dbb48fc9dff3 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1471,3 +1471,19 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 	return freed;
 }
 EXPORT_SYMBOL(drm_gem_lru_scan);
+
+/**
+ * drm_gem_object_evict - helper to evict backing pages for a GEM object
+ * @obj: obj in question
+ */
+bool
+drm_gem_object_evict(struct drm_gem_object *obj)
+{
+	dma_resv_assert_held(obj->resv);
+
+	if (obj->funcs->evict)
+		return obj->funcs->evict(obj);
+
+	return false;
+}
+EXPORT_SYMBOL(drm_gem_object_evict);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index f1f00fc2dba6..8e5c22f25691 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -172,6 +172,16 @@ struct drm_gem_object_funcs {
 	 * This is optional but necessary for mmap support.
 	 */
 	const struct vm_operations_struct *vm_ops;
+
+	/**
+	 * @evict:
+	 *
+	 * Evicts the GEM object from memory. Used by the drm_gem_object_evict()
+	 * helper. Returns true on success, false otherwise.
+	 *
+	 * This callback is optional.
+	 */
+	bool (*evict)(struct drm_gem_object *obj);
 };
 
 /**
@@ -481,4 +491,6 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
 			       unsigned long *remaining,
 			       bool (*shrink)(struct drm_gem_object *obj));
 
+bool drm_gem_object_evict(struct drm_gem_object *obj);
+
 #endif /* __DRM_GEM_H__ */
-- 
2.38.1



* [PATCH v10 04/11] drm/shmem: Put booleans in the end of struct drm_gem_shmem_object
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

Group all 1-bit boolean members of struct drm_gem_shmem_object at the
end of the structure, allowing the compiler to pack the data better and
making the code look more consistent.
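
Schematically, the tail of the struct becomes (see the diff below), with
all three flags sharing bits of a single storage unit:

    bool pages_mark_dirty_on_put    : 1;
    bool pages_mark_accessed_on_put : 1;
    bool map_wc                     : 1;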

Suggested-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 include/drm/drm_gem_shmem_helper.h | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index a2201b2488c5..5994fed5e327 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -60,20 +60,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct list_head madv_list;
 
-	/**
-	 * @pages_mark_dirty_on_put:
-	 *
-	 * Mark pages as dirty when they are put.
-	 */
-	unsigned int pages_mark_dirty_on_put    : 1;
-
-	/**
-	 * @pages_mark_accessed_on_put:
-	 *
-	 * Mark pages as accessed when they are put.
-	 */
-	unsigned int pages_mark_accessed_on_put : 1;
-
 	/**
 	 * @sgt: Scatter/gather table for imported PRIME buffers
 	 */
@@ -97,10 +83,24 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int vmap_use_count;
 
+	/**
+	 * @pages_mark_dirty_on_put:
+	 *
+	 * Mark pages as dirty when they are put.
+	 */
+	bool pages_mark_dirty_on_put : 1;
+
+	/**
+	 * @pages_mark_accessed_on_put:
+	 *
+	 * Mark pages as accessed when they are put.
+	 */
+	bool pages_mark_accessed_on_put : 1;
+
 	/**
 	 * @map_wc: map object write-combined (instead of using shmem defaults).
 	 */
-	bool map_wc;
+	bool map_wc : 1;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
-- 
2.38.1



* [PATCH v10 05/11] drm/shmem: Switch to use drm_* debug helpers
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

Ease debugging of multi-GPU systems by using the drm_WARN_*() and
drm_dbg_kms() helpers, which print out the name of the DRM device
corresponding to the shmem GEM.
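
The conversion pattern used throughout the file looks like this (taken
from the diff below):

    -	WARN_ON(shmem->vmap_use_count);
    +	drm_WARN_ON(obj->dev, shmem->vmap_use_count);

    -	DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages));
    +	drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
    +		    PTR_ERR(pages));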

Suggested-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 38 +++++++++++++++-----------
 1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index f21f47737817..5006f7da7f2d 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -141,7 +141,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	WARN_ON(shmem->vmap_use_count);
+	drm_WARN_ON(obj->dev, shmem->vmap_use_count);
 
 	if (obj->import_attach) {
 		drm_prime_gem_destroy(obj, shmem->sgt);
@@ -156,7 +156,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 			drm_gem_shmem_put_pages(shmem);
 	}
 
-	WARN_ON(shmem->pages_use_count);
+	drm_WARN_ON(obj->dev, shmem->pages_use_count);
 
 	drm_gem_object_release(obj);
 	mutex_destroy(&shmem->pages_lock);
@@ -175,7 +175,8 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
-		DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages));
+		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
+			    PTR_ERR(pages));
 		shmem->pages_use_count = 0;
 		return PTR_ERR(pages);
 	}
@@ -207,9 +208,10 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
  */
 int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
+	struct drm_gem_object *obj = &shmem->base;
 	int ret;
 
-	WARN_ON(shmem->base.import_attach);
+	drm_WARN_ON(obj->dev, obj->import_attach);
 
 	ret = mutex_lock_interruptible(&shmem->pages_lock);
 	if (ret)
@@ -225,7 +227,7 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	if (WARN_ON_ONCE(!shmem->pages_use_count))
+	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		return;
 
 	if (--shmem->pages_use_count > 0)
@@ -268,7 +270,9 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
  */
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 {
-	WARN_ON(shmem->base.import_attach);
+	struct drm_gem_object *obj = &shmem->base;
+
+	drm_WARN_ON(obj->dev, obj->import_attach);
 
 	return drm_gem_shmem_get_pages(shmem);
 }
@@ -283,7 +287,9 @@ EXPORT_SYMBOL(drm_gem_shmem_pin);
  */
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 {
-	WARN_ON(shmem->base.import_attach);
+	struct drm_gem_object *obj = &shmem->base;
+
+	drm_WARN_ON(obj->dev, obj->import_attach);
 
 	drm_gem_shmem_put_pages(shmem);
 }
@@ -303,7 +309,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	if (obj->import_attach) {
 		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
 		if (!ret) {
-			if (WARN_ON(map->is_iomem)) {
+			if (drm_WARN_ON(obj->dev, map->is_iomem)) {
 				dma_buf_vunmap(obj->import_attach->dmabuf, map);
 				ret = -EIO;
 				goto err_put_pages;
@@ -328,7 +334,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	}
 
 	if (ret) {
-		DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret);
+		drm_dbg_kms(obj->dev, "Failed to vmap pages, error %d\n", ret);
 		goto err_put_pages;
 	}
 
@@ -378,7 +384,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	if (WARN_ON_ONCE(!shmem->vmap_use_count))
+	if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
 		return;
 
 	if (--shmem->vmap_use_count > 0)
@@ -463,7 +469,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 	struct drm_gem_object *obj = &shmem->base;
 	struct drm_device *dev = obj->dev;
 
-	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
+	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
 
 	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
 	sg_free_table(shmem->sgt);
@@ -555,7 +561,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	mutex_lock(&shmem->pages_lock);
 
 	if (page_offset >= num_pages ||
-	    WARN_ON_ONCE(!shmem->pages) ||
+	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
 	    shmem->madv < 0) {
 		ret = VM_FAULT_SIGBUS;
 	} else {
@@ -574,7 +580,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
-	WARN_ON(shmem->base.import_attach);
+	drm_WARN_ON(obj->dev, obj->import_attach);
 
 	mutex_lock(&shmem->pages_lock);
 
@@ -583,7 +589,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	 * mmap'd, vm_open() just grabs an additional reference for the new
 	 * mm the vma is getting copied into (ie. on fork()).
 	 */
-	if (!WARN_ON_ONCE(!shmem->pages_use_count))
+	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		shmem->pages_use_count++;
 
 	mutex_unlock(&shmem->pages_lock);
@@ -677,7 +683,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	WARN_ON(shmem->base.import_attach);
+	drm_WARN_ON(obj->dev, obj->import_attach);
 
 	return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
 }
@@ -708,7 +714,7 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 	if (shmem->sgt)
 		return shmem->sgt;
 
-	WARN_ON(obj->import_attach);
+	drm_WARN_ON(obj->dev, obj->import_attach);
 
 	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret)
-- 
2.38.1



* [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

The DMA-buf core has its own refcounting of vmaps; use it instead of the
drm-shmem counting. This change prepares drm-shmem for the addition of
memory shrinker support, where drm-shmem will use a single dma-buf
reservation lock for all operations performed over dma-bufs.
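
Schematically, the vmap path is now split like this (simplified from the
diff below):

    if (obj->import_attach) {
            /* the dma-buf core refcounts this vmap internally */
            ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
    } else {
            /* only native buffers keep using shmem->vmap_use_count */
            if (shmem->vmap_use_count++ > 0) {
                    iosys_map_set_vaddr(map, shmem->vaddr);
                    return 0;
            }
            /* allocate pages and vmap() them as before */
    }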

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 35 +++++++++++++++-----------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5006f7da7f2d..1392cbd3cc02 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -301,24 +301,22 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	struct drm_gem_object *obj = &shmem->base;
 	int ret = 0;
 
-	if (shmem->vmap_use_count++ > 0) {
-		iosys_map_set_vaddr(map, shmem->vaddr);
-		return 0;
-	}
-
 	if (obj->import_attach) {
 		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
 		if (!ret) {
 			if (drm_WARN_ON(obj->dev, map->is_iomem)) {
 				dma_buf_vunmap(obj->import_attach->dmabuf, map);
-				ret = -EIO;
-				goto err_put_pages;
+				return -EIO;
 			}
-			shmem->vaddr = map->vaddr;
 		}
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
+		if (shmem->vmap_use_count++ > 0) {
+			iosys_map_set_vaddr(map, shmem->vaddr);
+			return 0;
+		}
+
 		ret = drm_gem_shmem_get_pages(shmem);
 		if (ret)
 			goto err_zero_use;
@@ -384,15 +382,15 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
-		return;
-
-	if (--shmem->vmap_use_count > 0)
-		return;
-
 	if (obj->import_attach) {
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
+		if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
+			return;
+
+		if (--shmem->vmap_use_count > 0)
+			return;
+
 		vunmap(shmem->vaddr);
 		drm_gem_shmem_put_pages(shmem);
 	}
@@ -660,7 +658,14 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 			      struct drm_printer *p, unsigned int indent)
 {
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
-	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
+
+	if (shmem->base.import_attach)
+		drm_printf_indent(p, indent, "vmap_use_count=%u\n",
+				  shmem->base.dma_buf->vmapping_counter);
+	else
+		drm_printf_indent(p, indent, "vmap_use_count=%u\n",
+				  shmem->vmap_use_count);
+
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
 EXPORT_SYMBOL(drm_gem_shmem_print_info);
-- 
2.38.1



* [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

Replace all drm-shmem locks with a GEM reservation lock. This makes the
locking consistent with the dma-buf locking convention, where importers
are responsible for holding the reservation lock for all operations
performed over dma-bufs, preventing deadlocks between dma-buf importers
and exporters.
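
Callers now hold the GEM reservation lock themselves around the shmem
helpers, e.g. (a minimal sketch of the new calling convention):

    dma_resv_lock(shmem->base.resv, NULL);
    ret = drm_gem_shmem_pin(shmem);
    dma_resv_unlock(shmem->base.resv);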

Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 185 +++++++-----------
 drivers/gpu/drm/lima/lima_gem.c               |   8 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c       |   7 +-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   6 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c       |  19 +-
 include/drm/drm_gem_shmem_helper.h            |  14 +-
 6 files changed, 94 insertions(+), 145 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 1392cbd3cc02..a1f2f2158c50 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -88,8 +88,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;
 
-	mutex_init(&shmem->pages_lock);
-	mutex_init(&shmem->vmap_lock);
 	INIT_LIST_HEAD(&shmem->madv_list);
 
 	if (!private) {
@@ -141,11 +139,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	drm_WARN_ON(obj->dev, shmem->vmap_use_count);
-
 	if (obj->import_attach) {
 		drm_prime_gem_destroy(obj, shmem->sgt);
 	} else {
+		dma_resv_lock(shmem->base.resv, NULL);
+
+		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
+
 		if (shmem->sgt) {
 			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
 					  DMA_BIDIRECTIONAL, 0);
@@ -154,18 +154,18 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		}
 		if (shmem->pages)
 			drm_gem_shmem_put_pages(shmem);
-	}
 
-	drm_WARN_ON(obj->dev, shmem->pages_use_count);
+		drm_WARN_ON(obj->dev, shmem->pages_use_count);
+
+		dma_resv_unlock(shmem->base.resv);
+	}
 
 	drm_gem_object_release(obj);
-	mutex_destroy(&shmem->pages_lock);
-	mutex_destroy(&shmem->vmap_lock);
 	kfree(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
-static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
@@ -197,35 +197,16 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 }
 
 /*
- * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
+ * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object
  *
- * This function makes sure that backing pages exists for the shmem GEM object
- * and increases the use count.
- *
- * Returns:
- * 0 on success or a negative error code on failure.
+ * This function decreases the use count and puts the backing pages when use drops to zero.
  */
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	int ret;
 
-	drm_WARN_ON(obj->dev, obj->import_attach);
-
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
-	if (ret)
-		return ret;
-	ret = drm_gem_shmem_get_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
-
-	return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_get_pages);
-
-static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
-{
-	struct drm_gem_object *obj = &shmem->base;
+	dma_resv_assert_held(shmem->base.resv);
 
 	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		return;
@@ -243,19 +224,6 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 			  shmem->pages_mark_accessed_on_put);
 	shmem->pages = NULL;
 }
-
-/*
- * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
- * @shmem: shmem GEM object
- *
- * This function decreases the use count and puts the backing pages when use drops to zero.
- */
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
-{
-	mutex_lock(&shmem->pages_lock);
-	drm_gem_shmem_put_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
-}
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
 /**
@@ -272,6 +240,8 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
+	dma_resv_assert_held(shmem->base.resv);
+
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
 	return drm_gem_shmem_get_pages(shmem);
@@ -289,14 +259,31 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
+	dma_resv_assert_held(shmem->base.resv);
+
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
 	drm_gem_shmem_put_pages(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
-				     struct iosys_map *map)
+/*
+ * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ *       store.
+ *
+ * This function makes sure that a contiguous kernel virtual address mapping
+ * exists for the buffer backing the shmem GEM object. It hides the differences
+ * between dma-buf imported and natively allocated objects.
+ *
+ * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
+		       struct iosys_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret = 0;
@@ -312,6 +299,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
+		dma_resv_assert_held(shmem->base.resv);
+
 		if (shmem->vmap_use_count++ > 0) {
 			iosys_map_set_vaddr(map, shmem->vaddr);
 			return 0;
@@ -346,45 +335,30 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 
 	return ret;
 }
+EXPORT_SYMBOL(drm_gem_shmem_vmap);
 
 /*
- * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
- * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
- *       store.
- *
- * This function makes sure that a contiguous kernel virtual address mapping
- * exists for the buffer backing the shmem GEM object. It hides the differences
- * between dma-buf imported and natively allocated objects.
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
- * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ * This function cleans up a kernel virtual address mapping acquired by
+ * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
+ * zero.
  *
- * Returns:
- * 0 on success or a negative error code on failure.
+ * This function hides the differences between dma-buf imported and natively
+ * allocated objects.
  */
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
-		       struct iosys_map *map)
-{
-	int ret;
-
-	ret = mutex_lock_interruptible(&shmem->vmap_lock);
-	if (ret)
-		return ret;
-	ret = drm_gem_shmem_vmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
-
-	return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
-
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
-					struct iosys_map *map)
+void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
+			  struct iosys_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
 	if (obj->import_attach) {
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
+		dma_resv_assert_held(shmem->base.resv);
+
 		if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
 			return;
 
@@ -397,26 +371,6 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 
 	shmem->vaddr = NULL;
 }
-
-/*
- * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
- * @shmem: shmem GEM object
- * @map: Kernel virtual address where the SHMEM GEM object was mapped
- *
- * This function cleans up a kernel virtual address mapping acquired by
- * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- * zero.
- *
- * This function hides the differences between dma-buf imported and natively
- * allocated objects.
- */
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
-			  struct iosys_map *map)
-{
-	mutex_lock(&shmem->vmap_lock);
-	drm_gem_shmem_vunmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
-}
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
 
 static struct drm_gem_shmem_object *
@@ -449,24 +403,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_assert_held(shmem->base.resv);
 
 	if (shmem->madv >= 0)
 		shmem->madv = madv;
 
 	madv = shmem->madv;
 
-	mutex_unlock(&shmem->pages_lock);
-
 	return (madv >= 0);
 }
 EXPORT_SYMBOL(drm_gem_shmem_madvise);
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct drm_device *dev = obj->dev;
 
+	dma_resv_assert_held(shmem->base.resv);
+
 	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
 
 	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
@@ -474,7 +428,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 	kfree(shmem->sgt);
 	shmem->sgt = NULL;
 
-	drm_gem_shmem_put_pages_locked(shmem);
+	drm_gem_shmem_put_pages(shmem);
 
 	shmem->madv = -1;
 
@@ -490,17 +444,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
 }
-EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
-
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
-{
-	if (!mutex_trylock(&shmem->pages_lock))
-		return false;
-	drm_gem_shmem_purge_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
-
-	return true;
-}
 EXPORT_SYMBOL(drm_gem_shmem_purge);
 
 /**
@@ -556,7 +499,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (page_offset >= num_pages ||
 	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
@@ -568,7 +511,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -580,7 +523,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	/*
 	 * We should have already pinned the pages when the buffer was first
@@ -590,7 +533,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		shmem->pages_use_count++;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	drm_gem_vm_open(vma);
 }
@@ -600,7 +543,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
 	drm_gem_vm_close(vma);
 }
 
@@ -635,7 +581,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 		return dma_buf_mmap(obj->dma_buf, vma, 0);
 	}
 
+	dma_resv_lock(shmem->base.resv, NULL);
 	ret = drm_gem_shmem_get_pages(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
 	if (ret)
 		return ret;
 
@@ -721,9 +670,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
+	dma_resv_lock(shmem->base.resv, NULL);
+
 	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret)
-		return ERR_PTR(ret);
+		goto err_unlock;
 
 	sgt = drm_gem_shmem_get_sg_table(shmem);
 	if (IS_ERR(sgt)) {
@@ -737,6 +688,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 
 	shmem->sgt = sgt;
 
+	dma_resv_unlock(shmem->base.resv);
+
 	return sgt;
 
 err_free_sgt:
@@ -744,6 +697,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 	kfree(sgt);
 err_put_pages:
 	drm_gem_shmem_put_pages(shmem);
+err_unlock:
+	dma_resv_unlock(shmem->base.resv);
 	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 0f1ca0b0db49..5008f0c2428f 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 
 	new_size = min(new_size, bo->base.base.size);
 
-	mutex_lock(&bo->base.pages_lock);
+	dma_resv_lock(bo->base.base.resv, NULL);
 
 	if (bo->base.pages) {
 		pages = bo->base.pages;
@@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
 				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
 		if (!pages) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return -ENOMEM;
 		}
 
@@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		struct page *page = shmem_read_mapping_page(mapping, i);
 
 		if (IS_ERR(page)) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return PTR_ERR(page);
 		}
 		pages[i] = page;
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(bo->base.base.resv);
 
 	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
 					new_size, GFP_KERNEL);
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index abb0dadd8f63..9f3f2283b67a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -414,6 +414,10 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	bo = to_panfrost_bo(gem_obj);
 
+	ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
+	if (ret)
+		goto out_put_object;
+
 	mutex_lock(&pfdev->shrinker_lock);
 	mutex_lock(&bo->mappings.lock);
 	if (args->madv == PANFROST_MADV_DONTNEED) {
@@ -451,7 +455,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 out_unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
 	mutex_unlock(&pfdev->shrinker_lock);
-
+	dma_resv_unlock(bo->base.base.resv);
+out_put_object:
 	drm_gem_object_put(gem_obj);
 	return ret;
 }
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index bf0170782f25..6a71a2555f85 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
 	if (!mutex_trylock(&bo->mappings.lock))
 		return false;
 
-	if (!mutex_trylock(&shmem->pages_lock))
+	if (!dma_resv_trylock(shmem->base.resv))
 		goto unlock_mappings;
 
 	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge_locked(&bo->base);
+	drm_gem_shmem_purge(&bo->base);
 	ret = true;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 666a5e53fe19..0679df57f394 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -443,6 +443,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	struct panfrost_gem_mapping *bomapping;
 	struct panfrost_gem_object *bo;
 	struct address_space *mapping;
+	struct drm_gem_object *obj;
 	pgoff_t page_offset;
 	struct sg_table *sgt;
 	struct page **pages;
@@ -465,15 +466,16 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	page_offset = addr >> PAGE_SHIFT;
 	page_offset -= bomapping->mmnode.start;
 
-	mutex_lock(&bo->base.pages_lock);
+	obj = &bo->base.base;
+
+	dma_resv_lock(obj->resv, NULL);
 
 	if (!bo->base.pages) {
 		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
 				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
 		if (!bo->sgts) {
-			mutex_unlock(&bo->base.pages_lock);
 			ret = -ENOMEM;
-			goto err_bo;
+			goto err_unlock;
 		}
 
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
@@ -481,9 +483,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		if (!pages) {
 			kvfree(bo->sgts);
 			bo->sgts = NULL;
-			mutex_unlock(&bo->base.pages_lock);
 			ret = -ENOMEM;
-			goto err_bo;
+			goto err_unlock;
 		}
 		bo->base.pages = pages;
 		bo->base.pages_use_count = 1;
@@ -491,7 +492,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
 			/* Pages are already mapped, bail out. */
-			mutex_unlock(&bo->base.pages_lock);
 			goto out;
 		}
 	}
@@ -502,14 +502,11 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
 		pages[i] = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(pages[i])) {
-			mutex_unlock(&bo->base.pages_lock);
 			ret = PTR_ERR(pages[i]);
 			goto err_pages;
 		}
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
-
 	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
 					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
@@ -528,6 +525,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
 
 out:
+	dma_resv_unlock(obj->resv);
+
 	panfrost_gem_mapping_put(bomapping);
 
 	return 0;
@@ -536,6 +535,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	sg_free_table(sgt);
 err_pages:
 	drm_gem_shmem_put_pages(&bo->base);
+err_unlock:
+	dma_resv_unlock(obj->resv);
 err_bo:
 	panfrost_gem_mapping_put(bomapping);
 	return ret;
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5994fed5e327..20ddcd799df9 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct drm_gem_object base;
 
-	/**
-	 * @pages_lock: Protects the page table and use count
-	 */
-	struct mutex pages_lock;
-
 	/**
 	 * @pages: Page table
 	 */
@@ -65,11 +60,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct sg_table *sgt;
 
-	/**
-	 * @vmap_lock: Protects the vmap address and use count
-	 */
-	struct mutex vmap_lock;
-
 	/**
 	 * @vaddr: Kernel virtual address of the backing memory
 	 */
@@ -109,7 +99,6 @@ struct drm_gem_shmem_object {
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
 
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
@@ -128,8 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
 		!shmem->base.dma_buf && !shmem->base.import_attach;
 }
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
-- 
2.38.1



* [PATCH v10 08/11] drm/shmem-helper: Add memory shrinker
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

Introduce a common drm-shmem memory shrinker for DRM drivers.

To start using the drm-shmem shrinker, a driver should do the following
(see the sketch after this list):

1. Implement the evict() callback of the GEM object, where the driver
   checks whether the object is purgeable or evictable using the
   drm-shmem helpers and performs the shrinking action

2. Initialize the drm-shmem internals using drmm_gem_shmem_init(drm_device),
   which will register the drm-shmem shrinker

3. Implement a madvise IOCTL that uses drm_gem_shmem_madvise()
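
A minimal sketch of those three steps for a hypothetical "mydrv" driver
(purging only; the IOCTL plumbing is omitted):

    /* 1: the evict() callback performs the actual shrinking */
    static bool mydrv_shmem_evict(struct drm_gem_object *obj)
    {
            struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

            if (!drm_gem_shmem_is_purgeable(shmem))
                    return false;

            drm_gem_shmem_purge(shmem);
            return true;
    }

    /* 2: register the shrinker at device-init time */
    err = drmm_gem_shmem_init(drm);

    /* 3: the madvise IOCTL marks a BO as (un)needed */
    dma_resv_lock(shmem->base.resv, NULL);
    args->retained = drm_gem_shmem_madvise(shmem, args->madv);
    dma_resv_unlock(shmem->base.resv);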

Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 460 ++++++++++++++++--
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   9 +-
 include/drm/drm_device.h                      |  10 +-
 include/drm/drm_gem_shmem_helper.h            |  61 ++-
 4 files changed, 490 insertions(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index a1f2f2158c50..3ab5ec325ddb 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -20,6 +20,7 @@
 #include <drm/drm_device.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_managed.h>
 #include <drm/drm_prime.h>
 #include <drm/drm_print.h>
 
@@ -128,6 +129,57 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+static void drm_gem_shmem_resv_assert_held(struct drm_gem_shmem_object *shmem)
+{
+	/*
+	 * Destroying the object is a special case.. drm_gem_shmem_free()
+	 * calls many things that WARN_ON if the obj lock is not held.  But
+	 * acquiring the obj lock in drm_gem_shmem_free() can cause a locking
+	 * order inversion between reservation_ww_class_mutex and fs_reclaim.
+	 *
+	 * This deadlock is not actually possible, because no one should
+	 * be already holding the lock when drm_gem_shmem_free() is called.
+	 * Unfortunately lockdep is not aware of this detail.  So when the
+	 * refcount drops to zero, we pretend it is already locked.
+	 */
+	if (kref_read(&shmem->base.refcount))
+		dma_resv_assert_held(shmem->base.resv);
+}
+
+static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
+{
+	dma_resv_assert_held(shmem->base.resv);
+
+	return (shmem->madv >= 0) && shmem->base.funcs->evict &&
+		shmem->pages_use_count && !shmem->pages_pin_count &&
+		!shmem->base.dma_buf && !shmem->base.import_attach &&
+		shmem->sgt && !shmem->evicted;
+}
+
+static void
+drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem *shmem_mm = obj->dev->shmem_mm;
+	struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
+
+	drm_gem_shmem_resv_assert_held(shmem);
+
+	if (!gem_shrinker || obj->import_attach)
+		return;
+
+	if (shmem->madv < 0)
+		drm_gem_lru_remove(&shmem->base);
+	else if (drm_gem_shmem_is_evictable(shmem) || drm_gem_shmem_is_purgeable(shmem))
+		drm_gem_lru_move_tail(&gem_shrinker->lru_evictable, &shmem->base);
+	else if (shmem->evicted)
+		drm_gem_lru_move_tail(&gem_shrinker->lru_evicted, &shmem->base);
+	else if (!shmem->pages)
+		drm_gem_lru_remove(&shmem->base);
+	else
+		drm_gem_lru_move_tail(&gem_shrinker->lru_pinned, &shmem->base);
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -142,7 +194,8 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 	if (obj->import_attach) {
 		drm_prime_gem_destroy(obj, shmem->sgt);
 	} else {
-		dma_resv_lock(shmem->base.resv, NULL);
+		/* take out shmem GEM object from the memory shrinker */
+		drm_gem_shmem_madvise(shmem, -1);
 
 		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
 
@@ -152,12 +205,10 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
-		if (shmem->pages)
+		if (shmem->pages_use_count)
 			drm_gem_shmem_put_pages(shmem);
 
 		drm_WARN_ON(obj->dev, shmem->pages_use_count);
-
-		dma_resv_unlock(shmem->base.resv);
 	}
 
 	drm_gem_object_release(obj);
@@ -165,19 +216,31 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
-static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+static int
+drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
 
-	if (shmem->pages_use_count++ > 0)
+	dma_resv_assert_held(shmem->base.resv);
+
+	if (shmem->madv < 0) {
+		drm_WARN_ON(obj->dev, shmem->pages);
+		return -ENOMEM;
+	}
+
+	if (shmem->pages) {
+		drm_WARN_ON(obj->dev, !shmem->evicted);
 		return 0;
+	}
+
+	if (drm_WARN_ON(obj->dev, !shmem->pages_use_count))
+		return -EINVAL;
 
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
 		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
 			    PTR_ERR(pages));
-		shmem->pages_use_count = 0;
 		return PTR_ERR(pages);
 	}
 
@@ -196,6 +259,58 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 	return 0;
 }
 
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+{
+	int err;
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	if (shmem->madv < 0)
+		return -ENOMEM;
+
+	if (shmem->pages_use_count++ > 0) {
+		err = drm_gem_shmem_swap_in(shmem);
+		if (err)
+			goto err_zero_use;
+
+		return 0;
+	}
+
+	err = drm_gem_shmem_acquire_pages(shmem);
+	if (err)
+		goto err_zero_use;
+
+	drm_gem_shmem_update_pages_state(shmem);
+
+	return 0;
+
+err_zero_use:
+	shmem->pages_use_count = 0;
+
+	return err;
+}
+
+static void
+drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	if (!shmem->pages) {
+		drm_WARN_ON(obj->dev, !shmem->evicted && shmem->madv >= 0);
+		return;
+	}
+
+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+#endif
+
+	drm_gem_put_pages(obj, shmem->pages,
+			  shmem->pages_mark_dirty_on_put,
+			  shmem->pages_mark_accessed_on_put);
+	shmem->pages = NULL;
+}
+
 /*
  * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -206,7 +321,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	dma_resv_assert_held(shmem->base.resv);
+	drm_gem_shmem_resv_assert_held(shmem);
 
 	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		return;
@@ -214,15 +329,9 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 	if (--shmem->pages_use_count > 0)
 		return;
 
-#ifdef CONFIG_X86
-	if (shmem->map_wc)
-		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
-#endif
+	drm_gem_shmem_release_pages(shmem);
 
-	drm_gem_put_pages(obj, shmem->pages,
-			  shmem->pages_mark_dirty_on_put,
-			  shmem->pages_mark_accessed_on_put);
-	shmem->pages = NULL;
+	drm_gem_shmem_update_pages_state(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
@@ -239,12 +348,17 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
+	int ret;
 
 	dma_resv_assert_held(shmem->base.resv);
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	return drm_gem_shmem_get_pages(shmem);
+	ret = drm_gem_shmem_get_pages(shmem);
+	if (!ret)
+		shmem->pages_pin_count++;
+
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_pin);
 
@@ -263,7 +377,12 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
+	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_pin_count))
+		return;
+
 	drm_gem_shmem_put_pages(shmem);
+
+	shmem->pages_pin_count--;
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
@@ -306,7 +425,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 			return 0;
 		}
 
-		ret = drm_gem_shmem_get_pages(shmem);
+		ret = drm_gem_shmem_pin(shmem);
 		if (ret)
 			goto err_zero_use;
 
@@ -329,7 +448,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 
 err_put_pages:
 	if (!obj->import_attach)
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_unpin(shmem);
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
@@ -366,7 +485,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 			return;
 
 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_unpin(shmem);
 	}
 
 	shmem->vaddr = NULL;
@@ -403,48 +522,84 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	dma_resv_assert_held(shmem->base.resv);
+	drm_gem_shmem_resv_assert_held(shmem);
 
 	if (shmem->madv >= 0)
 		shmem->madv = madv;
 
 	madv = shmem->madv;
 
+	drm_gem_shmem_update_pages_state(shmem);
+
 	return (madv >= 0);
 }
 EXPORT_SYMBOL(drm_gem_shmem_madvise);
 
-void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+/**
+ * drm_gem_shmem_swap_in() - Moves shmem GEM back to memory and enables
+ *                           hardware access to the memory.
+ * @shmem: shmem GEM object
+ *
+ * This function moves the shmem GEM object back to memory if it was previously
+ * evicted by the memory shrinker. The GEM object is ready to use on success.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct drm_device *dev = obj->dev;
+	struct sg_table *sgt;
+	int err;
 
 	dma_resv_assert_held(shmem->base.resv);
 
-	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
+	if (shmem->evicted) {
+		err = drm_gem_shmem_acquire_pages(shmem);
+		if (err)
+			return err;
+
+		sgt = drm_gem_shmem_get_sg_table(shmem);
+		if (IS_ERR(sgt))
+			return PTR_ERR(sgt);
+
+		err = dma_map_sgtable(obj->dev->dev, sgt,
+				      DMA_BIDIRECTIONAL, 0);
+		if (err) {
+			sg_free_table(sgt);
+			kfree(sgt);
+			return err;
+		}
 
-	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
-	sg_free_table(shmem->sgt);
-	kfree(shmem->sgt);
-	shmem->sgt = NULL;
+		shmem->sgt = sgt;
+		shmem->evicted = false;
 
-	drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_update_pages_state(shmem);
+	}
 
-	shmem->madv = -1;
+	if (!shmem->pages)
+		return -ENOMEM;
 
-	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
-	drm_gem_free_mmap_offset(obj);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in);
 
-	/* Our goal here is to return as much of the memory as
-	 * is possible back to the system as we are called from OOM.
-	 * To do this we must instruct the shmfs to drop all of its
-	 * backing pages, *now*.
-	 */
-	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
+static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_device *dev = obj->dev;
 
-	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+	if (shmem->evicted)
+		return;
+
+	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
+	drm_gem_shmem_release_pages(shmem);
+	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+
+	sg_free_table(shmem->sgt);
+	kfree(shmem->sgt);
+	shmem->sgt = NULL;
 }
-EXPORT_SYMBOL(drm_gem_shmem_purge);
 
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -495,22 +650,33 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	vm_fault_t ret;
 	struct page *page;
 	pgoff_t page_offset;
+	bool pages_unpinned;
+	int err;
 
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
 	dma_resv_lock(shmem->base.resv, NULL);
 
-	if (page_offset >= num_pages ||
-	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
-	    shmem->madv < 0) {
+	/* Sanity-check that we have the pages pointer when it should be present */
+	pages_unpinned = (shmem->evicted || shmem->madv < 0 || !shmem->pages_use_count);
+	drm_WARN_ON_ONCE(obj->dev, !shmem->pages ^ pages_unpinned);
+
+	if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) {
 		ret = VM_FAULT_SIGBUS;
 	} else {
+		err = drm_gem_shmem_swap_in(shmem);
+		if (err) {
+			ret = VM_FAULT_OOM;
+			goto unlock;
+		}
+
 		page = shmem->pages[page_offset];
 
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
+unlock:
 	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
@@ -533,6 +699,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		shmem->pages_use_count++;
 
+	drm_gem_shmem_update_pages_state(shmem);
 	dma_resv_unlock(shmem->base.resv);
 
 	drm_gem_vm_open(vma);
@@ -615,7 +782,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 		drm_printf_indent(p, indent, "vmap_use_count=%u\n",
 				  shmem->vmap_use_count);
 
+	drm_printf_indent(p, indent, "evicted=%d\n", shmem->evicted);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
+	drm_printf_indent(p, indent, "madv=%d\n", shmem->madv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_print_info);
 
@@ -688,6 +857,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 
 	shmem->sgt = sgt;
 
+	drm_gem_shmem_update_pages_state(shmem);
+
 	dma_resv_unlock(shmem->base.resv);
 
 	return sgt;
@@ -738,6 +909,209 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
 
+static struct drm_gem_shmem_shrinker *
+to_drm_shrinker(struct shrinker *shrinker)
+{
+	return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
+				     struct shrink_control *sc)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
+	unsigned long count = gem_shrinker->lru_evictable.count;
+
+	if (count >= SHRINK_EMPTY)
+		return SHRINK_EMPTY - 1;
+
+	return count ?: SHRINK_EMPTY;
+}
+
+void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_evictable(shmem));
+	drm_WARN_ON(obj->dev, shmem->evicted);
+
+	drm_gem_shmem_unpin_pages(shmem);
+
+	shmem->evicted = true;
+	drm_gem_shmem_update_pages_state(shmem);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_evict);
+
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
+
+	drm_gem_shmem_unpin_pages(shmem);
+	drm_gem_free_mmap_offset(obj);
+
+	/* Our goal here is to return as much of the memory as
+	 * is possible back to the system as we are called from OOM.
+	 * To do this we must instruct the shmfs to drop all of its
+	 * backing pages, *now*.
+	 */
+	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
+
+	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+
+	shmem->madv = -1;
+	shmem->evicted = false;
+	drm_gem_shmem_update_pages_state(shmem);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
+
+static bool drm_gem_is_busy(struct drm_gem_object *obj)
+{
+	return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ);
+}
+
+static bool drm_gem_shmem_shrinker_evict(struct drm_gem_object *obj)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	if (!drm_gem_shmem_is_evictable(shmem) ||
+	    get_nr_swap_pages() < obj->size >> PAGE_SHIFT ||
+	    drm_gem_is_busy(obj))
+		return false;
+
+	return drm_gem_object_evict(obj);
+}
+
+static bool drm_gem_shmem_shrinker_purge(struct drm_gem_object *obj)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	if (!drm_gem_shmem_is_purgeable(shmem) ||
+	    drm_gem_is_busy(obj))
+		return false;
+
+	return drm_gem_object_evict(obj);
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
+				    struct shrink_control *sc)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
+	unsigned long nr_to_scan = sc->nr_to_scan;
+	unsigned long remaining = 0;
+	unsigned long freed = 0;
+
+	/* purge as many objects as we can */
+	freed += drm_gem_lru_scan(&gem_shrinker->lru_evictable,
+				  nr_to_scan, &remaining,
+				  drm_gem_shmem_shrinker_purge);
+
+	/* evict as many objects as we can */
+	if (freed < nr_to_scan)
+		freed += drm_gem_lru_scan(&gem_shrinker->lru_evictable,
+					  nr_to_scan - freed, &remaining,
+					  drm_gem_shmem_shrinker_evict);
+
+	return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
+}
+
+static int drm_gem_shmem_shrinker_init(struct drm_gem_shmem *shmem_mm,
+				       const char *shrinker_name)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
+	int err;
+
+	gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects;
+	gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects;
+	gem_shrinker->base.seeks = DEFAULT_SEEKS;
+
+	mutex_init(&gem_shrinker->lock);
+	drm_gem_lru_init(&gem_shrinker->lru_evictable, &gem_shrinker->lock);
+	drm_gem_lru_init(&gem_shrinker->lru_evicted, &gem_shrinker->lock);
+	drm_gem_lru_init(&gem_shrinker->lru_pinned, &gem_shrinker->lock);
+
+	err = register_shrinker(&gem_shrinker->base, shrinker_name);
+	if (err) {
+		mutex_destroy(&gem_shrinker->lock);
+		return err;
+	}
+
+	return 0;
+}
+
+static void drm_gem_shmem_shrinker_release(struct drm_device *dev,
+					   struct drm_gem_shmem *shmem_mm)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
+
+	unregister_shrinker(&gem_shrinker->base);
+	drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_evictable.list));
+	drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_evicted.list));
+	drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_pinned.list));
+	mutex_destroy(&gem_shrinker->lock);
+}
+
+static int drm_gem_shmem_init(struct drm_device *dev)
+{
+	int err;
+
+	if (WARN_ON(dev->shmem_mm))
+		return -EBUSY;
+
+	dev->shmem_mm = kzalloc(sizeof(*dev->shmem_mm), GFP_KERNEL);
+	if (!dev->shmem_mm)
+		return -ENOMEM;
+
+	err = drm_gem_shmem_shrinker_init(dev->shmem_mm, dev->unique);
+	if (err)
+		goto free_gem_shmem;
+
+	return 0;
+
+free_gem_shmem:
+	kfree(dev->shmem_mm);
+	dev->shmem_mm = NULL;
+
+	return err;
+}
+
+static void drm_gem_shmem_release(struct drm_device *dev, void *ptr)
+{
+	struct drm_gem_shmem *shmem_mm = dev->shmem_mm;
+
+	drm_gem_shmem_shrinker_release(dev, shmem_mm);
+	dev->shmem_mm = NULL;
+	kfree(shmem_mm);
+}
+
+/**
+ * drmm_gem_shmem_init() - Initialize drm-shmem internals
+ * @dev: DRM device
+ *
+ * Cleanup is managed automatically as part of DRM device release.
+ * Calling this function multiple times results in an error.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drmm_gem_shmem_init(struct drm_device *dev)
+{
+	int err;
+
+	err = drm_gem_shmem_init(dev);
+	if (err)
+		return err;
+
+	err = drmm_add_action_or_reset(dev, drm_gem_shmem_release, NULL);
+	if (err)
+		return err;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drmm_gem_shmem_init);
+
 MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
 MODULE_IMPORT_NS(DMA_BUF);
 MODULE_LICENSE("GPL v2");
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 6a71a2555f85..865a989d67c8 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -15,6 +15,13 @@
 #include "panfrost_gem.h"
 #include "panfrost_mmu.h"
 
+static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
+{
+	return (shmem->madv > 0) &&
+		!shmem->pages_pin_count && shmem->sgt &&
+		!shmem->base.dma_buf && !shmem->base.import_attach;
+}
+
 static unsigned long
 panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 {
@@ -27,7 +34,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc
 		return 0;
 
 	list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
-		if (drm_gem_shmem_is_purgeable(shmem))
+		if (panfrost_gem_shmem_is_purgeable(shmem))
 			count += shmem->base.size >> PAGE_SHIFT;
 	}
 
diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index a68c6a312b46..8acd455fc156 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -16,6 +16,7 @@ struct drm_vblank_crtc;
 struct drm_vma_offset_manager;
 struct drm_vram_mm;
 struct drm_fb_helper;
+struct drm_gem_shmem_shrinker;
 
 struct inode;
 
@@ -277,8 +278,13 @@ struct drm_device {
 	/** @vma_offset_manager: GEM information */
 	struct drm_vma_offset_manager *vma_offset_manager;
 
-	/** @vram_mm: VRAM MM memory manager */
-	struct drm_vram_mm *vram_mm;
+	union {
+		/** @vram_mm: VRAM MM memory manager */
+		struct drm_vram_mm *vram_mm;
+
+		/** @shmem_mm: SHMEM GEM memory manager */
+		struct drm_gem_shmem *shmem_mm;
+	};
 
 	/**
 	 * @switch_power_state:
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 20ddcd799df9..c264caf6c83b 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -6,6 +6,7 @@
 #include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/mutex.h>
+#include <linux/shrinker.h>
 
 #include <drm/drm_file.h>
 #include <drm/drm_gem.h>
@@ -15,6 +16,7 @@
 struct dma_buf_attachment;
 struct drm_mode_create_dumb;
 struct drm_printer;
+struct drm_device;
 struct sg_table;
 
 /**
@@ -39,12 +41,21 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int pages_use_count;
 
+	/**
+	 * @pages_pin_count:
+	 *
+	 * Reference count on the pinned pages table.
+	 * The pages are allowed to be evicted by the memory
+	 * shrinker only when the count is zero.
+	 */
+	unsigned int pages_pin_count;
+
 	/**
 	 * @madv: State for madvise
 	 *
 	 * 0 is active/inuse.
+	 * 1 is not-needed/can-be-purged.
 	 * A negative value means the object is purged.
-	 * Positive values are driver specific and not used by the helpers.
 	 */
 	int madv;
 
@@ -91,6 +102,12 @@ struct drm_gem_shmem_object {
 	 * @map_wc: map object write-combined (instead of using shmem defaults).
 	 */
 	bool map_wc : 1;
+
+	/**
+	 * @evicted: True if shmem pages are evicted by the memory shrinker.
+	 * Used internally by the memory shrinker.
+	 */
+	bool evicted : 1;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
@@ -112,11 +129,17 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
 
 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
 {
-	return (shmem->madv > 0) &&
-		!shmem->vmap_use_count && shmem->sgt &&
-		!shmem->base.dma_buf && !shmem->base.import_attach;
+	dma_resv_assert_held(shmem->base.resv);
+
+	return (shmem->madv > 0) && shmem->base.funcs->evict &&
+		shmem->pages_use_count && !shmem->pages_pin_count &&
+		!shmem->base.dma_buf && !shmem->base.import_attach &&
+		(shmem->sgt || shmem->evicted);
 }
 
+int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem);
+
+void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
@@ -260,6 +283,36 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
 	return drm_gem_shmem_mmap(shmem, vma);
 }
 
+/**
+ * struct drm_gem_shmem_shrinker - Memory shrinker of GEM shmem memory manager
+ */
+struct drm_gem_shmem_shrinker {
+	/** @base: Shrinker for purging shmem GEM objects */
+	struct shrinker base;
+
+	/** @lock: Protects @lru_* */
+	struct mutex lock;
+
+	/** @lru_pinned: List of pinned shmem GEM objects */
+	struct drm_gem_lru lru_pinned;
+
+	/** @lru_evictable: List of shmem GEM objects to be evicted */
+	struct drm_gem_lru lru_evictable;
+
+	/** @lru_evicted: List of evicted shmem GEM objects */
+	struct drm_gem_lru lru_evicted;
+};
+
+/**
+ * struct drm_gem_shmem - GEM shmem memory manager
+ */
+struct drm_gem_shmem {
+	/** @shrinker: GEM shmem shrinker */
+	struct drm_gem_shmem_shrinker shrinker;
+};
+
+int drmm_gem_shmem_init(struct drm_device *dev);
+
 /*
  * Driver ops
  */
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v10 09/11] drm/gem: Add drm_gem_pin_unlocked()
  2023-01-08 21:04 [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
                   ` (7 preceding siblings ...)
  2023-01-08 21:04 ` [PATCH v10 08/11] drm/shmem-helper: Add memory shrinker Dmitry Osipenko
@ 2023-01-08 21:04 ` Dmitry Osipenko
  2023-02-17 13:42   ` Thomas Zimmermann
  2023-01-08 21:04 ` [PATCH v10 10/11] drm/virtio: Support memory shrinking Dmitry Osipenko
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 41+ messages in thread
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

Add unlocked variants of the drm_gem_pin()/drm_gem_unpin() functions.
These new helpers take care of GEM dma-reservation locking on behalf of
DRM drivers.

The VirtIO-GPU driver will use these helpers to pin shmem framebuffers,
preventing their eviction during scanout.
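
A minimal usage sketch (hypothetical driver code; only the two helpers
added by this patch are taken as given):

	int err;

	err = drm_gem_pin_unlocked(obj);
	if (err)
		return err;

	/* obj is now pinned; the shrinker won't evict its pages */
	...

	/* once the BO may be evicted again, e.g. after scanout ends */
	drm_gem_unpin_unlocked(obj);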

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem.c | 29 +++++++++++++++++++++++++++++
 include/drm/drm_gem.h     |  3 +++
 2 files changed, 32 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index dbb48fc9dff3..0b8d3da985c7 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1167,6 +1167,35 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 		obj->funcs->unpin(obj);
 }
 
+int drm_gem_pin_unlocked(struct drm_gem_object *obj)
+{
+	int ret;
+
+	if (!obj->funcs->pin)
+		return 0;
+
+	ret = dma_resv_lock_interruptible(obj->resv, NULL);
+	if (ret)
+		return ret;
+
+	ret = obj->funcs->pin(obj);
+	dma_resv_unlock(obj->resv);
+
+	return ret;
+}
+EXPORT_SYMBOL(drm_gem_pin_unlocked);
+
+void drm_gem_unpin_unlocked(struct drm_gem_object *obj)
+{
+	if (!obj->funcs->unpin)
+		return;
+
+	dma_resv_lock(obj->resv, NULL);
+	obj->funcs->unpin(obj);
+	dma_resv_unlock(obj->resv);
+}
+EXPORT_SYMBOL(drm_gem_unpin_unlocked);
+
 int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	int ret;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 8e5c22f25691..6f6d96f79a67 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -493,4 +493,7 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
 
 bool drm_gem_object_evict(struct drm_gem_object *obj);
 
+int drm_gem_pin_unlocked(struct drm_gem_object *obj);
+void drm_gem_unpin_unlocked(struct drm_gem_object *obj);
+
 #endif /* __DRM_GEM_H__ */
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v10 10/11] drm/virtio: Support memory shrinking
  2023-01-08 21:04 [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
                   ` (8 preceding siblings ...)
  2023-01-08 21:04 ` [PATCH v10 09/11] drm/gem: Add drm_gem_pin_unlocked() Dmitry Osipenko
@ 2023-01-08 21:04 ` Dmitry Osipenko
  2023-01-27  8:04   ` Gerd Hoffmann
  2023-01-08 21:04 ` [PATCH v10 11/11] drm/panfrost: Switch to generic memory shrinker Dmitry Osipenko
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 41+ messages in thread
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

Support the generic drm-shmem memory shrinker and add a new madvise
IOCTL to the VirtIO-GPU driver. The BO cache manager of the Mesa driver
will mark BOs as "don't need" using the new IOCTL, letting the shrinker
purge the marked BOs on OOM. The shrinker will also evict unpurgeable
shmem BOs from memory if the guest has a swap file or partition.
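
For reference, userspace would drive the new IOCTL roughly as in this
sketch (error handling trimmed; the drm_fd and bo_handle variables are
assumed to exist):

	struct drm_virtgpu_madvise args = {
		.bo_handle = bo_handle,
		.madv = VIRTGPU_MADV_DONTNEED,
	};

	/* mark an idle cached BO as purgeable */
	ioctl(drm_fd, DRM_IOCTL_VIRTGPU_MADVISE, &args);

	/* before reusing the BO, mark it needed and check retention */
	args.madv = VIRTGPU_MADV_WILLNEED;
	ioctl(drm_fd, DRM_IOCTL_VIRTGPU_MADVISE, &args);
	if (!args.retained)
		; /* backing storage was purged, reallocate the BO */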

Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  18 +++-
 drivers/gpu/drm/virtio/virtgpu_gem.c    |  52 ++++++++++
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  |  37 +++++++
 drivers/gpu/drm/virtio/virtgpu_kms.c    |   8 ++
 drivers/gpu/drm/virtio/virtgpu_object.c | 132 +++++++++++++++++++-----
 drivers/gpu/drm/virtio/virtgpu_plane.c  |  22 +++-
 drivers/gpu/drm/virtio/virtgpu_vq.c     |  40 +++++++
 include/uapi/drm/virtgpu_drm.h          |  14 +++
 8 files changed, 293 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index af6ffb696086..07eb8d3e5cfd 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -89,6 +89,7 @@ struct virtio_gpu_object {
 	uint32_t hw_res_handle;
 	bool dumb;
 	bool created;
+	bool detached;
 	bool host3d_blob, guest_blob;
 	uint32_t blob_mem, blob_flags;
 
@@ -277,7 +278,7 @@ struct virtio_gpu_fpriv {
 };
 
 /* virtgpu_ioctl.c */
-#define DRM_VIRTIO_NUM_IOCTLS 12
+#define DRM_VIRTIO_NUM_IOCTLS 13
 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
 
@@ -313,6 +314,10 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
 				       struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_work(struct work_struct *work);
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+			     struct virtio_gpu_object_array *objs);
+int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo);
+int virtio_gpu_gem_madvise(struct virtio_gpu_object *obj, int madv);
 
 /* virtgpu_vq.c */
 int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
@@ -324,6 +329,8 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
 				   struct virtio_gpu_object *bo);
+int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object *bo);
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 					uint64_t offset,
 					uint32_t width, uint32_t height,
@@ -344,6 +351,9 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
 			      struct virtio_gpu_object *obj,
 			      struct virtio_gpu_mem_entry *ents,
 			      unsigned int nents);
+void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
+			      struct virtio_gpu_object *obj,
+			      struct virtio_gpu_fence *fence);
 int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev);
 int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev);
 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
@@ -456,6 +466,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo);
 
+int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo);
+
 int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
 			       uint32_t *resid);
 /* virtgpu_prime.c */
@@ -486,4 +498,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev,
 				   struct sg_table *sgt,
 				   enum dma_data_direction dir);
 
+/* virtgpu_gem_shrinker.c */
+int virtio_gpu_gem_shrinker_init(struct virtio_gpu_device *vgdev);
+void virtio_gpu_gem_shrinker_fini(struct virtio_gpu_device *vgdev);
+
 #endif
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 7db48d17ee3a..8f65911b1e99 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -294,3 +294,55 @@ void virtio_gpu_array_put_free_work(struct work_struct *work)
 	}
 	spin_unlock(&vgdev->obj_free_lock);
 }
+
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+			     struct virtio_gpu_object_array *objs)
+{
+	struct virtio_gpu_object *bo;
+	int ret = 0;
+	u32 i;
+
+	for (i = 0; i < objs->nents; i++) {
+		bo = gem_to_virtio_gpu_obj(objs->objs[i]);
+
+		if (virtio_gpu_is_shmem(bo) && bo->detached) {
+			ret = virtio_gpu_reattach_shmem_object(bo);
+			if (ret)
+				break;
+		}
+	}
+
+	return ret;
+}
+
+int virtio_gpu_gem_madvise(struct virtio_gpu_object *bo, int madv)
+{
+	int ret;
+
+	/* only shmem BOs are supported by shrinker */
+	if (!virtio_gpu_is_shmem(bo) || !bo->base.pages_mark_dirty_on_put)
+		return 1;
+
+	dma_resv_lock(bo->base.base.resv, NULL);
+	ret = drm_gem_shmem_madvise(&bo->base, madv);
+	dma_resv_unlock(bo->base.base.resv);
+
+	return ret;
+}
+
+int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+	int err;
+
+	if (bo->created) {
+		err = virtio_gpu_cmd_release_resource(vgdev, bo);
+		if (err)
+			return err;
+
+		virtio_gpu_notify(vgdev);
+		bo->created = false;
+	}
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 5d05093014ac..550c3c8f53f6 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -217,6 +217,10 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 		ret = virtio_gpu_array_lock_resv(buflist);
 		if (ret)
 			goto out_memdup;
+
+		ret = virtio_gpu_array_prepare(vgdev, buflist);
+		if (ret)
+			goto out_unresv;
 	}
 
 	out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
@@ -423,6 +427,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 	if (ret != 0)
 		goto err_put_free;
 
+	ret = virtio_gpu_array_prepare(vgdev, objs);
+	if (ret)
+		goto err_unlock;
+
 	fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
 	if (!fence) {
 		ret = -ENOMEM;
@@ -482,6 +490,10 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 		if (ret != 0)
 			goto err_put_free;
 
+		ret = virtio_gpu_array_prepare(vgdev, objs);
+		if (ret)
+			goto err_unlock;
+
 		ret = -ENOMEM;
 		fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
 					       0);
@@ -838,6 +850,28 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
 	return ret;
 }
 
+static int virtio_gpu_madvise_ioctl(struct drm_device *dev,
+				    void *data,
+				    struct drm_file *file)
+{
+	struct drm_virtgpu_madvise *args = data;
+	struct virtio_gpu_object *bo;
+	struct drm_gem_object *obj;
+
+	if (args->madv > VIRTGPU_MADV_DONTNEED)
+		return -EOPNOTSUPP;
+
+	obj = drm_gem_object_lookup(file, args->bo_handle);
+	if (!obj)
+		return -ENOENT;
+
+	bo = gem_to_virtio_gpu_obj(obj);
+	args->retained = virtio_gpu_gem_madvise(bo, args->madv);
+	drm_gem_object_put(obj);
+
+	return 0;
+}
+
 struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 	DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl,
 			  DRM_RENDER_ALLOW),
@@ -877,4 +911,7 @@ struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_CONTEXT_INIT, virtio_gpu_context_init_ioctl,
 			  DRM_RENDER_ALLOW),
+
+	DRM_IOCTL_DEF_DRV(VIRTGPU_MADVISE, virtio_gpu_madvise_ioctl,
+			  DRM_RENDER_ALLOW),
 };
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 27b7f14dae89..b80cf76cbbef 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -240,6 +240,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 		goto err_scanouts;
 	}
 
+	ret = drmm_gem_shmem_init(dev);
+	if (ret) {
+		DRM_ERROR("shmem init failed\n");
+		goto err_modeset;
+	}
+
 	virtio_device_ready(vgdev->vdev);
 
 	if (num_capsets)
@@ -252,6 +258,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 			   5 * HZ);
 	return 0;
 
+err_modeset:
+	virtio_gpu_modeset_fini(vgdev);
 err_scanouts:
 	virtio_gpu_free_vbufs(vgdev);
 err_vbufs:
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index c7e74cf13022..c9328f6b7117 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -97,39 +97,54 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj)
 	virtio_gpu_cleanup_object(bo);
 }
 
-static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
-	.free = virtio_gpu_free_object,
-	.open = virtio_gpu_gem_object_open,
-	.close = virtio_gpu_gem_object_close,
-	.print_info = drm_gem_shmem_object_print_info,
-	.export = virtgpu_gem_prime_export,
-	.pin = drm_gem_shmem_object_pin,
-	.unpin = drm_gem_shmem_object_unpin,
-	.get_sg_table = drm_gem_shmem_object_get_sg_table,
-	.vmap = drm_gem_shmem_object_vmap,
-	.vunmap = drm_gem_shmem_object_vunmap,
-	.mmap = drm_gem_shmem_object_mmap,
-	.vm_ops = &drm_gem_shmem_vm_ops,
-};
-
-bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo)
+static int virtio_gpu_detach_object_fenced(struct virtio_gpu_object *bo)
 {
-	return bo->base.base.funcs == &virtio_gpu_shmem_funcs;
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+	struct virtio_gpu_fence *fence;
+
+	fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
+	if (!fence)
+		return -ENOMEM;
+
+	virtio_gpu_object_detach(vgdev, bo, fence);
+	virtio_gpu_notify(vgdev);
+
+	dma_fence_wait(&fence->f, false);
+	dma_fence_put(&fence->f);
+
+	bo->detached = true;
+
+	return 0;
 }
 
-struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
-						size_t size)
+static bool virtio_gpu_shmem_evict(struct drm_gem_object *obj)
 {
-	struct virtio_gpu_object_shmem *shmem;
-	struct drm_gem_shmem_object *dshmem;
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	int err;
+
+	/*
+	 * First tell the host to stop using the guest's memory, ensuring
+	 * the host won't touch the released pages once they are gone.
+	 */
+	if (!bo->base.evicted) {
+		err = virtio_gpu_detach_object_fenced(bo);
+		if (err)
+			return false;
+	}
 
-	shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
-	if (!shmem)
-		return ERR_PTR(-ENOMEM);
+	if (drm_gem_shmem_is_purgeable(&bo->base)) {
+		err = virtio_gpu_gem_host_mem_release(bo);
+		if (err) {
+			virtio_gpu_reattach_shmem_object(bo);
+			return false;
+		}
 
-	dshmem = &shmem->base.base;
-	dshmem->base.funcs = &virtio_gpu_shmem_funcs;
-	return &dshmem->base;
+		drm_gem_shmem_purge(&bo->base);
+	} else {
+		drm_gem_shmem_evict(&bo->base);
+	}
+
+	return true;
 }
 
 static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
@@ -176,6 +191,65 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	return 0;
 }
 
+int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+	struct virtio_gpu_mem_entry *ents;
+	unsigned int nents;
+	int err;
+
+	err = drm_gem_shmem_swap_in(&bo->base);
+	if (err)
+		return err;
+
+	err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+	if (err)
+		return err;
+
+	virtio_gpu_object_attach(vgdev, bo, ents, nents);
+	virtio_gpu_notify(vgdev);
+
+	bo->detached = false;
+
+	return 0;
+}
+
+static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
+	.free = virtio_gpu_free_object,
+	.open = virtio_gpu_gem_object_open,
+	.close = virtio_gpu_gem_object_close,
+	.print_info = drm_gem_shmem_object_print_info,
+	.export = virtgpu_gem_prime_export,
+	.pin = drm_gem_shmem_object_pin,
+	.unpin = drm_gem_shmem_object_unpin,
+	.get_sg_table = drm_gem_shmem_object_get_sg_table,
+	.vmap = drm_gem_shmem_object_vmap,
+	.vunmap = drm_gem_shmem_object_vunmap,
+	.mmap = drm_gem_shmem_object_mmap,
+	.vm_ops = &drm_gem_shmem_vm_ops,
+	.evict = virtio_gpu_shmem_evict,
+};
+
+bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo)
+{
+	return bo->base.base.funcs == &virtio_gpu_shmem_funcs;
+}
+
+struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
+						size_t size)
+{
+	struct virtio_gpu_object_shmem *shmem;
+	struct drm_gem_shmem_object *dshmem;
+
+	shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
+	if (!shmem)
+		return ERR_PTR(-ENOMEM);
+
+	dshmem = &shmem->base.base;
+	dshmem->base.funcs = &virtio_gpu_shmem_funcs;
+	return &dshmem->base;
+}
+
 int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_object_params *params,
 			     struct virtio_gpu_object **bo_ptr,
@@ -228,10 +302,14 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 		virtio_gpu_cmd_resource_create_3d(vgdev, bo, params,
 						  objs, fence);
 		virtio_gpu_object_attach(vgdev, bo, ents, nents);
+
+		shmem_obj->pages_mark_dirty_on_put = 1;
 	} else {
 		virtio_gpu_cmd_create_resource(vgdev, bo, params,
 					       objs, fence);
 		virtio_gpu_object_attach(vgdev, bo, ents, nents);
+
+		shmem_obj->pages_mark_dirty_on_put = 1;
 	}
 
 	*bo_ptr = bo;
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 4c09e313bebc..2b99dea26e5c 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -238,20 +238,32 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_framebuffer *vgfb;
 	struct virtio_gpu_object *bo;
+	int err;
 
 	if (!new_state->fb)
 		return 0;
 
 	vgfb = to_virtio_gpu_framebuffer(new_state->fb);
 	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
-	if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
+
+	if (virtio_gpu_is_shmem(bo)) {
+		err = drm_gem_pin_unlocked(&bo->base.base);
+		if (err)
+			return err;
+	}
+
+	if (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob)
 		return 0;
 
 	if (bo->dumb && (plane->state->fb != new_state->fb)) {
 		vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
 						     0);
-		if (!vgfb->fence)
+		if (!vgfb->fence) {
+			if (virtio_gpu_is_shmem(bo))
+				drm_gem_unpin_unlocked(&bo->base.base);
+
 			return -ENOMEM;
+		}
 	}
 
 	return 0;
@@ -261,15 +273,21 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
 					struct drm_plane_state *state)
 {
 	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_object *bo;
 
 	if (!state->fb)
 		return;
 
 	vgfb = to_virtio_gpu_framebuffer(state->fb);
+	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+
 	if (vgfb->fence) {
 		dma_fence_put(&vgfb->fence->f);
 		vgfb->fence = NULL;
 	}
+
+	if (virtio_gpu_is_shmem(bo))
+		drm_gem_unpin_unlocked(&bo->base.base);
 }
 
 static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index a04a9b20896d..abdf3665c0ba 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -545,6 +545,21 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
 		virtio_gpu_cleanup_object(bo);
 }
 
+int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_resource_unref *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+
+	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF);
+	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+
+	return virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
+}
+
 void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev,
 				uint32_t scanout_id, uint32_t resource_id,
 				uint32_t width, uint32_t height,
@@ -645,6 +660,23 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_device *vgdev,
 	virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence);
 }
 
+static void
+virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev,
+				       u32 resource_id,
+				       struct virtio_gpu_fence *fence)
+{
+	struct virtio_gpu_resource_attach_backing *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+
+	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING);
+	cmd_p->resource_id = cpu_to_le32(resource_id);
+
+	virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence);
+}
+
 static void virtio_gpu_cmd_get_display_info_cb(struct virtio_gpu_device *vgdev,
 					       struct virtio_gpu_vbuffer *vbuf)
 {
@@ -1108,6 +1140,14 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
 					       ents, nents, NULL);
 }
 
+void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
+			      struct virtio_gpu_object *obj,
+			      struct virtio_gpu_fence *fence)
+{
+	virtio_gpu_cmd_resource_detach_backing(vgdev, obj->hw_res_handle,
+					       fence);
+}
+
 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
 			    struct virtio_gpu_output *output)
 {
diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
index 0512fde5e697..12197d8e9759 100644
--- a/include/uapi/drm/virtgpu_drm.h
+++ b/include/uapi/drm/virtgpu_drm.h
@@ -48,6 +48,7 @@ extern "C" {
 #define DRM_VIRTGPU_GET_CAPS  0x09
 #define DRM_VIRTGPU_RESOURCE_CREATE_BLOB 0x0a
 #define DRM_VIRTGPU_CONTEXT_INIT 0x0b
+#define DRM_VIRTGPU_MADVISE 0x0c
 
 #define VIRTGPU_EXECBUF_FENCE_FD_IN	0x01
 #define VIRTGPU_EXECBUF_FENCE_FD_OUT	0x02
@@ -196,6 +197,15 @@ struct drm_virtgpu_context_init {
 	__u64 ctx_set_params;
 };
 
+#define VIRTGPU_MADV_WILLNEED 0
+#define VIRTGPU_MADV_DONTNEED 1
+struct drm_virtgpu_madvise {
+	__u32 bo_handle;
+	__u32 retained; /* out, non-zero if BO can be used */
+	__u32 madv;
+	__u32 pad;
+};
+
 /*
  * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in
  * effect.  The event size is sizeof(drm_event), since there is no additional
@@ -246,6 +256,10 @@ struct drm_virtgpu_context_init {
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_CONTEXT_INIT,		\
 		struct drm_virtgpu_context_init)
 
+#define DRM_IOCTL_VIRTGPU_MADVISE \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MADVISE, \
+		 struct drm_virtgpu_madvise)
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v10 11/11] drm/panfrost: Switch to generic memory shrinker
  2023-01-08 21:04 [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
                   ` (9 preceding siblings ...)
  2023-01-08 21:04 ` [PATCH v10 10/11] drm/virtio: Support memory shrinking Dmitry Osipenko
@ 2023-01-08 21:04 ` Dmitry Osipenko
  2023-01-25 22:55 ` [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
  2023-02-17 13:28 ` Thomas Zimmermann
  12 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2023-01-08 21:04 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

Replace Panfrost's custom memory shrinker with the common drm-shmem
memory shrinker.
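
At a high level, a driver switching over wires up two things, roughly
as in this sketch (my_gem_funcs and my_shmem_evict are placeholder
names; the actual Panfrost specifics are in the diff below):

	static const struct drm_gem_object_funcs my_gem_funcs = {
		...
		/* called by the common shrinker to reclaim the BO */
		.evict = my_shmem_evict,
	};

	/* at probe time, after registering the DRM device */
	err = drmm_gem_shmem_init(ddev);
	if (err < 0)
		goto err_unregister;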

Tested-by: Steven Price <steven.price@arm.com> # Firefly-RK3288
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        |   2 -
 drivers/gpu/drm/panfrost/Makefile             |   1 -
 drivers/gpu/drm/panfrost/panfrost_device.h    |   4 -
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  27 ++--
 drivers/gpu/drm/panfrost/panfrost_gem.c       |  30 ++--
 drivers/gpu/drm/panfrost/panfrost_gem.h       |   9 --
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  | 129 ------------------
 drivers/gpu/drm/panfrost/panfrost_job.c       |  18 ++-
 include/drm/drm_gem_shmem_helper.h            |   7 -
 9 files changed, 47 insertions(+), 180 deletions(-)
 delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3ab5ec325ddb..c963cbc7a915 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -89,8 +89,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;
 
-	INIT_LIST_HEAD(&shmem->madv_list);
-
 	if (!private) {
 		/*
 		 * Our buffers are kept pinned, so allocating them
diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
index 7da2b3f02ed9..11622e22cf15 100644
--- a/drivers/gpu/drm/panfrost/Makefile
+++ b/drivers/gpu/drm/panfrost/Makefile
@@ -5,7 +5,6 @@ panfrost-y := \
 	panfrost_device.o \
 	panfrost_devfreq.o \
 	panfrost_gem.o \
-	panfrost_gem_shrinker.o \
 	panfrost_gpu.o \
 	panfrost_job.o \
 	panfrost_mmu.o \
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index d9ba68cffb77..28f28bbdbda9 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -116,10 +116,6 @@ struct panfrost_device {
 		atomic_t pending;
 	} reset;
 
-	struct mutex shrinker_lock;
-	struct list_head shrinker_list;
-	struct shrinker shrinker;
-
 	struct panfrost_devfreq pfdevfreq;
 };
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 9f3f2283b67a..e31cf9db005b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -169,7 +169,6 @@ panfrost_lookup_bos(struct drm_device *dev,
 			break;
 		}
 
-		atomic_inc(&bo->gpu_usecount);
 		job->mappings[i] = mapping;
 	}
 
@@ -401,7 +400,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 {
 	struct panfrost_file_priv *priv = file_priv->driver_priv;
 	struct drm_panfrost_madvise *args = data;
-	struct panfrost_device *pfdev = dev->dev_private;
 	struct drm_gem_object *gem_obj;
 	struct panfrost_gem_object *bo;
 	int ret = 0;
@@ -414,11 +412,15 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	bo = to_panfrost_bo(gem_obj);
 
+	if (bo->is_heap) {
+		args->retained = 1;
+		goto out_put_object;
+	}
+
 	ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
 	if (ret)
 		goto out_put_object;
 
-	mutex_lock(&pfdev->shrinker_lock);
 	mutex_lock(&bo->mappings.lock);
 	if (args->madv == PANFROST_MADV_DONTNEED) {
 		struct panfrost_gem_mapping *first;
@@ -444,17 +446,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
 
-	if (args->retained) {
-		if (args->madv == PANFROST_MADV_DONTNEED)
-			list_move_tail(&bo->base.madv_list,
-				       &pfdev->shrinker_list);
-		else if (args->madv == PANFROST_MADV_WILLNEED)
-			list_del_init(&bo->base.madv_list);
-	}
-
 out_unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
-	mutex_unlock(&pfdev->shrinker_lock);
 	dma_resv_unlock(bo->base.base.resv);
 out_put_object:
 	drm_gem_object_put(gem_obj);
@@ -586,9 +579,6 @@ static int panfrost_probe(struct platform_device *pdev)
 	ddev->dev_private = pfdev;
 	pfdev->ddev = ddev;
 
-	mutex_init(&pfdev->shrinker_lock);
-	INIT_LIST_HEAD(&pfdev->shrinker_list);
-
 	err = panfrost_device_init(pfdev);
 	if (err) {
 		if (err != -EPROBE_DEFER)
@@ -610,10 +600,14 @@ static int panfrost_probe(struct platform_device *pdev)
 	if (err < 0)
 		goto err_out1;
 
-	panfrost_gem_shrinker_init(ddev);
+	err = drmm_gem_shmem_init(ddev);
+	if (err < 0)
+		goto err_out2;
 
 	return 0;
 
+err_out2:
+	drm_dev_unregister(ddev);
 err_out1:
 	pm_runtime_disable(pfdev->dev);
 	panfrost_device_fini(pfdev);
@@ -629,7 +623,6 @@ static int panfrost_remove(struct platform_device *pdev)
 	struct drm_device *ddev = pfdev->ddev;
 
 	drm_dev_unregister(ddev);
-	panfrost_gem_shrinker_cleanup(ddev);
 
 	pm_runtime_get_sync(pfdev->dev);
 	pm_runtime_disable(pfdev->dev);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 3c812fbd126f..f03e29375354 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -19,16 +19,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
 	struct panfrost_device *pfdev = obj->dev->dev_private;
 
-	/*
-	 * Make sure the BO is no longer inserted in the shrinker list before
-	 * taking care of the destruction itself. If we don't do that we have a
-	 * race condition between this function and what's done in
-	 * panfrost_gem_shrinker_scan().
-	 */
-	mutex_lock(&pfdev->shrinker_lock);
-	list_del_init(&bo->base.madv_list);
-	mutex_unlock(&pfdev->shrinker_lock);
-
 	/*
 	 * If we still have mappings attached to the BO, there's a problem in
 	 * our refcounting.
@@ -195,6 +185,25 @@ static int panfrost_gem_pin(struct drm_gem_object *obj)
 	return drm_gem_shmem_pin(&bo->base);
 }
 
+static bool panfrost_shmem_evict(struct drm_gem_object *obj)
+{
+	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+
+	if (!drm_gem_shmem_is_purgeable(&bo->base))
+		return false;
+
+	if (!mutex_trylock(&bo->mappings.lock))
+		return false;
+
+	panfrost_gem_teardown_mappings_locked(bo);
+
+	drm_gem_shmem_purge(&bo->base);
+
+	mutex_unlock(&bo->mappings.lock);
+
+	return true;
+}
+
 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.free = panfrost_gem_free_object,
 	.open = panfrost_gem_open,
@@ -207,6 +216,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.vunmap = drm_gem_shmem_object_vunmap,
 	.mmap = drm_gem_shmem_object_mmap,
 	.vm_ops = &drm_gem_shmem_vm_ops,
+	.evict = panfrost_shmem_evict,
 };
 
 /**
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index ad2877eeeccd..6ad1bcedb932 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -30,12 +30,6 @@ struct panfrost_gem_object {
 		struct mutex lock;
 	} mappings;
 
-	/*
-	 * Count the number of jobs referencing this BO so we don't let the
-	 * shrinker reclaim this object prematurely.
-	 */
-	atomic_t gpu_usecount;
-
 	bool noexec		:1;
 	bool is_heap		:1;
 };
@@ -81,7 +75,4 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
 void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping);
 void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo);
 
-void panfrost_gem_shrinker_init(struct drm_device *dev);
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev);
-
 #endif /* __PANFROST_GEM_H__ */
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
deleted file mode 100644
index 865a989d67c8..000000000000
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ /dev/null
@@ -1,129 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2019 Arm Ltd.
- *
- * Based on msm_gem_freedreno.c:
- * Copyright (C) 2016 Red Hat
- * Author: Rob Clark <robdclark@gmail.com>
- */
-
-#include <linux/list.h>
-
-#include <drm/drm_device.h>
-#include <drm/drm_gem_shmem_helper.h>
-
-#include "panfrost_device.h"
-#include "panfrost_gem.h"
-#include "panfrost_mmu.h"
-
-static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
-{
-	return (shmem->madv > 0) &&
-		!shmem->pages_pin_count && shmem->sgt &&
-		!shmem->base.dma_buf && !shmem->base.import_attach;
-}
-
-static unsigned long
-panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
-{
-	struct panfrost_device *pfdev =
-		container_of(shrinker, struct panfrost_device, shrinker);
-	struct drm_gem_shmem_object *shmem;
-	unsigned long count = 0;
-
-	if (!mutex_trylock(&pfdev->shrinker_lock))
-		return 0;
-
-	list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
-		if (panfrost_gem_shmem_is_purgeable(shmem))
-			count += shmem->base.size >> PAGE_SHIFT;
-	}
-
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	return count;
-}
-
-static bool panfrost_gem_purge(struct drm_gem_object *obj)
-{
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
-	bool ret = false;
-
-	if (atomic_read(&bo->gpu_usecount))
-		return false;
-
-	if (!mutex_trylock(&bo->mappings.lock))
-		return false;
-
-	if (!dma_resv_trylock(shmem->base.resv))
-		goto unlock_mappings;
-
-	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge(&bo->base);
-	ret = true;
-
-	dma_resv_unlock(shmem->base.resv);
-
-unlock_mappings:
-	mutex_unlock(&bo->mappings.lock);
-	return ret;
-}
-
-static unsigned long
-panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
-{
-	struct panfrost_device *pfdev =
-		container_of(shrinker, struct panfrost_device, shrinker);
-	struct drm_gem_shmem_object *shmem, *tmp;
-	unsigned long freed = 0;
-
-	if (!mutex_trylock(&pfdev->shrinker_lock))
-		return SHRINK_STOP;
-
-	list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) {
-		if (freed >= sc->nr_to_scan)
-			break;
-		if (drm_gem_shmem_is_purgeable(shmem) &&
-		    panfrost_gem_purge(&shmem->base)) {
-			freed += shmem->base.size >> PAGE_SHIFT;
-			list_del_init(&shmem->madv_list);
-		}
-	}
-
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	if (freed > 0)
-		pr_info_ratelimited("Purging %lu bytes\n", freed << PAGE_SHIFT);
-
-	return freed;
-}
-
-/**
- * panfrost_gem_shrinker_init - Initialize panfrost shrinker
- * @dev: DRM device
- *
- * This function registers and sets up the panfrost shrinker.
- */
-void panfrost_gem_shrinker_init(struct drm_device *dev)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-	pfdev->shrinker.count_objects = panfrost_gem_shrinker_count;
-	pfdev->shrinker.scan_objects = panfrost_gem_shrinker_scan;
-	pfdev->shrinker.seeks = DEFAULT_SEEKS;
-	WARN_ON(register_shrinker(&pfdev->shrinker, "drm-panfrost"));
-}
-
-/**
- * panfrost_gem_shrinker_cleanup - Clean up panfrost shrinker
- * @dev: DRM device
- *
- * This function unregisters the panfrost shrinker.
- */
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-
-	if (pfdev->shrinker.nr_deferred) {
-		unregister_shrinker(&pfdev->shrinker);
-	}
-}
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index dbc597ab46fb..98d9751d2b2c 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -272,6 +272,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos,
 		dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE);
 }
 
+static int panfrost_objects_prepare(struct drm_gem_object **bos, int bo_count)
+{
+	struct panfrost_gem_object *bo;
+	int ret = 0;
+
+	while (!ret && bo_count--) {
+		bo = to_panfrost_bo(bos[bo_count]);
+		ret = bo->base.madv ? -ENOMEM : 0;
+	}
+
+	return ret;
+}
+
 int panfrost_job_push(struct panfrost_job *job)
 {
 	struct panfrost_device *pfdev = job->pfdev;
@@ -283,6 +296,10 @@ int panfrost_job_push(struct panfrost_job *job)
 	if (ret)
 		return ret;
 
+	ret = panfrost_objects_prepare(job->bos, job->bo_count);
+	if (ret)
+		goto unlock;
+
 	mutex_lock(&pfdev->sched_lock);
 	drm_sched_job_arm(&job->base);
 
@@ -324,7 +341,6 @@ static void panfrost_job_cleanup(struct kref *ref)
 			if (!job->mappings[i])
 				break;
 
-			atomic_dec(&job->mappings[i]->obj->gpu_usecount);
 			panfrost_gem_mapping_put(job->mappings[i]);
 		}
 		kvfree(job->mappings);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index c264caf6c83b..22039fe2b160 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -59,13 +59,6 @@ struct drm_gem_shmem_object {
 	 */
 	int madv;
 
-	/**
-	 * @madv_list: List entry for madvise tracking
-	 *
-	 * Typically used by drivers to track purgeable objects
-	 */
-	struct list_head madv_list;
-
 	/**
 	 * @sgt: Scatter/gather table for imported PRIME buffers
 	 */
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-01-08 21:04 [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
                   ` (10 preceding siblings ...)
  2023-01-08 21:04 ` [PATCH v10 11/11] drm/panfrost: Switch to generic memory shrinker Dmitry Osipenko
@ 2023-01-25 22:55 ` Dmitry Osipenko
  2023-01-27  8:13   ` Gerd Hoffmann
  2023-02-17 13:28 ` Thomas Zimmermann
  12 siblings, 1 reply; 41+ messages in thread
From: Dmitry Osipenko @ 2023-01-25 22:55 UTC (permalink / raw)
  To: Thomas Zimmermann, Gerd Hoffmann
  Cc: dri-devel, linux-kernel, kernel, virtualization, David Airlie,
	Gurchetan Singh, Chia-I Wu, Daniel Vetter, Daniel Almeida,
	Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst,
	Maxime Ripard, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar

Hello Thomas and Gerd,

On 1/9/23 00:04, Dmitry Osipenko wrote:
> This series:
> 
>   1. Makes minor fixes for drm_gem_lru and Panfrost
>   2. Brings refactoring for older code
>   3. Adds common drm-shmem memory shrinker
>   4. Enables shrinker for VirtIO-GPU driver
>   5. Switches Panfrost driver to the common shrinker
> 
> Changelog:
> 
> v10:- Rebased on a recent linux-next.
> 
>     - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
> 
>     - Added Steven's ack/r-b/t-b for the Panfrost patches.
> 
>     - Fixed missing export of the new drm_gem_object_evict() function.
> 
>     - Added fixes tags to the first two patches that are making minor fixes,
>       for consistency.

Do you have comments on this version? Otherwise an ack will be appreciated.
Thanks in advance!

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 05/11] drm/shmem: Switch to use drm_* debug helpers
  2023-01-08 21:04 ` [PATCH v10 05/11] drm/shmem: Switch to use drm_* debug helpers Dmitry Osipenko
@ 2023-01-26 12:15   ` Gerd Hoffmann
  2023-02-17 12:28   ` Thomas Zimmermann
  1 sibling, 0 replies; 41+ messages in thread
From: Gerd Hoffmann @ 2023-01-26 12:15 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: David Airlie, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
	Daniel Almeida, Gustavo Padovan, Daniel Stone, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar, dri-devel, linux-kernel, kernel, virtualization

On Mon, Jan 09, 2023 at 12:04:39AM +0300, Dmitry Osipenko wrote:
> f a multi-GPU system by using drm_WARN_*() and
> drm_dbg_kms() helpers that print out DRM device name corresponding
> to shmem GEM.

That commit message looks truncated ...

take care,
  Gerd


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs
  2023-01-08 21:04 ` [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs Dmitry Osipenko
@ 2023-01-26 12:17   ` Gerd Hoffmann
  2023-01-26 12:24     ` Dmitry Osipenko
  2023-02-17 12:41   ` Thomas Zimmermann
  1 sibling, 1 reply; 41+ messages in thread
From: Gerd Hoffmann @ 2023-01-26 12:17 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: David Airlie, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
	Daniel Almeida, Gustavo Padovan, Daniel Stone, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar, dri-devel, linux-kernel, kernel, virtualization

On Mon, Jan 09, 2023 at 12:04:40AM +0300, Dmitry Osipenko wrote:
>  its own refcounting of vmaps, use it instead of drm-shmem
> counting. This change prepares drm-shmem for addition of memory shrinker
> support where drm-shmem will use a single dma-buf reservation lock for
> all operations performed over dma-bufs.

Likewise truncated?

take care,
  Gerd


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs
  2023-01-26 12:17   ` Gerd Hoffmann
@ 2023-01-26 12:24     ` Dmitry Osipenko
  2023-01-27  8:06       ` Gerd Hoffmann
  0 siblings, 1 reply; 41+ messages in thread
From: Dmitry Osipenko @ 2023-01-26 12:24 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: David Airlie, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
	Daniel Almeida, Gustavo Padovan, Daniel Stone, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar, dri-devel, linux-kernel, kernel, virtualization

On 1/26/23 15:17, Gerd Hoffmann wrote:
> On Mon, Jan 09, 2023 at 12:04:40AM +0300, Dmitry Osipenko wrote:
>>  its own refcounting of vmaps, use it instead of drm-shmem
>> counting. This change prepares drm-shmem for addition of memory shrinker
>> support where drm-shmem will use a single dma-buf reservation lock for
>> all operations performed over dma-bufs.
> 
> Likewise truncated?

That should be an email problem on your side; please see [1][2], where the
messages are okay.

[1]
https://lore.kernel.org/dri-devel/20230108210445.3948344-7-dmitry.osipenko@collabora.com/
[2] https://patchwork.freedesktop.org/patch/517401/

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 10/11] drm/virtio: Support memory shrinking
  2023-01-08 21:04 ` [PATCH v10 10/11] drm/virtio: Support memory shrinking Dmitry Osipenko
@ 2023-01-27  8:04   ` Gerd Hoffmann
  0 siblings, 0 replies; 41+ messages in thread
From: Gerd Hoffmann @ 2023-01-27  8:04 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: David Airlie, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
	Daniel Almeida, Gustavo Padovan, Daniel Stone, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar, dri-devel, linux-kernel, kernel, virtualization

On Mon, Jan 09, 2023 at 12:04:44AM +0300, Dmitry Osipenko wrote:
> Support the generic drm-shmem memory shrinker and add a new madvise IOCTL
> to the VirtIO-GPU driver. The BO cache manager of the Mesa driver will
> mark BOs as "don't need" using the new IOCTL to let the shrinker purge the
> marked BOs on OOM; the shrinker will also evict unpurgeable shmem BOs from
> memory if the guest supports a swap file or partition.
> 
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

Acked-by: Gerd Hoffmann <kraxel@redhat.com>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs
  2023-01-26 12:24     ` Dmitry Osipenko
@ 2023-01-27  8:06       ` Gerd Hoffmann
  0 siblings, 0 replies; 41+ messages in thread
From: Gerd Hoffmann @ 2023-01-27  8:06 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: David Airlie, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
	Daniel Almeida, Gustavo Padovan, Daniel Stone, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar, dri-devel, linux-kernel, kernel, virtualization

On Thu, Jan 26, 2023 at 03:24:30PM +0300, Dmitry Osipenko wrote:
> On 1/26/23 15:17, Gerd Hoffmann wrote:
> > On Mon, Jan 09, 2023 at 12:04:40AM +0300, Dmitry Osipenko wrote:
> >>  its own refcounting of vmaps, use it instead of drm-shmem
> >> counting. This change prepares drm-shmem for addition of memory shrinker
> >> support where drm-shmem will use a single dma-buf reservation lock for
> >> all operations performed over dma-bufs.
> > 
> > Likewise truncated?
> 
> That should be an email problem on your side; please see [1][2], where the
> messages are okay.

Indeed, scratch the comments then.

take care,
  Gerd


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-01-25 22:55 ` [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
@ 2023-01-27  8:13   ` Gerd Hoffmann
  2023-01-30 12:02     ` Dmitry Osipenko
  0 siblings, 1 reply; 41+ messages in thread
From: Gerd Hoffmann @ 2023-01-27  8:13 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Thomas Zimmermann, dri-devel, linux-kernel, kernel,
	virtualization, David Airlie, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar

On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
> Hello Thomas and Gerd,
> 
> On 1/9/23 00:04, Dmitry Osipenko wrote:
> > This series:
> > 
> >   1. Makes minor fixes for drm_gem_lru and Panfrost
> >   2. Brings refactoring for older code
> >   3. Adds common drm-shmem memory shrinker
> >   4. Enables shrinker for VirtIO-GPU driver
> >   5. Switches Panfrost driver to the common shrinker
> > 
> > Changelog:
> > 
> > v10:- Rebased on a recent linux-next.
> > 
> >     - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
> > 
> >     - Added Steven's ack/r-b/t-b for the Panfrost patches.
> > 
> >     - Fixed missing export of the new drm_gem_object_evict() function.
> > 
> >     - Added fixes tags to the first two patches that are making minor fixes,
> >       for consistency.
> 
> Do you have comments on this version? Otherwise an ack will be appreciated.
> Thanks in advance!

Don't feel like signing off on the locking changes, I'm not that
familiar with the drm locking rules.  So someone else looking at them
would be good.  Otherwise the series and specifically the virtio changes
look good to me.

Acked-by: Gerd Hoffmann <kraxel@redhat.com>

take care,
  Gerd


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-01-27  8:13   ` Gerd Hoffmann
@ 2023-01-30 12:02     ` Dmitry Osipenko
  2023-02-16 12:15       ` Daniel Vetter
  0 siblings, 1 reply; 41+ messages in thread
From: Dmitry Osipenko @ 2023-01-30 12:02 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Thomas Zimmermann, dri-devel, linux-kernel, kernel,
	virtualization, David Airlie, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar

On 1/27/23 11:13, Gerd Hoffmann wrote:
> On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
>> Hello Thomas and Gerd,
>>
>> On 1/9/23 00:04, Dmitry Osipenko wrote:
>>> This series:
>>>
>>>   1. Makes minor fixes for drm_gem_lru and Panfrost
>>>   2. Brings refactoring for older code
>>>   3. Adds common drm-shmem memory shrinker
>>>   4. Enables shrinker for VirtIO-GPU driver
>>>   5. Switches Panfrost driver to the common shrinker
>>>
>>> Changelog:
>>>
>>> v10:- Rebased on a recent linux-next.
>>>
>>>     - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
>>>
>>>     - Added Steven's ack/r-b/t-b for the Panfrost patches.
>>>
>>>     - Fixed missing export of the new drm_gem_object_evict() function.
>>>
>>>     - Added fixes tags to the first two patches that are making minor fixes,
>>>       for consistency.
>>
>> Do you have comments on this version? Otherwise an ack will be appreciated.
>> Thanks in advance!
> 
> Don't feel like signing off on the locking changes, I'm not that
> familiar with the drm locking rules.  So someone else looking at them
> would be good.  Otherwise the series and specifically the virtio changes
> look good to me.
> 
> Acked-by: Gerd Hoffmann <kraxel@redhat.com>

Thomas was looking at the DRM core changes. I expect he'll ack them.

Thank you for reviewing the virtio patches!

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-01-30 12:02     ` Dmitry Osipenko
@ 2023-02-16 12:15       ` Daniel Vetter
  2023-02-16 13:08         ` AngeloGioacchino Del Regno
  2023-02-16 20:43         ` Dmitry Osipenko
  0 siblings, 2 replies; 41+ messages in thread
From: Daniel Vetter @ 2023-02-16 12:15 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Gerd Hoffmann, Thomas Zimmermann, dri-devel, linux-kernel,
	kernel, virtualization, David Airlie, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar

On Mon, Jan 30, 2023 at 03:02:10PM +0300, Dmitry Osipenko wrote:
> On 1/27/23 11:13, Gerd Hoffmann wrote:
> > On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
> >> Hello Thomas and Gerd,
> >>
> >> On 1/9/23 00:04, Dmitry Osipenko wrote:
> >>> This series:
> >>>
> >>>   1. Makes minor fixes for drm_gem_lru and Panfrost
> >>>   2. Brings refactoring for older code
> >>>   3. Adds common drm-shmem memory shrinker
> >>>   4. Enables shrinker for VirtIO-GPU driver
> >>>   5. Switches Panfrost driver to the common shrinker
> >>>
> >>> Changelog:
> >>>
> >>> v10:- Rebased on a recent linux-next.
> >>>
> >>>     - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
> >>>
> >>>     - Added Steven's ack/r-b/t-b for the Panfrost patches.
> >>>
> >>>     - Fixed missing export of the new drm_gem_object_evict() function.
> >>>
> >>>     - Added fixes tags to the first two patches that are making minor fixes,
> >>>       for consistency.
> >>
> >> Do you have comments on this version? Otherwise an ack will be appreciated.
> >> Thanks in advance!
> > 
> > Don't feel like signing off on the locking changes, I'm not that
> > familiar with the drm locking rules.  So someone else looking at them
> > would be good.  Otherwise the series and specifically the virtio changes
> > look good to me.
> > 
> > Acked-by: Gerd Hoffmann <kraxel@redhat.com>
> 
> Thomas was looking at the DRM core changes. I expect he'll ack them.
> 
> Thank you for reviewing the virtio patches!

I think best-case would be an ack from msm people that this looks good
(even better a conversion for msm to start using this).

Otherwise I think the locking looks reasonable; I think the tricky bit has
been the move to the dma-buf locking rules, but if you want I can try to
take another in-depth look. That would need to be in 2 weeks since I'm
going on vacation, so pls ping me on irc if I'm needed.

Otherwise it would be great if we can land this soon, so that it can soak
for the entire linux-next cycle to catch any driver-specific issues.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-02-16 12:15       ` Daniel Vetter
@ 2023-02-16 13:08         ` AngeloGioacchino Del Regno
  2023-02-16 20:43         ` Dmitry Osipenko
  1 sibling, 0 replies; 41+ messages in thread
From: AngeloGioacchino Del Regno @ 2023-02-16 13:08 UTC (permalink / raw)
  To: Dmitry Osipenko, Dmitry Baryshkov
  Cc: Gerd Hoffmann, Thomas Zimmermann, dri-devel, linux-kernel,
	kernel, virtualization, David Airlie, Gurchetan Singh, Chia-I Wu,
	Daniel Almeida, Gustavo Padovan, Daniel Stone, Maarten Lankhorst,
	Maxime Ripard, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Abhinav Kumar, Konrad Dybcio

On 16/02/23 13:15, Daniel Vetter wrote:
> On Mon, Jan 30, 2023 at 03:02:10PM +0300, Dmitry Osipenko wrote:
>> On 1/27/23 11:13, Gerd Hoffmann wrote:
>>> On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
>>>> Hello Thomas and Gerd,
>>>>
>>>> On 1/9/23 00:04, Dmitry Osipenko wrote:
>>>>> This series:
>>>>>
>>>>>    1. Makes minor fixes for drm_gem_lru and Panfrost
>>>>>    2. Brings refactoring for older code
>>>>>    3. Adds common drm-shmem memory shrinker
>>>>>    4. Enables shrinker for VirtIO-GPU driver
>>>>>    5. Switches Panfrost driver to the common shrinker
>>>>>
>>>>> Changelog:
>>>>>
>>>>> v10:- Rebased on a recent linux-next.
>>>>>
>>>>>      - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
>>>>>
>>>>>      - Added Steven's ack/r-b/t-b for the Panfrost patches.
>>>>>
>>>>>      - Fixed missing export of the new drm_gem_object_evict() function.
>>>>>
>>>>>      - Added fixes tags to the first two patches that are making minor fixes,
>>>>>        for consistency.
>>>>
>>>> Do you have comments on this version? Otherwise an ack will be appreciated.
>>>> Thanks in advance!
>>>
>>> Don't feel like signing off on the locking changes, I'm not that
>>> familiar with the drm locking rules.  So someone else looking at them
>>> would be good.  Otherwise the series and specifically the virtio changes
>>> look good to me.
>>>
>>> Acked-by: Gerd Hoffmann <kraxel@redhat.com>
>>
>> Thomas was looking at the DRM core changes. I expect he'll ack them.
>>
>> Thank you for reviewing the virtio patches!
> 


> I think best-case would be an ack from msm people that this looks good
> (even better a conversion for msm to start using this).
> 

Dmitry B, Konrad, can you please help with this one?

Thanks!

Regards,
Angelo

> Otherwise I think the locking looks reasonable; I think the tricky bit has
> been the move to the dma-buf locking rules, but if you want I can try to
> take another in-depth look. That would need to be in 2 weeks since I'm
> going on vacation, so pls ping me on irc if I'm needed.
> 
> Otherwise would be great if we can land this soon, so that it can soak the
> entire linux-next cycle to catch any driver specific issues.
> -Daniel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-02-16 12:15       ` Daniel Vetter
  2023-02-16 13:08         ` AngeloGioacchino Del Regno
@ 2023-02-16 20:43         ` Dmitry Osipenko
  2023-02-16 22:07           ` Daniel Vetter
  1 sibling, 1 reply; 41+ messages in thread
From: Dmitry Osipenko @ 2023-02-16 20:43 UTC (permalink / raw)
  To: Gerd Hoffmann, Thomas Zimmermann, dri-devel, linux-kernel,
	kernel, virtualization, David Airlie, Gurchetan Singh, Chia-I Wu,
	Daniel Almeida, Gustavo Padovan, Daniel Stone, Maarten Lankhorst,
	Maxime Ripard, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar

On 2/16/23 15:15, Daniel Vetter wrote:
> On Mon, Jan 30, 2023 at 03:02:10PM +0300, Dmitry Osipenko wrote:
>> On 1/27/23 11:13, Gerd Hoffmann wrote:
>>> On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
>>>> Hello Thomas and Gerd,
>>>>
>>>> On 1/9/23 00:04, Dmitry Osipenko wrote:
>>>>> This series:
>>>>>
>>>>>   1. Makes minor fixes for drm_gem_lru and Panfrost
>>>>>   2. Brings refactoring for older code
>>>>>   3. Adds common drm-shmem memory shrinker
>>>>>   4. Enables shrinker for VirtIO-GPU driver
>>>>>   5. Switches Panfrost driver to the common shrinker
>>>>>
>>>>> Changelog:
>>>>>
>>>>> v10:- Rebased on a recent linux-next.
>>>>>
>>>>>     - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
>>>>>
>>>>>     - Added Steven's ack/r-b/t-b for the Panfrost patches.
>>>>>
>>>>>     - Fixed missing export of the new drm_gem_object_evict() function.
>>>>>
>>>>>     - Added fixes tags to the first two patches that are making minor fixes,
>>>>>       for consistency.
>>>>
>>>> Do you have comments on this version? Otherwise an ack will be appreciated.
>>>> Thanks in advance!
>>>
>>> Don't feel like signing off on the locking changes, I'm not that
>>> familiar with the drm locking rules.  So someone else looking at them
>>> would be good.  Otherwise the series and specifically the virtio changes
>>> look good to me.
>>>
>>> Acked-by: Gerd Hoffmann <kraxel@redhat.com>
>>
>> Thomas was looking at the DRM core changes. I expect he'll ack them.
>>
>> Thank you for reviewing the virtio patches!
> 
> I think best-case would be an ack from msm people that this looks good
> (even better a conversion for msm to start using this).

The MSM driver is pretty much untouched by this patchset, apart from the
minor common shrinker fix. Moving the whole of MSM over to drm_shmem would
be a big change to the driver.

The Panfrost and VirtIO-GPU drivers already got their acks. I also tested
the Lima driver, which uses the drm-shmem helpers. Other DRM drivers
should be unaffected by this series.

> Otherwise I think the locking looks reasonable; I think the tricky bit has
> been the move to the dma-buf locking rules, but if you want I can try to
> take another in-depth look. That would need to be in 2 weeks since I'm
> going on vacation, so pls ping me on irc if I'm needed.

The locking conversion is mostly a straightforward replacement of the
drm-shmem mutexes with the reservation lock. The dma-buf rules were
tricky; another tricky part was silencing the bogus lockdep report of
fs_reclaim vs. the GEM shrinker at GEM destroy time, for which I borrowed
the drm_gem_shmem_resv_assert_held() solution from the MSM driver, where
Rob had hit a similar issue.
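
Roughly, the idea looks like this (a sketch from memory, not the exact
code from the series; it assumes the assertion can be skipped once the
GEM refcount has dropped to zero in the free path):

  static void drm_gem_shmem_resv_assert_held(struct drm_gem_shmem_object *shmem)
  {
          /*
           * Destroying a GEM is a special case: nobody else can hold a
           * reference at that point, so taking the resv lock in the free
           * path only produces a bogus lockdep splat about fs_reclaim
           * vs. the shrinker. Skip the assertion once the refcount has
           * dropped to zero.
           */
          if (kref_read(&shmem->base.refcount))
                  dma_resv_assert_held(shmem->base.resv);
  }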

> Otherwise it would be great if we can land this soon, so that it can soak
> for the entire linux-next cycle to catch any driver-specific issues.

That would be great. I was waiting for Thomas to ack the shmem patches
since he reviewed the previous versions, but an ack from you or anyone
else would be good too. Thanks!

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-02-16 20:43         ` Dmitry Osipenko
@ 2023-02-16 22:07           ` Daniel Vetter
  0 siblings, 0 replies; 41+ messages in thread
From: Daniel Vetter @ 2023-02-16 22:07 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Gerd Hoffmann, Thomas Zimmermann, dri-devel, linux-kernel,
	kernel, virtualization, David Airlie, Gurchetan Singh, Chia-I Wu,
	Daniel Almeida, Gustavo Padovan, Daniel Stone, Maarten Lankhorst,
	Maxime Ripard, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar

On Thu, Feb 16, 2023 at 11:43:38PM +0300, Dmitry Osipenko wrote:
> On 2/16/23 15:15, Daniel Vetter wrote:
> > On Mon, Jan 30, 2023 at 03:02:10PM +0300, Dmitry Osipenko wrote:
> >> On 1/27/23 11:13, Gerd Hoffmann wrote:
> >>> On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
> >>>> Hello Thomas and Gerd,
> >>>>
> >>>> On 1/9/23 00:04, Dmitry Osipenko wrote:
> >>>>> This series:
> >>>>>
> >>>>>   1. Makes minor fixes for drm_gem_lru and Panfrost
> >>>>>   2. Brings refactoring for older code
> >>>>>   3. Adds common drm-shmem memory shrinker
> >>>>>   4. Enables shrinker for VirtIO-GPU driver
> >>>>>   5. Switches Panfrost driver to the common shrinker
> >>>>>
> >>>>> Changelog:
> >>>>>
> >>>>> v10:- Rebased on a recent linux-next.
> >>>>>
> >>>>>     - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
> >>>>>
> >>>>>     - Added Steven's ack/r-b/t-b for the Panfrost patches.
> >>>>>
> >>>>>     - Fixed missing export of the new drm_gem_object_evict() function.
> >>>>>
> >>>>>     - Added fixes tags to the first two patches that are making minor fixes,
> >>>>>       for consistency.
> >>>>
> >>>> Do you have comments on this version? Otherwise an ack will be appreciated.
> >>>> Thanks in advance!
> >>>
> >>> Don't feel like signing off on the locking changes, I'm not that
> >>> familiar with the drm locking rules.  So someone else looking at them
> >>> would be good.  Otherwise the series and specifically the virtio changes
> >>> look good to me.
> >>>
> >>> Acked-by: Gerd Hoffmann <kraxel@redhat.com>
> >>
> >> Thomas was looking at the DRM core changes. I expect he'll ack them.
> >>
> >> Thank you for reviewing the virtio patches!
> > 
> > I think best-case would be an ack from msm people that this looks good
> > (even better a conversion for msm to start using this).
> 
> The MSM driver is pretty much untouched by this patchset, apart from the
> minor common shrinker fix. Moving the whole of MSM over to drm_shmem would
> be a big change to the driver.
> 
> The Panfrost and VirtIO-GPU drivers already got their acks. I also tested
> the Lima driver, which uses the drm-shmem helpers. Other DRM drivers
> should be unaffected by this series.

Ah, that sounds good. I somehow thought that etnaviv also uses the helpers,
but there we only had problems with dma-buf. So that's all sorted.

> > Otherwise I think the locking looks reasonable; I think the tricky bit has
> > been the move to the dma-buf locking rules, but if you want I can try to
> > take another in-depth look. That would need to be in 2 weeks since I'm
> > going on vacation, so pls ping me on irc if I'm needed.
> 
> The locking conversion is mostly a straightforward replacement of the
> drm-shmem mutexes with the reservation lock. The dma-buf rules were
> tricky; another tricky part was silencing the bogus lockdep report of
> fs_reclaim vs. the GEM shrinker at GEM destroy time, for which I borrowed
> the drm_gem_shmem_resv_assert_held() solution from the MSM driver, where
> Rob had hit a similar issue.

Ah, I missed that detail. If msm solved it the same way, then I think
chances are very high that it all ends up being compatible. Which is
really what matters, not so much whether every last driver has actually
converted over.

> > Otherwise it would be great if we can land this soon, so that it can soak
> > for the entire linux-next cycle to catch any driver-specific issues.
> 
> That would be great. I was waiting for Thomas to ack the shmem patches
> since he reviewed the previous versions, but an ack from you or anyone
> else would be good too. Thanks!

I'm good for an ack, but maybe ping Thomas for a review on irc since I'm
out next week. Also, maybe Thomas has some series you can help land for
cross-review.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 01/11] drm/msm/gem: Prevent blocking within shrinker loop
  2023-01-08 21:04 ` [PATCH v10 01/11] drm/msm/gem: Prevent blocking within shrinker loop Dmitry Osipenko
@ 2023-02-17 12:02   ` Thomas Zimmermann
  2023-02-27  4:27     ` Dmitry Osipenko
  0 siblings, 1 reply; 41+ messages in thread
From: Thomas Zimmermann @ 2023-02-17 12:02 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization


[-- Attachment #1.1: Type: text/plain, Size: 5784 bytes --]

Hi

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Consider this scenario:
> 
> 1. APP1 continuously creates lots of small GEMs
> 2. APP2 triggers `drop_caches`
> 3. Shrinker starts to evict APP1 GEMs, while APP1 produces new purgeable
>     GEMs
> 4. msm_gem_shrinker_scan() returns non-zero number of freed pages
>     and causes shrinker to try shrink more
> 5. msm_gem_shrinker_scan() returns non-zero number of freed pages again,
>     goto 4
> 6. The APP2 is blocked in `drop_caches` until APP1 stops producing
>     purgeable GEMs
> 
> To prevent this blocking scenario, check the number of remaining pages
> that the GPU shrinker couldn't release due to GEM locking contention
> or shrinking rejection. If there are no remaining pages left to shrink,
> then there is no need to free up more pages and the shrinker may break
> out of the loop.
> 
> This problem was found during shrinker/madvise IOCTL testing of the
> virtio-gpu driver. The MSM driver is affected in the same way.
> 
> Reviewed-by: Rob Clark <robdclark@gmail.com>
> Fixes: b352ba54a820 ("drm/msm/gem: Convert to using drm_gem_lru")
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>   drivers/gpu/drm/drm_gem.c              | 9 +++++++--
>   drivers/gpu/drm/msm/msm_gem_shrinker.c | 8 ++++++--
>   include/drm/drm_gem.h                  | 4 +++-
>   3 files changed, 16 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 59a0bb5ebd85..c6bca5ac6e0f 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1388,10 +1388,13 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
>    *
>    * @lru: The LRU to scan
>    * @nr_to_scan: The number of pages to try to reclaim
> + * @remaining: The number of pages left to reclaim
>    * @shrink: Callback to try to shrink/reclaim the object.
>    */
>   unsigned long
> -drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
> +drm_gem_lru_scan(struct drm_gem_lru *lru,
> +		 unsigned int nr_to_scan,
> +		 unsigned long *remaining,
>   		 bool (*shrink)(struct drm_gem_object *obj))
>   {
>   	struct drm_gem_lru still_in_lru;
> @@ -1430,8 +1433,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
>   		 * hit shrinker in response to trying to get backing pages
>   		 * for this obj (ie. while it's lock is already held)
>   		 */
> -		if (!dma_resv_trylock(obj->resv))
> +		if (!dma_resv_trylock(obj->resv)) {
> +			*remaining += obj->size >> PAGE_SHIFT;
>   			goto tail;
> +		}
>   
>   		if (shrink(obj)) {
>   			freed += obj->size >> PAGE_SHIFT;
> diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
> index 051bdbc093cf..b7c1242014ec 100644
> --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
> +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
> @@ -116,12 +116,14 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
>   	};
>   	long nr = sc->nr_to_scan;
>   	unsigned long freed = 0;
> +	unsigned long remaining = 0;
>   
>   	for (unsigned i = 0; (nr > 0) && (i < ARRAY_SIZE(stages)); i++) {
>   		if (!stages[i].cond)
>   			continue;
>   		stages[i].freed =
> -			drm_gem_lru_scan(stages[i].lru, nr, stages[i].shrink);
> +			drm_gem_lru_scan(stages[i].lru, nr, &remaining,

This function relies on remaining being pre-initialized. That's not
obvious and error-prone. At least, pass in something like
&stages[i].remaining that is then initialized internally by
drm_gem_lru_scan() to zero. And, similar to freed, sum up the individual
stages' remaining here.
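
Roughly like this sketch (just an illustration; stages[i].remaining would
be a new, hypothetical field, and drm_gem_lru_scan() would zero it
internally instead of trusting the caller):

  unsigned long freed = 0, remaining = 0;
  unsigned i;

  for (i = 0; (nr > 0) && (i < ARRAY_SIZE(stages)); i++) {
          if (!stages[i].cond)
                  continue;
          /* the callee owns the initialization of stages[i].remaining */
          stages[i].freed = drm_gem_lru_scan(stages[i].lru, nr,
                                             &stages[i].remaining,
                                             stages[i].shrink);
          nr -= stages[i].freed;
          freed += stages[i].freed;
          remaining += stages[i].remaining;
  }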

TBH I somehow don't like the overall design of how all these functions 
interact with each other. But I also can't really point to the actual 
problem. So it's best to take what you have here; maybe with the change 
I proposed.

Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

Best regards
Thomas

> +					 stages[i].shrink);
>   		nr -= stages[i].freed;
>   		freed += stages[i].freed;
>   	}
> @@ -132,7 +134,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
>   				     stages[3].freed);
>   	}
>   
> -	return (freed > 0) ? freed : SHRINK_STOP;
> +	return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
>   }
>   
>   #ifdef CONFIG_DEBUG_FS
> @@ -182,10 +184,12 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
>   		NULL,
>   	};
>   	unsigned idx, unmapped = 0;
> +	unsigned long remaining = 0;
>   
>   	for (idx = 0; lrus[idx] && unmapped < vmap_shrink_limit; idx++) {
>   		unmapped += drm_gem_lru_scan(lrus[idx],
>   					     vmap_shrink_limit - unmapped,
> +					     &remaining,
>   					     vmap_shrink);
>   	}
>   
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index 772a4adf5287..f1f00fc2dba6 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -476,7 +476,9 @@ int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
>   void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock);
>   void drm_gem_lru_remove(struct drm_gem_object *obj);
>   void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj);
> -unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
> +unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
> +			       unsigned int nr_to_scan,
> +			       unsigned long *remaining,
>   			       bool (*shrink)(struct drm_gem_object *obj));
>   
>   #endif /* __DRM_GEM_H__ */

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 03/11] drm/gem: Add evict() callback to drm_gem_object_funcs
  2023-01-08 21:04 ` [PATCH v10 03/11] drm/gem: Add evict() callback to drm_gem_object_funcs Dmitry Osipenko
@ 2023-02-17 12:23   ` Thomas Zimmermann
  0 siblings, 0 replies; 41+ messages in thread
From: Thomas Zimmermann @ 2023-02-17 12:23 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: kernel, linux-kernel, dri-devel, virtualization


[-- Attachment #1.1: Type: text/plain, Size: 2718 bytes --]

Hi

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Add a new common evict() callback to drm_gem_object_funcs and a
> corresponding drm_gem_object_evict() helper. This is a first step towards
> providing a common GEM-shrinker API for DRM drivers.
> 
> Suggested-by: Thomas Zimmermann <tzimmermann@suse.de>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

with my comments below addressed.

> ---
>   drivers/gpu/drm/drm_gem.c | 16 ++++++++++++++++
>   include/drm/drm_gem.h     | 12 ++++++++++++
>   2 files changed, 28 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index c6bca5ac6e0f..dbb48fc9dff3 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1471,3 +1471,19 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
>   	return freed;
>   }
>   EXPORT_SYMBOL(drm_gem_lru_scan);
> +
> +/**
> + * drm_gem_object_evict - helper to evict backing pages for a GEM object
> + * @obj: obj in question
> + */
> +bool

Please use int and return an errno code.

No newline here, please.

> +drm_gem_object_evict(struct drm_gem_object *obj)
> +{
> +	dma_resv_assert_held(obj->resv);
> +
> +	if (obj->funcs->evict)
> +		return obj->funcs->evict(obj);
> +
> +	return false;
> +}
> +EXPORT_SYMBOL(drm_gem_object_evict);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index f1f00fc2dba6..8e5c22f25691 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -172,6 +172,16 @@ struct drm_gem_object_funcs {
>   	 * This is optional but necessary for mmap support.
>   	 */
>   	const struct vm_operations_struct *vm_ops;
> +
> +	/**
> +	 * @evict:
> +	 *
> +	 * Evicts gem object out from memory. Used by the drm_gem_object_evict()
> +	 * helper. Returns true on success, false otherwise.
> +	 *
> +	 * This callback is optional.
> +	 */
> +	bool (*evict)(struct drm_gem_object *obj);

This should be declared between mmap and evict.

Again, please use int and return an errno code.

>   };
>   
>   /**
> @@ -481,4 +491,6 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
>   			       unsigned long *remaining,
>   			       bool (*shrink)(struct drm_gem_object *obj));
>   
> +bool drm_gem_object_evict(struct drm_gem_object *obj);

drm_gem_evict() should be the correct name; like drm_gem_vmap() and 
drm_gem_pin(), etc.
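
Taken together, a sketch of what I have in mind (illustration only, and it
assumes the evict callback is also switched over to returning an errno
code):

  int drm_gem_evict(struct drm_gem_object *obj)
  {
          dma_resv_assert_held(obj->resv);

          /* evict is optional; a missing callback is not an error */
          if (obj->funcs->evict)
                  return obj->funcs->evict(obj);

          return 0;
  }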

Best regards
Thomas

> +
>   #endif /* __DRM_GEM_H__ */

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 04/11] drm/shmem: Put booleans in the end of struct drm_gem_shmem_object
  2023-01-08 21:04 ` [PATCH v10 04/11] drm/shmem: Put booleans in the end of struct drm_gem_shmem_object Dmitry Osipenko
@ 2023-02-17 12:25   ` Thomas Zimmermann
  0 siblings, 0 replies; 41+ messages in thread
From: Thomas Zimmermann @ 2023-02-17 12:25 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization


[-- Attachment #1.1: Type: text/plain, Size: 2090 bytes --]



On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Group all 1-bit boolean members of struct drm_gem_shmem_object at the end
> of the structure, allowing the compiler to pack data better and making the
> code look more consistent.
> 
> Suggested-by: Thomas Zimmermann <tzimmermann@suse.de>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

> ---
>   include/drm/drm_gem_shmem_helper.h | 30 +++++++++++++++---------------
>   1 file changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index a2201b2488c5..5994fed5e327 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -60,20 +60,6 @@ struct drm_gem_shmem_object {
>   	 */
>   	struct list_head madv_list;
>   
> -	/**
> -	 * @pages_mark_dirty_on_put:
> -	 *
> -	 * Mark pages as dirty when they are put.
> -	 */
> -	unsigned int pages_mark_dirty_on_put    : 1;
> -
> -	/**
> -	 * @pages_mark_accessed_on_put:
> -	 *
> -	 * Mark pages as accessed when they are put.
> -	 */
> -	unsigned int pages_mark_accessed_on_put : 1;
> -
>   	/**
>   	 * @sgt: Scatter/gather table for imported PRIME buffers
>   	 */
> @@ -97,10 +83,24 @@ struct drm_gem_shmem_object {
>   	 */
>   	unsigned int vmap_use_count;
>   
> +	/**
> +	 * @pages_mark_dirty_on_put:
> +	 *
> +	 * Mark pages as dirty when they are put.
> +	 */
> +	bool pages_mark_dirty_on_put : 1;
> +
> +	/**
> +	 * @pages_mark_accessed_on_put:
> +	 *
> +	 * Mark pages as accessed when they are put.
> +	 */
> +	bool pages_mark_accessed_on_put : 1;
> +
>   	/**
>   	 * @map_wc: map object write-combined (instead of using shmem defaults).
>   	 */
> -	bool map_wc;
> +	bool map_wc : 1;
>   };
>   
>   #define to_drm_gem_shmem_obj(obj) \

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 05/11] drm/shmem: Switch to use drm_* debug helpers
  2023-01-08 21:04 ` [PATCH v10 05/11] drm/shmem: Switch to use drm_* debug helpers Dmitry Osipenko
  2023-01-26 12:15   ` Gerd Hoffmann
@ 2023-02-17 12:28   ` Thomas Zimmermann
  1 sibling, 0 replies; 41+ messages in thread
From: Thomas Zimmermann @ 2023-02-17 12:28 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization


[-- Attachment #1.1: Type: text/plain, Size: 6623 bytes --]



On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Ease debugging of a multi-GPU system by using drm_WARN_*() and
> drm_dbg_kms() helpers that print out DRM device name corresponding
> to shmem GEM.
> 
> Suggested-by: Thomas Zimmermann <tzimmermann@suse.de>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

> ---
>   drivers/gpu/drm/drm_gem_shmem_helper.c | 38 +++++++++++++++-----------
>   1 file changed, 22 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index f21f47737817..5006f7da7f2d 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -141,7 +141,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> -	WARN_ON(shmem->vmap_use_count);
> +	drm_WARN_ON(obj->dev, shmem->vmap_use_count);
>   
>   	if (obj->import_attach) {
>   		drm_prime_gem_destroy(obj, shmem->sgt);
> @@ -156,7 +156,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>   			drm_gem_shmem_put_pages(shmem);
>   	}
>   
> -	WARN_ON(shmem->pages_use_count);
> +	drm_WARN_ON(obj->dev, shmem->pages_use_count);
>   
>   	drm_gem_object_release(obj);
>   	mutex_destroy(&shmem->pages_lock);
> @@ -175,7 +175,8 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>   
>   	pages = drm_gem_get_pages(obj);
>   	if (IS_ERR(pages)) {
> -		DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages));
> +		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
> +			    PTR_ERR(pages));
>   		shmem->pages_use_count = 0;
>   		return PTR_ERR(pages);
>   	}
> @@ -207,9 +208,10 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>    */
>   int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
>   {
> +	struct drm_gem_object *obj = &shmem->base;
>   	int ret;
>   
> -	WARN_ON(shmem->base.import_attach);
> +	drm_WARN_ON(obj->dev, obj->import_attach);
>   
>   	ret = mutex_lock_interruptible(&shmem->pages_lock);
>   	if (ret)
> @@ -225,7 +227,7 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> -	if (WARN_ON_ONCE(!shmem->pages_use_count))
> +	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
>   		return;
>   
>   	if (--shmem->pages_use_count > 0)
> @@ -268,7 +270,9 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>    */
>   int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
>   {
> -	WARN_ON(shmem->base.import_attach);
> +	struct drm_gem_object *obj = &shmem->base;
> +
> +	drm_WARN_ON(obj->dev, obj->import_attach);
>   
>   	return drm_gem_shmem_get_pages(shmem);
>   }
> @@ -283,7 +287,9 @@ EXPORT_SYMBOL(drm_gem_shmem_pin);
>    */
>   void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
>   {
> -	WARN_ON(shmem->base.import_attach);
> +	struct drm_gem_object *obj = &shmem->base;
> +
> +	drm_WARN_ON(obj->dev, obj->import_attach);
>   
>   	drm_gem_shmem_put_pages(shmem);
>   }
> @@ -303,7 +309,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>   	if (obj->import_attach) {
>   		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
>   		if (!ret) {
> -			if (WARN_ON(map->is_iomem)) {
> +			if (drm_WARN_ON(obj->dev, map->is_iomem)) {
>   				dma_buf_vunmap(obj->import_attach->dmabuf, map);
>   				ret = -EIO;
>   				goto err_put_pages;
> @@ -328,7 +334,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>   	}
>   
>   	if (ret) {
> -		DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret);
> +		drm_dbg_kms(obj->dev, "Failed to vmap pages, error %d\n", ret);
>   		goto err_put_pages;
>   	}
>   
> @@ -378,7 +384,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> -	if (WARN_ON_ONCE(!shmem->vmap_use_count))
> +	if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
>   		return;
>   
>   	if (--shmem->vmap_use_count > 0)
> @@ -463,7 +469,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
>   	struct drm_gem_object *obj = &shmem->base;
>   	struct drm_device *dev = obj->dev;
>   
> -	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
> +	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
>   
>   	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
>   	sg_free_table(shmem->sgt);
> @@ -555,7 +561,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>   	mutex_lock(&shmem->pages_lock);
>   
>   	if (page_offset >= num_pages ||
> -	    WARN_ON_ONCE(!shmem->pages) ||
> +	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
>   	    shmem->madv < 0) {
>   		ret = VM_FAULT_SIGBUS;
>   	} else {
> @@ -574,7 +580,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>   	struct drm_gem_object *obj = vma->vm_private_data;
>   	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>   
> -	WARN_ON(shmem->base.import_attach);
> +	drm_WARN_ON(obj->dev, obj->import_attach);
>   
>   	mutex_lock(&shmem->pages_lock);
>   
> @@ -583,7 +589,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>   	 * mmap'd, vm_open() just grabs an additional reference for the new
>   	 * mm the vma is getting copied into (ie. on fork()).
>   	 */
> -	if (!WARN_ON_ONCE(!shmem->pages_use_count))
> +	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
>   		shmem->pages_use_count++;
>   
>   	mutex_unlock(&shmem->pages_lock);
> @@ -677,7 +683,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> -	WARN_ON(shmem->base.import_attach);
> +	drm_WARN_ON(obj->dev, obj->import_attach);
>   
>   	return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
>   }
> @@ -708,7 +714,7 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>   	if (shmem->sgt)
>   		return shmem->sgt;
>   
> -	WARN_ON(obj->import_attach);
> +	drm_WARN_ON(obj->dev, obj->import_attach);
>   
>   	ret = drm_gem_shmem_get_pages(shmem);
>   	if (ret)

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs
  2023-01-08 21:04 ` [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs Dmitry Osipenko
  2023-01-26 12:17   ` Gerd Hoffmann
@ 2023-02-17 12:41   ` Thomas Zimmermann
  1 sibling, 0 replies; 41+ messages in thread
From: Thomas Zimmermann @ 2023-02-17 12:41 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: kernel, linux-kernel, dri-devel, virtualization


[-- Attachment #1.1: Type: text/plain, Size: 3786 bytes --]

Hi

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> DMA-buf core has its own refcounting of vmaps, use it instead of drm-shmem
> counting. This change prepares drm-shmem for addition of memory shrinker
> support where drm-shmem will use a single dma-buf reservation lock for
> all operations performed over dma-bufs.
> 
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

with my comments below considered.

> ---
>   drivers/gpu/drm/drm_gem_shmem_helper.c | 35 +++++++++++++++-----------
>   1 file changed, 20 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 5006f7da7f2d..1392cbd3cc02 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -301,24 +301,22 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>   	struct drm_gem_object *obj = &shmem->base;
>   	int ret = 0;
>   
> -	if (shmem->vmap_use_count++ > 0) {
> -		iosys_map_set_vaddr(map, shmem->vaddr);
> -		return 0;
> -	}
> -
>   	if (obj->import_attach) {
>   		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
>   		if (!ret) {
>   			if (drm_WARN_ON(obj->dev, map->is_iomem)) {

I'm sure that I added this line at some point. But I'm now wondering why
we're testing this flag. Everything that uses the mapped buffer should
be agnostic to is_iomem. IIRC, the only reason for this test is that
we're setting shmem->vaddr to the returned map->vaddr. Now that that code
is gone, we can also remove the whole branch.
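
The import branch would then shrink to something like this sketch
(assuming no other user of the is_iomem test remains):

  if (obj->import_attach)
          return dma_buf_vmap(obj->import_attach->dmabuf, map);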

>   				dma_buf_vunmap(obj->import_attach->dmabuf, map);
> -				ret = -EIO;
> -				goto err_put_pages;
> +				return -EIO;
>   			}
> -			shmem->vaddr = map->vaddr;
>   		}
>   	} else {
>   		pgprot_t prot = PAGE_KERNEL;
>   
> +		if (shmem->vmap_use_count++ > 0) {
> +			iosys_map_set_vaddr(map, shmem->vaddr);
> +			return 0;
> +		}
> +
>   		ret = drm_gem_shmem_get_pages(shmem);
>   		if (ret)
>   			goto err_zero_use;
> @@ -384,15 +382,15 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> -	if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
> -		return;
> -
> -	if (--shmem->vmap_use_count > 0)
> -		return;
> -
>   	if (obj->import_attach) {
>   		dma_buf_vunmap(obj->import_attach->dmabuf, map);
>   	} else {
> +		if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
> +			return;
> +
> +		if (--shmem->vmap_use_count > 0)
> +			return;
> +
>   		vunmap(shmem->vaddr);
>   		drm_gem_shmem_put_pages(shmem);
>   	}
> @@ -660,7 +658,14 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
>   			      struct drm_printer *p, unsigned int indent)
>   {
>   	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
> -	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
> +
> +	if (shmem->base.import_attach)
> +		drm_printf_indent(p, indent, "vmap_use_count=%u\n",
> +				  shmem->base.dma_buf->vmapping_counter);

This is not vmap_use_count, and the best solution would be to add a
print_info callback to dma_bufs. So maybe simply ignore imported buffers
here.
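
I.e. something like this sketch:

  if (shmem->base.import_attach)
          return; /* vmap counters of imported buffers live in the dma-buf */

  drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
  drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
  drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);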

Best regards
Thomas

> +	else
> +		drm_printf_indent(p, indent, "vmap_use_count=%u\n",
> +				  shmem->vmap_use_count);
> +
>   	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_print_info);

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock
  2023-01-08 21:04 ` [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock Dmitry Osipenko
@ 2023-02-17 12:52   ` Thomas Zimmermann
  2023-02-17 13:33     ` Dmitry Osipenko
  2023-02-17 13:29   ` Thomas Zimmermann
  1 sibling, 1 reply; 41+ messages in thread
From: Thomas Zimmermann @ 2023-02-17 12:52 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization


[-- Attachment #1.1: Type: text/plain, Size: 23025 bytes --]

Hi

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Replace all drm-shmem locks with a GEM reservation lock. This makes locks
> consistent with dma-buf locking convention where importers are responsible
> for holding reservation lock for all operations performed over dma-bufs,
> preventing deadlock between dma-buf importers and exporters.
> 
> Suggested-by: Daniel Vetter <daniel@ffwll.ch>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

How much testing has this patch seen?

I'm asking because when I tried to fix the locking in this code, I had
to review every importer to make sure that it acquired the lock. Has this
problem been resolved?
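
For reference, the importer-side pattern in question looks roughly like
this sketch of the dma-buf locking convention (not code from this series):

  struct drm_gem_object *obj = &shmem->base;
  struct iosys_map map;
  int ret;

  /* the importer, not the helper, now holds the reservation lock */
  dma_resv_lock(obj->resv, NULL);
  ret = drm_gem_shmem_vmap(shmem, &map);
  dma_resv_unlock(obj->resv);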

Best regards
Thomas

> ---
>   drivers/gpu/drm/drm_gem_shmem_helper.c        | 185 +++++++-----------
>   drivers/gpu/drm/lima/lima_gem.c               |   8 +-
>   drivers/gpu/drm/panfrost/panfrost_drv.c       |   7 +-
>   .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   6 +-
>   drivers/gpu/drm/panfrost/panfrost_mmu.c       |  19 +-
>   include/drm/drm_gem_shmem_helper.h            |  14 +-
>   6 files changed, 94 insertions(+), 145 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 1392cbd3cc02..a1f2f2158c50 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -88,8 +88,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
>   	if (ret)
>   		goto err_release;
>   
> -	mutex_init(&shmem->pages_lock);
> -	mutex_init(&shmem->vmap_lock);
>   	INIT_LIST_HEAD(&shmem->madv_list);
>   
>   	if (!private) {
> @@ -141,11 +139,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> -	drm_WARN_ON(obj->dev, shmem->vmap_use_count);
> -
>   	if (obj->import_attach) {
>   		drm_prime_gem_destroy(obj, shmem->sgt);
>   	} else {
> +		dma_resv_lock(shmem->base.resv, NULL);
> +
> +		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
> +
>   		if (shmem->sgt) {
>   			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
>   					  DMA_BIDIRECTIONAL, 0);
> @@ -154,18 +154,18 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>   		}
>   		if (shmem->pages)
>   			drm_gem_shmem_put_pages(shmem);
> -	}
>   
> -	drm_WARN_ON(obj->dev, shmem->pages_use_count);
> +		drm_WARN_ON(obj->dev, shmem->pages_use_count);
> +
> +		dma_resv_unlock(shmem->base.resv);
> +	}
>   
>   	drm_gem_object_release(obj);
> -	mutex_destroy(&shmem->pages_lock);
> -	mutex_destroy(&shmem->vmap_lock);
>   	kfree(shmem);
>   }
>   EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>   
> -static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   	struct page **pages;
> @@ -197,35 +197,16 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>   }
>   
>   /*
> - * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
> + * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
>    * @shmem: shmem GEM object
>    *
> - * This function makes sure that backing pages exists for the shmem GEM object
> - * and increases the use count.
> - *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> + * This function decreases the use count and puts the backing pages when use drops to zero.
>    */
> -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
> -	int ret;
>   
> -	drm_WARN_ON(obj->dev, obj->import_attach);
> -
> -	ret = mutex_lock_interruptible(&shmem->pages_lock);
> -	if (ret)
> -		return ret;
> -	ret = drm_gem_shmem_get_pages_locked(shmem);
> -	mutex_unlock(&shmem->pages_lock);
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(drm_gem_shmem_get_pages);
> -
> -static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> -{
> -	struct drm_gem_object *obj = &shmem->base;
> +	dma_resv_assert_held(shmem->base.resv);
>   
>   	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
>   		return;
> @@ -243,19 +224,6 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>   			  shmem->pages_mark_accessed_on_put);
>   	shmem->pages = NULL;
>   }
> -
> -/*
> - * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
> - * @shmem: shmem GEM object
> - *
> - * This function decreases the use count and puts the backing pages when use drops to zero.
> - */
> -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> -{
> -	mutex_lock(&shmem->pages_lock);
> -	drm_gem_shmem_put_pages_locked(shmem);
> -	mutex_unlock(&shmem->pages_lock);
> -}
>   EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>   
>   /**
> @@ -272,6 +240,8 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> +	dma_resv_assert_held(shmem->base.resv);
> +
>   	drm_WARN_ON(obj->dev, obj->import_attach);
>   
>   	return drm_gem_shmem_get_pages(shmem);
> @@ -289,14 +259,31 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> +	dma_resv_assert_held(shmem->base.resv);
> +
>   	drm_WARN_ON(obj->dev, obj->import_attach);
>   
>   	drm_gem_shmem_put_pages(shmem);
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_unpin);
>   
> -static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> -				     struct iosys_map *map)
> +/*
> + * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> + * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + *       store.
> + *
> + * This function makes sure that a contiguous kernel virtual address mapping
> + * exists for the buffer backing the shmem GEM object. It hides the differences
> + * between dma-buf imported and natively allocated objects.
> + *
> + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> +		       struct iosys_map *map)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   	int ret = 0;
> @@ -312,6 +299,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>   	} else {
>   		pgprot_t prot = PAGE_KERNEL;
>   
> +		dma_resv_assert_held(shmem->base.resv);
> +
>   		if (shmem->vmap_use_count++ > 0) {
>   			iosys_map_set_vaddr(map, shmem->vaddr);
>   			return 0;
> @@ -346,45 +335,30 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>   
>   	return ret;
>   }
> +EXPORT_SYMBOL(drm_gem_shmem_vmap);
>   
>   /*
> - * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> + * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
>    * @shmem: shmem GEM object
> - * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> - *       store.
> - *
> - * This function makes sure that a contiguous kernel virtual address mapping
> - * exists for the buffer backing the shmem GEM object. It hides the differences
> - * between dma-buf imported and natively allocated objects.
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
>    *
> - * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
> + * This function cleans up a kernel virtual address mapping acquired by
> + * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> + * zero.
>    *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> + * This function hides the differences between dma-buf imported and natively
> + * allocated objects.
>    */
> -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> -		       struct iosys_map *map)
> -{
> -	int ret;
> -
> -	ret = mutex_lock_interruptible(&shmem->vmap_lock);
> -	if (ret)
> -		return ret;
> -	ret = drm_gem_shmem_vmap_locked(shmem, map);
> -	mutex_unlock(&shmem->vmap_lock);
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(drm_gem_shmem_vmap);
> -
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> -					struct iosys_map *map)
> +void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> +			  struct iosys_map *map)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
>   	if (obj->import_attach) {
>   		dma_buf_vunmap(obj->import_attach->dmabuf, map);
>   	} else {
> +		dma_resv_assert_held(shmem->base.resv);
> +
>   		if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
>   			return;
>   
> @@ -397,26 +371,6 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
>   
>   	shmem->vaddr = NULL;
>   }
> -
> -/*
> - * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
> - * @shmem: shmem GEM object
> - * @map: Kernel virtual address where the SHMEM GEM object was mapped
> - *
> - * This function cleans up a kernel virtual address mapping acquired by
> - * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> - * zero.
> - *
> - * This function hides the differences between dma-buf imported and natively
> - * allocated objects.
> - */
> -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> -			  struct iosys_map *map)
> -{
> -	mutex_lock(&shmem->vmap_lock);
> -	drm_gem_shmem_vunmap_locked(shmem, map);
> -	mutex_unlock(&shmem->vmap_lock);
> -}
>   EXPORT_SYMBOL(drm_gem_shmem_vunmap);
>   
>   static struct drm_gem_shmem_object *
> @@ -449,24 +403,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
>    */
>   int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
>   {
> -	mutex_lock(&shmem->pages_lock);
> +	dma_resv_assert_held(shmem->base.resv);
>   
>   	if (shmem->madv >= 0)
>   		shmem->madv = madv;
>   
>   	madv = shmem->madv;
>   
> -	mutex_unlock(&shmem->pages_lock);
> -
>   	return (madv >= 0);
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_madvise);
>   
> -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
> +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   	struct drm_device *dev = obj->dev;
>   
> +	dma_resv_assert_held(shmem->base.resv);
> +
>   	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
>   
>   	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
> @@ -474,7 +428,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
>   	kfree(shmem->sgt);
>   	shmem->sgt = NULL;
>   
> -	drm_gem_shmem_put_pages_locked(shmem);
> +	drm_gem_shmem_put_pages(shmem);
>   
>   	shmem->madv = -1;
>   
> @@ -490,17 +444,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
>   
>   	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
>   }
> -EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
> -
> -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> -{
> -	if (!mutex_trylock(&shmem->pages_lock))
> -		return false;
> -	drm_gem_shmem_purge_locked(shmem);
> -	mutex_unlock(&shmem->pages_lock);
> -
> -	return true;
> -}
>   EXPORT_SYMBOL(drm_gem_shmem_purge);
>   
>   /**
> @@ -556,7 +499,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>   	/* We don't use vmf->pgoff since that has the fake offset */
>   	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
>   
> -	mutex_lock(&shmem->pages_lock);
> +	dma_resv_lock(shmem->base.resv, NULL);
>   
>   	if (page_offset >= num_pages ||
>   	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> @@ -568,7 +511,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>   		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
>   	}
>   
> -	mutex_unlock(&shmem->pages_lock);
> +	dma_resv_unlock(shmem->base.resv);
>   
>   	return ret;
>   }
> @@ -580,7 +523,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>   
>   	drm_WARN_ON(obj->dev, obj->import_attach);
>   
> -	mutex_lock(&shmem->pages_lock);
> +	dma_resv_lock(shmem->base.resv, NULL);
>   
>   	/*
>   	 * We should have already pinned the pages when the buffer was first
> @@ -590,7 +533,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>   	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
>   		shmem->pages_use_count++;
>   
> -	mutex_unlock(&shmem->pages_lock);
> +	dma_resv_unlock(shmem->base.resv);
>   
>   	drm_gem_vm_open(vma);
>   }
> @@ -600,7 +543,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
>   	struct drm_gem_object *obj = vma->vm_private_data;
>   	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>   
> +	dma_resv_lock(shmem->base.resv, NULL);
>   	drm_gem_shmem_put_pages(shmem);
> +	dma_resv_unlock(shmem->base.resv);
> +
>   	drm_gem_vm_close(vma);
>   }
>   
> @@ -635,7 +581,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
>   		return dma_buf_mmap(obj->dma_buf, vma, 0);
>   	}
>   
> +	dma_resv_lock(shmem->base.resv, NULL);
>   	ret = drm_gem_shmem_get_pages(shmem);
> +	dma_resv_unlock(shmem->base.resv);
> +
>   	if (ret)
>   		return ret;
>   
> @@ -721,9 +670,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>   
>   	drm_WARN_ON(obj->dev, obj->import_attach);
>   
> +	dma_resv_lock(shmem->base.resv, NULL);
> +
>   	ret = drm_gem_shmem_get_pages(shmem);
>   	if (ret)
> -		return ERR_PTR(ret);
> +		goto err_unlock;
>   
>   	sgt = drm_gem_shmem_get_sg_table(shmem);
>   	if (IS_ERR(sgt)) {
> @@ -737,6 +688,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>   
>   	shmem->sgt = sgt;
>   
> +	dma_resv_unlock(shmem->base.resv);
> +
>   	return sgt;
>   
>   err_free_sgt:
> @@ -744,6 +697,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>   	kfree(sgt);
>   err_put_pages:
>   	drm_gem_shmem_put_pages(shmem);
> +err_unlock:
> +	dma_resv_unlock(shmem->base.resv);
>   	return ERR_PTR(ret);
>   }
>   EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 0f1ca0b0db49..5008f0c2428f 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
>   
>   	new_size = min(new_size, bo->base.base.size);
>   
> -	mutex_lock(&bo->base.pages_lock);
> +	dma_resv_lock(bo->base.base.resv, NULL);
>   
>   	if (bo->base.pages) {
>   		pages = bo->base.pages;
> @@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
>   		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
>   				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
>   		if (!pages) {
> -			mutex_unlock(&bo->base.pages_lock);
> +			dma_resv_unlock(bo->base.base.resv);
>   			return -ENOMEM;
>   		}
>   
> @@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
>   		struct page *page = shmem_read_mapping_page(mapping, i);
>   
>   		if (IS_ERR(page)) {
> -			mutex_unlock(&bo->base.pages_lock);
> +			dma_resv_unlock(bo->base.base.resv);
>   			return PTR_ERR(page);
>   		}
>   		pages[i] = page;
>   	}
>   
> -	mutex_unlock(&bo->base.pages_lock);
> +	dma_resv_unlock(bo->base.base.resv);
>   
>   	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
>   					new_size, GFP_KERNEL);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index abb0dadd8f63..9f3f2283b67a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -414,6 +414,10 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>   
>   	bo = to_panfrost_bo(gem_obj);
>   
> +	ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
> +	if (ret)
> +		goto out_put_object;
> +
>   	mutex_lock(&pfdev->shrinker_lock);
>   	mutex_lock(&bo->mappings.lock);
>   	if (args->madv == PANFROST_MADV_DONTNEED) {
> @@ -451,7 +455,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>   out_unlock_mappings:
>   	mutex_unlock(&bo->mappings.lock);
>   	mutex_unlock(&pfdev->shrinker_lock);
> -
> +	dma_resv_unlock(bo->base.base.resv);
> +out_put_object:
>   	drm_gem_object_put(gem_obj);
>   	return ret;
>   }
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> index bf0170782f25..6a71a2555f85 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> @@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
>   	if (!mutex_trylock(&bo->mappings.lock))
>   		return false;
>   
> -	if (!mutex_trylock(&shmem->pages_lock))
> +	if (!dma_resv_trylock(shmem->base.resv))
>   		goto unlock_mappings;
>   
>   	panfrost_gem_teardown_mappings_locked(bo);
> -	drm_gem_shmem_purge_locked(&bo->base);
> +	drm_gem_shmem_purge(&bo->base);
>   	ret = true;
>   
> -	mutex_unlock(&shmem->pages_lock);
> +	dma_resv_unlock(shmem->base.resv);
>   
>   unlock_mappings:
>   	mutex_unlock(&bo->mappings.lock);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index 666a5e53fe19..0679df57f394 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -443,6 +443,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   	struct panfrost_gem_mapping *bomapping;
>   	struct panfrost_gem_object *bo;
>   	struct address_space *mapping;
> +	struct drm_gem_object *obj;
>   	pgoff_t page_offset;
>   	struct sg_table *sgt;
>   	struct page **pages;
> @@ -465,15 +466,16 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   	page_offset = addr >> PAGE_SHIFT;
>   	page_offset -= bomapping->mmnode.start;
>   
> -	mutex_lock(&bo->base.pages_lock);
> +	obj = &bo->base.base;
> +
> +	dma_resv_lock(obj->resv, NULL);
>   
>   	if (!bo->base.pages) {
>   		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
>   				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
>   		if (!bo->sgts) {
> -			mutex_unlock(&bo->base.pages_lock);
>   			ret = -ENOMEM;
> -			goto err_bo;
> +			goto err_unlock;
>   		}
>   
>   		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
> @@ -481,9 +483,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   		if (!pages) {
>   			kvfree(bo->sgts);
>   			bo->sgts = NULL;
> -			mutex_unlock(&bo->base.pages_lock);
>   			ret = -ENOMEM;
> -			goto err_bo;
> +			goto err_unlock;
>   		}
>   		bo->base.pages = pages;
>   		bo->base.pages_use_count = 1;
> @@ -491,7 +492,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   		pages = bo->base.pages;
>   		if (pages[page_offset]) {
>   			/* Pages are already mapped, bail out. */
> -			mutex_unlock(&bo->base.pages_lock);
>   			goto out;
>   		}
>   	}
> @@ -502,14 +502,11 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
>   		pages[i] = shmem_read_mapping_page(mapping, i);
>   		if (IS_ERR(pages[i])) {
> -			mutex_unlock(&bo->base.pages_lock);
>   			ret = PTR_ERR(pages[i]);
>   			goto err_pages;
>   		}
>   	}
>   
> -	mutex_unlock(&bo->base.pages_lock);
> -
>   	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
>   	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
>   					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
> @@ -528,6 +525,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
>   
>   out:
> +	dma_resv_unlock(obj->resv);
> +
>   	panfrost_gem_mapping_put(bomapping);
>   
>   	return 0;
> @@ -536,6 +535,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   	sg_free_table(sgt);
>   err_pages:
>   	drm_gem_shmem_put_pages(&bo->base);
> +err_unlock:
> +	dma_resv_unlock(obj->resv);
>   err_bo:
>   	panfrost_gem_mapping_put(bomapping);
>   	return ret;
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5994fed5e327..20ddcd799df9 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
>   	 */
>   	struct drm_gem_object base;
>   
> -	/**
> -	 * @pages_lock: Protects the page table and use count
> -	 */
> -	struct mutex pages_lock;
> -
>   	/**
>   	 * @pages: Page table
>   	 */
> @@ -65,11 +60,6 @@ struct drm_gem_shmem_object {
>   	 */
>   	struct sg_table *sgt;
>   
> -	/**
> -	 * @vmap_lock: Protects the vmap address and use count
> -	 */
> -	struct mutex vmap_lock;
> -
>   	/**
>   	 * @vaddr: Kernel virtual address of the backing memory
>   	 */
> @@ -109,7 +99,6 @@ struct drm_gem_shmem_object {
>   struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
>   void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
>   
> -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
>   void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
>   int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
>   void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
> @@ -128,8 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
>   		!shmem->base.dma_buf && !shmem->base.import_attach;
>   }
>   
> -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
> -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
> +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
>   
>   struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
>   struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 08/11] drm/shmem-helper: Add memory shrinker
  2023-01-08 21:04 ` [PATCH v10 08/11] drm/shmem-helper: Add memory shrinker Dmitry Osipenko
@ 2023-02-17 13:19   ` Thomas Zimmermann
  2023-02-27  4:34     ` Dmitry Osipenko
  0 siblings, 1 reply; 41+ messages in thread
From: Thomas Zimmermann @ 2023-02-17 13:19 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization


[-- Attachment #1.1: Type: text/plain, Size: 28054 bytes --]

Hi

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Introduce a common drm-shmem shrinker for DRM drivers.
> 
> To start using the drm-shmem shrinker, drivers should do the following
> (a rough driver-side sketch is shown after this list):
> 
> 1. Implement the evict() callback of the GEM object, where the driver
>     should check whether the object is purgeable or evictable using the
>     drm-shmem helpers and perform the shrinking action
> 
> 2. Initialize the drm-shmem internals using drmm_gem_shmem_init(drm_device),
>     which will register the drm-shmem shrinker
> 
> 3. Implement a madvise IOCTL that will use drm_gem_shmem_madvise()
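
A rough driver-side sketch of the three steps above; the my_*() names
are placeholders, and the bool return follows the evict() callback
convention as it stands at this point in the series:

   /* 1. evict() callback: the shrinker has already checked that the
    *    object is evictable/purgeable and holds the reservation lock */
   static bool my_gem_evict(struct drm_gem_object *obj)
   {
   	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

   	/* driver-specific teardown (GPU mappings etc.) would go here */

   	if (drm_gem_shmem_is_purgeable(shmem))
   		drm_gem_shmem_purge(shmem);
   	else
   		drm_gem_shmem_evict(shmem);

   	return true;
   }

   /* 2. at device init time, register the common shrinker */
   err = drmm_gem_shmem_init(drm);

   /* 3. in the driver's madvise IOCTL, under the reservation lock */
   dma_resv_lock(shmem->base.resv, NULL);
   retained = drm_gem_shmem_madvise(shmem, args->madv);
   dma_resv_unlock(shmem->base.resv);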

I left comments below, but it's complicated code and a fairly large 
change. Is there any chance of splitting this up in a meaningful way?

> 
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>   drivers/gpu/drm/drm_gem_shmem_helper.c        | 460 ++++++++++++++++--
>   .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   9 +-
>   include/drm/drm_device.h                      |  10 +-
>   include/drm/drm_gem_shmem_helper.h            |  61 ++-
>   4 files changed, 490 insertions(+), 50 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index a1f2f2158c50..3ab5ec325ddb 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -20,6 +20,7 @@
>   #include <drm/drm_device.h>
>   #include <drm/drm_drv.h>
>   #include <drm/drm_gem_shmem_helper.h>
> +#include <drm/drm_managed.h>
>   #include <drm/drm_prime.h>
>   #include <drm/drm_print.h>
>   
> @@ -128,6 +129,57 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
>   }
>   EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
>   
> +static void drm_gem_shmem_resv_assert_held(struct drm_gem_shmem_object *shmem)
> +{
> +	/*
> +	 * Destroying the object is a special case.. drm_gem_shmem_free()
> +	 * calls many things that WARN_ON if the obj lock is not held.  But
> +	 * acquiring the obj lock in drm_gem_shmem_free() can cause a locking
> +	 * order inversion between reservation_ww_class_mutex and fs_reclaim.
> +	 *
> +	 * This deadlock is not actually possible, because no one should
> +	 * already be holding the lock when drm_gem_shmem_free() is called.
> +	 * Unfortunately lockdep is not aware of this detail.  So when the
> +	 * refcount drops to zero, we pretend it is already locked.
> +	 */
> +	if (kref_read(&shmem->base.refcount))
> +		dma_resv_assert_held(shmem->base.resv);
> +}
> +
> +static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
> +{
> +	dma_resv_assert_held(shmem->base.resv);
> +
> +	return (shmem->madv >= 0) && shmem->base.funcs->evict &&
> +		shmem->pages_use_count && !shmem->pages_pin_count &&
> +		!shmem->base.dma_buf && !shmem->base.import_attach &&
> +		shmem->sgt && !shmem->evicted;
> +}
> +
> +static void
> +drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem)
> +{
> +	struct drm_gem_object *obj = &shmem->base;
> +	struct drm_gem_shmem *shmem_mm = obj->dev->shmem_mm;
> +	struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
> +
> +	drm_gem_shmem_resv_assert_held(shmem);
> +
> +	if (!gem_shrinker || obj->import_attach)
> +		return;
> +
> +	if (shmem->madv < 0)
> +		drm_gem_lru_remove(&shmem->base);
> +	else if (drm_gem_shmem_is_evictable(shmem) || drm_gem_shmem_is_purgeable(shmem))
> +		drm_gem_lru_move_tail(&gem_shrinker->lru_evictable, &shmem->base);
> +	else if (shmem->evicted)
> +		drm_gem_lru_move_tail(&gem_shrinker->lru_evicted, &shmem->base);
> +	else if (!shmem->pages)
> +		drm_gem_lru_remove(&shmem->base);
> +	else
> +		drm_gem_lru_move_tail(&gem_shrinker->lru_pinned, &shmem->base);
> +}
> +
>   /**
>    * drm_gem_shmem_free - Free resources associated with a shmem GEM object
>    * @shmem: shmem GEM object to free
> @@ -142,7 +194,8 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>   	if (obj->import_attach) {
>   		drm_prime_gem_destroy(obj, shmem->sgt);
>   	} else {
> -		dma_resv_lock(shmem->base.resv, NULL);
> +		/* take out shmem GEM object from the memory shrinker */
> +		drm_gem_shmem_madvise(shmem, -1);
>   
>   		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
>   
> @@ -152,12 +205,10 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>   			sg_free_table(shmem->sgt);
>   			kfree(shmem->sgt);
>   		}
> -		if (shmem->pages)
> +		if (shmem->pages_use_count)
>   			drm_gem_shmem_put_pages(shmem);
>   
>   		drm_WARN_ON(obj->dev, shmem->pages_use_count);
> -
> -		dma_resv_unlock(shmem->base.resv);
>   	}
>   
>   	drm_gem_object_release(obj);
> @@ -165,19 +216,31 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>   }
>   EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>   
> -static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +static int
> +drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   	struct page **pages;
>   
> -	if (shmem->pages_use_count++ > 0)
> +	dma_resv_assert_held(shmem->base.resv);
> +
> +	if (shmem->madv < 0) {
> +		drm_WARN_ON(obj->dev, shmem->pages);
> +		return -ENOMEM;
> +	}
> +
> +	if (shmem->pages) {
> +		drm_WARN_ON(obj->dev, !shmem->evicted);
>   		return 0;
> +	}
> +
> +	if (drm_WARN_ON(obj->dev, !shmem->pages_use_count))
> +		return -EINVAL;
>   
>   	pages = drm_gem_get_pages(obj);
>   	if (IS_ERR(pages)) {
>   		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
>   			    PTR_ERR(pages));
> -		shmem->pages_use_count = 0;
>   		return PTR_ERR(pages);
>   	}
>   
> @@ -196,6 +259,58 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
>   	return 0;
>   }
>   
> +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +{
> +	int err;
> +
> +	dma_resv_assert_held(shmem->base.resv);
> +
> +	if (shmem->madv < 0)
> +		return -ENOMEM;
> +
> +	if (shmem->pages_use_count++ > 0) {
> +		err = drm_gem_shmem_swap_in(shmem);
> +		if (err)
> +			goto err_zero_use;
> +
> +		return 0;
> +	}
> +
> +	err = drm_gem_shmem_acquire_pages(shmem);
> +	if (err)
> +		goto err_zero_use;
> +
> +	drm_gem_shmem_update_pages_state(shmem);
> +
> +	return 0;
> +
> +err_zero_use:
> +	shmem->pages_use_count = 0;
> +
> +	return err;
> +}
> +
> +static void
> +drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
> +{
> +	struct drm_gem_object *obj = &shmem->base;
> +
> +	if (!shmem->pages) {
> +		drm_WARN_ON(obj->dev, !shmem->evicted && shmem->madv >= 0);
> +		return;
> +	}
> +
> +#ifdef CONFIG_X86
> +	if (shmem->map_wc)
> +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> +#endif
> +
> +	drm_gem_put_pages(obj, shmem->pages,
> +			  shmem->pages_mark_dirty_on_put,
> +			  shmem->pages_mark_accessed_on_put);
> +	shmem->pages = NULL;
> +}
> +
>   /*
>    * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
>    * @shmem: shmem GEM object
> @@ -206,7 +321,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> -	dma_resv_assert_held(shmem->base.resv);
> +	drm_gem_shmem_resv_assert_held(shmem);
>   
>   	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
>   		return;
> @@ -214,15 +329,9 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
>   	if (--shmem->pages_use_count > 0)
>   		return;
>   
> -#ifdef CONFIG_X86
> -	if (shmem->map_wc)
> -		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> -#endif
> +	drm_gem_shmem_release_pages(shmem);
>   
> -	drm_gem_put_pages(obj, shmem->pages,
> -			  shmem->pages_mark_dirty_on_put,
> -			  shmem->pages_mark_accessed_on_put);
> -	shmem->pages = NULL;
> +	drm_gem_shmem_update_pages_state(shmem);
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>   
> @@ -239,12 +348,17 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>   int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
> +	int ret;
>   
>   	dma_resv_assert_held(shmem->base.resv);
>   
>   	drm_WARN_ON(obj->dev, obj->import_attach);
>   
> -	return drm_gem_shmem_get_pages(shmem);
> +	ret = drm_gem_shmem_get_pages(shmem);
> +	if (!ret)
> +		shmem->pages_pin_count++;
> +
> +	return ret;
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_pin);
>   
> @@ -263,7 +377,12 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
>   
>   	drm_WARN_ON(obj->dev, obj->import_attach);
>   
> +	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_pin_count))
> +		return;
> +
>   	drm_gem_shmem_put_pages(shmem);
> +
> +	shmem->pages_pin_count--;
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_unpin);
>   
> @@ -306,7 +425,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
>   			return 0;
>   		}
>   
> -		ret = drm_gem_shmem_get_pages(shmem);
> +		ret = drm_gem_shmem_pin(shmem);
>   		if (ret)
>   			goto err_zero_use;
>   
> @@ -329,7 +448,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
>   
>   err_put_pages:
>   	if (!obj->import_attach)
> -		drm_gem_shmem_put_pages(shmem);
> +		drm_gem_shmem_unpin(shmem);
>   err_zero_use:
>   	shmem->vmap_use_count = 0;
>   
> @@ -366,7 +485,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
>   			return;
>   
>   		vunmap(shmem->vaddr);
> -		drm_gem_shmem_put_pages(shmem);
> +		drm_gem_shmem_unpin(shmem);
>   	}
>   
>   	shmem->vaddr = NULL;
> @@ -403,48 +522,84 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
>    */
>   int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
>   {
> -	dma_resv_assert_held(shmem->base.resv);
> +	drm_gem_shmem_resv_assert_held(shmem);
>   
>   	if (shmem->madv >= 0)
>   		shmem->madv = madv;
>   
>   	madv = shmem->madv;
>   
> +	drm_gem_shmem_update_pages_state(shmem);
> +
>   	return (madv >= 0);
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_madvise);
>   
> -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> +/**
> + * drm_gem_shmem_swap_in() - Moves shmem GEM back to memory and enables
> + *                           hardware access to the memory.

Do we have a better name than _swap_in()? I suggest 
drm_gem_shmem_unevict(), which suggests that it's the inverse of _evict().

> + * @shmem: shmem GEM object
> + *
> + * This function moves shmem GEM back to memory if it was previously evicted
> + * by the memory shrinker. The GEM is ready to use on success.
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
> -	struct drm_device *dev = obj->dev;
> +	struct sg_table *sgt;
> +	int err;
>   
>   	dma_resv_assert_held(shmem->base.resv);
>   
> -	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
> +	if (shmem->evicted) {
> +		err = drm_gem_shmem_acquire_pages(shmem);
> +		if (err)
> +			return err;
> +
> +		sgt = drm_gem_shmem_get_sg_table(shmem);
> +		if (IS_ERR(sgt))
> +			return PTR_ERR(sgt);
> +
> +		err = dma_map_sgtable(obj->dev->dev, sgt,
> +				      DMA_BIDIRECTIONAL, 0);
> +		if (err) {
> +			sg_free_table(sgt);
> +			kfree(sgt);
> +			return err;
> +		}
>   
> -	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
> -	sg_free_table(shmem->sgt);
> -	kfree(shmem->sgt);
> -	shmem->sgt = NULL;
> +		shmem->sgt = sgt;
> +		shmem->evicted = false;
>   
> -	drm_gem_shmem_put_pages(shmem);
> +		drm_gem_shmem_update_pages_state(shmem);
> +	}
>   
> -	shmem->madv = -1;
> +	if (!shmem->pages)
> +		return -ENOMEM;
>   
> -	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
> -	drm_gem_free_mmap_offset(obj);
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in);
>   
> -	/* Our goal here is to return as much of the memory as
> -	 * is possible back to the system as we are called from OOM.
> -	 * To do this we must instruct the shmfs to drop all of its
> -	 * backing pages, *now*.
> -	 */
> -	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
> +static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem)
> +{
> +	struct drm_gem_object *obj = &shmem->base;
> +	struct drm_device *dev = obj->dev;
>   
> -	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
> +	if (shmem->evicted)
> +		return;
> +
> +	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
> +	drm_gem_shmem_release_pages(shmem);
> +	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
> +
> +	sg_free_table(shmem->sgt);
> +	kfree(shmem->sgt);
> +	shmem->sgt = NULL;
>   }
> -EXPORT_SYMBOL(drm_gem_shmem_purge);
>   
>   /**
>    * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
> @@ -495,22 +650,33 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>   	vm_fault_t ret;
>   	struct page *page;
>   	pgoff_t page_offset;
> +	bool pages_unpinned;
> +	int err;
>   
>   	/* We don't use vmf->pgoff since that has the fake offset */
>   	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
>   
>   	dma_resv_lock(shmem->base.resv, NULL);
>   
> -	if (page_offset >= num_pages ||
> -	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> -	    shmem->madv < 0) {
> +	/* Sanity-check that we have the pages pointer when it should be present */
> +	pages_unpinned = (shmem->evicted || shmem->madv < 0 || !shmem->pages_use_count);
> +	drm_WARN_ON_ONCE(obj->dev, !shmem->pages ^ pages_unpinned);
> +
> +	if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) {
>   		ret = VM_FAULT_SIGBUS;
>   	} else {
> +		err = drm_gem_shmem_swap_in(shmem);
> +		if (err) {
> +			ret = VM_FAULT_OOM;
> +			goto unlock;
> +		}
> +
>   		page = shmem->pages[page_offset];
>   
>   		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
>   	}
>   
> +unlock:
>   	dma_resv_unlock(shmem->base.resv);
>   
>   	return ret;
> @@ -533,6 +699,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>   	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
>   		shmem->pages_use_count++;
>   
> +	drm_gem_shmem_update_pages_state(shmem);
>   	dma_resv_unlock(shmem->base.resv);
>   
>   	drm_gem_vm_open(vma);
> @@ -615,7 +782,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
>   		drm_printf_indent(p, indent, "vmap_use_count=%u\n",
>   				  shmem->vmap_use_count);
>   
> +	drm_printf_indent(p, indent, "evicted=%d\n", shmem->evicted);
>   	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
> +	drm_printf_indent(p, indent, "madv=%d\n", shmem->madv);
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_print_info);
>   
> @@ -688,6 +857,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>   
>   	shmem->sgt = sgt;
>   
> +	drm_gem_shmem_update_pages_state(shmem);
> +
>   	dma_resv_unlock(shmem->base.resv);
>   
>   	return sgt;
> @@ -738,6 +909,209 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
>   }
>   EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
>   
> +static struct drm_gem_shmem_shrinker *
> +to_drm_shrinker(struct shrinker *shrinker)

to_drm_gem_shmem_shrinker()

> +{
> +	return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
> +}
> +
> +static unsigned long
> +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
> +				     struct shrink_control *sc)
> +{
> +	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
> +	unsigned long count = gem_shrinker->lru_evictable.count;
> +
> +	if (count >= SHRINK_EMPTY)
> +		return SHRINK_EMPTY - 1;
> +
> +	return count ?: SHRINK_EMPTY;
> +}
> +
> +void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem)
> +{
> +	struct drm_gem_object *obj = &shmem->base;
> +
> +	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_evictable(shmem));
> +	drm_WARN_ON(obj->dev, shmem->evicted);
> +
> +	drm_gem_shmem_unpin_pages(shmem);
> +
> +	shmem->evicted = true;
> +	drm_gem_shmem_update_pages_state(shmem);
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_evict);
> +
> +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> +{
> +	struct drm_gem_object *obj = &shmem->base;
> +
> +	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
> +
> +	drm_gem_shmem_unpin_pages(shmem);
> +	drm_gem_free_mmap_offset(obj);
> +
> +	/* Our goal here is to return as much of the memory as
> +	 * is possible back to the system as we are called from OOM.
> +	 * To do this we must instruct the shmfs to drop all of its
> +	 * backing pages, *now*.
> +	 */
> +	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
> +
> +	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
> +
> +	shmem->madv = -1;
> +	shmem->evicted = false;
> +	drm_gem_shmem_update_pages_state(shmem);
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
> +
> +static bool drm_gem_is_busy(struct drm_gem_object *obj)
> +{
> +	return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ);
> +}

This is a generic GEM function. But do we need it?

> +
> +static bool drm_gem_shmem_shrinker_evict(struct drm_gem_object *obj)
> +{
> +	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> +
> +	if (!drm_gem_shmem_is_evictable(shmem) ||
> +	    get_nr_swap_pages() < obj->size >> PAGE_SHIFT ||
> +	    drm_gem_is_busy(obj))

Because here and below, we call drm_gem_is_busy(). Could we test 
dma_resv_test_signaled() directly in drm_gem_shmem_evict() once and for all?
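
I.e. roughly this, as a sketch of the idea only:

   /* in drm_gem_shmem_evict(), before doing any work */
   if (!dma_resv_test_signaled(shmem->base.resv, DMA_RESV_USAGE_READ))
   	return;	/* object is still busy */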

> +		return false;
> +
> +	return drm_gem_object_evict(obj);

I complained about the use of booleans before. Here it should be

   int ret = _evict();
   if (ret)
	return false;
   return true;

and the shrink callback for drm_gem_lru_scan() should be changed to use 
errno codes as well. That's for a later patchset.
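
For that later rework, the callbacks could then take roughly this shape
(illustrative only, assuming drm_gem_object_evict() is converted to
errno codes as well):

   static int drm_gem_shmem_shrinker_purge(struct drm_gem_object *obj)
   {
   	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

   	if (!drm_gem_shmem_is_purgeable(shmem))
   		return -EBUSY;

   	return drm_gem_object_evict(obj);
   }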

> +}
> +
> +static bool drm_gem_shmem_shrinker_purge(struct drm_gem_object *obj)
> +{
> +	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> +
> +	if (!drm_gem_shmem_is_purgeable(shmem) ||
> +	    drm_gem_is_busy(obj))
> +		return false;
> +
> +	return drm_gem_object_evict(obj);
> +}
> +
> +static unsigned long
> +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
> +				    struct shrink_control *sc)
> +{
> +	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
> +	unsigned long nr_to_scan = sc->nr_to_scan;
> +	unsigned long remaining = 0;
> +	unsigned long freed = 0;
> +
> +	/* purge as many objects as we can */
> +	freed += drm_gem_lru_scan(&gem_shrinker->lru_evictable,
> +				  nr_to_scan, &remaining,
> +				  drm_gem_shmem_shrinker_purge);
> +
> +	/* evict as many objects as we can */
> +	if (freed < nr_to_scan)
> +		freed += drm_gem_lru_scan(&gem_shrinker->lru_evictable,
> +					  nr_to_scan - freed, &remaining,
> +					  drm_gem_shmem_shrinker_evict);
> +
> +	return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
> +}
> +
> +static int drm_gem_shmem_shrinker_init(struct drm_gem_shmem *shmem_mm,
> +				       const char *shrinker_name)
> +{
> +	struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
> +	int err;
> +
> +	gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects;
> +	gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects;
> +	gem_shrinker->base.seeks = DEFAULT_SEEKS;
> +
> +	mutex_init(&gem_shrinker->lock);
> +	drm_gem_lru_init(&gem_shrinker->lru_evictable, &gem_shrinker->lock);
> +	drm_gem_lru_init(&gem_shrinker->lru_evicted, &gem_shrinker->lock);
> +	drm_gem_lru_init(&gem_shrinker->lru_pinned, &gem_shrinker->lock);
> +
> +	err = register_shrinker(&gem_shrinker->base, shrinker_name);
> +	if (err) {
> +		mutex_destroy(&gem_shrinker->lock);
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +static void drm_gem_shmem_shrinker_release(struct drm_device *dev,
> +					   struct drm_gem_shmem *shmem_mm)
> +{
> +	struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
> +
> +	unregister_shrinker(&gem_shrinker->base);
> +	drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_evictable.list));
> +	drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_evicted.list));
> +	drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_pinned.list));
> +	mutex_destroy(&gem_shrinker->lock);
> +}
> +
> +static int drm_gem_shmem_init(struct drm_device *dev)
> +{
> +	int err;
> +
> +	if (WARN_ON(dev->shmem_mm))

drm_WARN_ON()

> +		return -EBUSY;
> +
> +	dev->shmem_mm = kzalloc(sizeof(*dev->shmem_mm), GFP_KERNEL);
> +	if (!dev->shmem_mm)
> +		return -ENOMEM;
> +
> +	err = drm_gem_shmem_shrinker_init(dev->shmem_mm, dev->unique);
> +	if (err)
> +		goto free_gem_shmem;
> +
> +	return 0;
> +
> +free_gem_shmem:
> +	kfree(dev->shmem_mm);
> +	dev->shmem_mm = NULL;
> +
> +	return err;
> +}
> +
> +static void drm_gem_shmem_release(struct drm_device *dev, void *ptr)
> +{
> +	struct drm_gem_shmem *shmem_mm = dev->shmem_mm;
> +
> +	drm_gem_shmem_shrinker_release(dev, shmem_mm);
> +	dev->shmem_mm = NULL;
> +	kfree(shmem_mm);
> +}
> +
> +/**
> + * drmm_gem_shmem_init() - Initialize drm-shmem internals
> + * @dev: DRM device
> + *
> + * Cleanup is automatically managed as part of DRM device releasing.
> + * Calling this function multiple times will result in an error.
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drmm_gem_shmem_init(struct drm_device *dev)
> +{
> +	int err;
> +
> +	err = drm_gem_shmem_init(dev);
> +	if (err)
> +		return err;
> +
> +	err = drmm_add_action_or_reset(dev, drm_gem_shmem_release, NULL);
> +	if (err)
> +		return err;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(drmm_gem_shmem_init);
> +
>   MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
>   MODULE_IMPORT_NS(DMA_BUF);
>   MODULE_LICENSE("GPL v2");
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> index 6a71a2555f85..865a989d67c8 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> @@ -15,6 +15,13 @@
>   #include "panfrost_gem.h"
>   #include "panfrost_mmu.h"
>   
> +static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
> +{
> +	return (shmem->madv > 0) &&
> +		!shmem->pages_pin_count && shmem->sgt &&
> +		!shmem->base.dma_buf && !shmem->base.import_attach;
> +}
> +
>   static unsigned long
>   panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
>   {
> @@ -27,7 +34,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc
>   		return 0;
>   
>   	list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
> -		if (drm_gem_shmem_is_purgeable(shmem))
> +		if (panfrost_gem_shmem_is_purgeable(shmem))
>   			count += shmem->base.size >> PAGE_SHIFT;
>   	}
>   
> diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
> index a68c6a312b46..8acd455fc156 100644
> --- a/include/drm/drm_device.h
> +++ b/include/drm/drm_device.h
> @@ -16,6 +16,7 @@ struct drm_vblank_crtc;
>   struct drm_vma_offset_manager;
>   struct drm_vram_mm;
>   struct drm_fb_helper;
> +struct drm_gem_shmem_shrinker;
>   
>   struct inode;
>   
> @@ -277,8 +278,13 @@ struct drm_device {
>   	/** @vma_offset_manager: GEM information */
>   	struct drm_vma_offset_manager *vma_offset_manager;
>   
> -	/** @vram_mm: VRAM MM memory manager */
> -	struct drm_vram_mm *vram_mm;
> +	union {
> +		/** @vram_mm: VRAM MM memory manager */
> +		struct drm_vram_mm *vram_mm;
> +
> +		/** @shmem_mm: SHMEM GEM memory manager */
> +		struct drm_gem_shmem *shmem_mm;
> +	};
>   
>   	/**
>   	 * @switch_power_state:
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 20ddcd799df9..c264caf6c83b 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -6,6 +6,7 @@
>   #include <linux/fs.h>
>   #include <linux/mm.h>
>   #include <linux/mutex.h>
> +#include <linux/shrinker.h>
>   
>   #include <drm/drm_file.h>
>   #include <drm/drm_gem.h>
> @@ -15,6 +16,7 @@
>   struct dma_buf_attachment;
>   struct drm_mode_create_dumb;
>   struct drm_printer;
> +struct drm_device;

Alphabetically, please.

>   struct sg_table;
>   
>   /**
> @@ -39,12 +41,21 @@ struct drm_gem_shmem_object {
>   	 */
>   	unsigned int pages_use_count;
>   
> +	/**
> +	 * @pages_pin_count:
> +	 *
> +	 * Reference count on the pinned pages table.
> +	 * The pages are allowed to be evicted by the memory
> +	 * shrinker only when the count is zero.
> +	 */
> +	unsigned int pages_pin_count;
> +
>   	/**
>   	 * @madv: State for madvise
>   	 *
>   	 * 0 is active/inuse.
> +	 * 1 is not-needed/can-be-purged
>   	 * A negative value is the object is purged.
> -	 * Positive values are driver specific and not used by the helpers.
>   	 */
>   	int madv;
>   
> @@ -91,6 +102,12 @@ struct drm_gem_shmem_object {
>   	 * @map_wc: map object write-combined (instead of using shmem defaults).
>   	 */
>   	bool map_wc : 1;
> +
> +	/**
> +	 * @evicted: True if shmem pages are evicted by the memory shrinker.
> +	 * Used internally by memory shrinker.
> +	 */
> +	bool evicted : 1;
>   };
>   
>   #define to_drm_gem_shmem_obj(obj) \
> @@ -112,11 +129,17 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
>   
>   static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
>   {
> -	return (shmem->madv > 0) &&
> -		!shmem->vmap_use_count && shmem->sgt &&
> -		!shmem->base.dma_buf && !shmem->base.import_attach;
> +	dma_resv_assert_held(shmem->base.resv);
> +
> +	return (shmem->madv > 0) && shmem->base.funcs->evict &&
> +		shmem->pages_use_count && !shmem->pages_pin_count &&
> +		!shmem->base.dma_buf && !shmem->base.import_attach &&
> +		(shmem->sgt || shmem->evicted);
>   }
>   
> +int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem);
> +
> +void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem);
>   void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
>   
>   struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
> @@ -260,6 +283,36 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
>   	return drm_gem_shmem_mmap(shmem, vma);
>   }
>   
> +/**
> + * struct drm_gem_shmem_shrinker - Memory shrinker of GEM shmem memory manager
> + */
> +struct drm_gem_shmem_shrinker {
> +	/** @base: Shrinker for purging shmem GEM objects */
> +	struct shrinker base;
> +
> +	/** @lock: Protects @lru_* */
> +	struct mutex lock;
> +
> +	/** @lru_pinned: List of pinned shmem GEM objects */
> +	struct drm_gem_lru lru_pinned;
> +
> +	/** @lru_evictable: List of shmem GEM objects to be evicted */
> +	struct drm_gem_lru lru_evictable;
> +
> +	/** @lru_evicted: List of evicted shmem GEM objects */
> +	struct drm_gem_lru lru_evicted;
> +};
> +
> +/**
> + * struct drm_gem_shmem - GEM shmem memory manager
> + */
> +struct drm_gem_shmem {
> +	/** @shrinker: GEM shmem shrinker */
> +	struct drm_gem_shmem_shrinker shrinker;
> +};
> +
> +int drmm_gem_shmem_init(struct drm_device *dev);
> +
>   /*
>    * Driver ops
>    */

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-01-08 21:04 [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
                   ` (11 preceding siblings ...)
  2023-01-25 22:55 ` [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
@ 2023-02-17 13:28 ` Thomas Zimmermann
  2023-02-17 13:41   ` Dmitry Osipenko
  12 siblings, 1 reply; 41+ messages in thread
From: Thomas Zimmermann @ 2023-02-17 13:28 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization


[-- Attachment #1.1: Type: text/plain, Size: 5838 bytes --]

Hi,

I looked through the series. Most of the patches should have an r-b or 
a-b at this point. I can't say much about patch 2 and had questions 
about others.

Maybe you can already land patches 2 and 4 to 6? They look independent 
of the shrinker changes. You could also attempt to land the locking 
changes in patch 7; they need testing. I'll send you an a-b for 
that patch.

Best regards
Thomas

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> This series:
> 
>    1. Makes minor fixes for drm_gem_lru and Panfrost
>    2. Brings refactoring for older code
>    3. Adds common drm-shmem memory shrinker
>    4. Enables shrinker for VirtIO-GPU driver
>    5. Switches Panfrost driver to the common shrinker
> 
> Changelog:
> 
> v10:- Rebased on a recent linux-next.
> 
>      - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
> 
>      - Added Steven's ack/r-b/t-b for the Panfrost patches.
> 
>      - Fixed missing export of the new drm_gem_object_evict() function.
> 
>      - Added fixes tags to the first two patches that are making minor fixes,
>        for consistency.
> 
> v9: - Replaced struct drm_gem_shmem_shrinker with drm_gem_shmem and
>        moved it to drm_device, like was suggested by Thomas Zimmermann.
> 
>      - Replaced drm_gem_shmem_shrinker_register() with drmm_gem_shmem_init(),
>        like was suggested by Thomas Zimmermann.
> 
>      - Moved evict() callback to drm_gem_object_funcs and added common
>        drm_gem_object_evict() helper, like was suggested by Thomas Zimmermann.
> 
>      - The shmem object now is evictable by default, like was suggested by
>        Thomas Zimmermann. Dropped the set_evictable/purgeble() functions
>        as well, drivers will decide whether BO is evictable within theirs
>        madvise IOCTL.
> 
>      - Added patches that convert drm-shmem code to use drm_WARN_ON() and
>        drm_dbg_kms(), like was requested by Thomas Zimmermann.
> 
>      - Turned drm_gem_shmem_object booleans into 1-bit bit fields, like was
>        suggested by Thomas Zimmermann.
> 
>      - Switched to use drm_dev->unique for the shmem shrinker name. Drivers
>        don't need to specify the name explicitly anymore.
> 
>      - Re-added dma_resv_test_signaled() that was missing in v8 and also
>        fixed its argument to DMA_RESV_USAGE_READ. See comment to
>        dma_resv_usage_rw().
> 
>      - Added new fix for Panfrost driver that silences lockdep warning
>        caused by shrinker. Both Panfrost old and new shmem shrinkers are
>        affected.
> 
> v8: - Rebased on top of recent linux-next that now has dma-buf locking
>        convention patches merged, which was blocking shmem shrinker before.
> 
>      - Shmem shrinker now uses new drm_gem_lru helper.
> 
>      - Dropped Steven Price t-b from the Panfrost patch because code
>        changed significantly since v6 and should be re-tested.
> 
> v7: - dma-buf locking convention
> 
> v6: https://lore.kernel.org/dri-devel/20220526235040.678984-1-dmitry.osipenko@collabora.com/
> 
> Related patches:
> 
> Mesa: https://gitlab.freedesktop.org/digetx/mesa/-/commits/virgl-madvise
> igt:  https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/virtio-madvise
>        https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/panfrost-madvise
> 
> The Mesa and IGT patches will be sent out once the kernel part will land.
> 
> Dmitry Osipenko (11):
>    drm/msm/gem: Prevent blocking within shrinker loop
>    drm/panfrost: Don't sync rpm suspension after mmu flushing
>    drm/gem: Add evict() callback to drm_gem_object_funcs
>    drm/shmem: Put booleans in the end of struct drm_gem_shmem_object
>    drm/shmem: Switch to use drm_* debug helpers
>    drm/shmem-helper: Don't use vmap_use_count for dma-bufs
>    drm/shmem-helper: Switch to reservation lock
>    drm/shmem-helper: Add memory shrinker
>    drm/gem: Add drm_gem_pin_unlocked()
>    drm/virtio: Support memory shrinking
>    drm/panfrost: Switch to generic memory shrinker
> 
>   drivers/gpu/drm/drm_gem.c                     |  54 +-
>   drivers/gpu/drm/drm_gem_shmem_helper.c        | 646 +++++++++++++-----
>   drivers/gpu/drm/lima/lima_gem.c               |   8 +-
>   drivers/gpu/drm/msm/msm_gem_shrinker.c        |   8 +-
>   drivers/gpu/drm/panfrost/Makefile             |   1 -
>   drivers/gpu/drm/panfrost/panfrost_device.h    |   4 -
>   drivers/gpu/drm/panfrost/panfrost_drv.c       |  34 +-
>   drivers/gpu/drm/panfrost/panfrost_gem.c       |  30 +-
>   drivers/gpu/drm/panfrost/panfrost_gem.h       |   9 -
>   .../gpu/drm/panfrost/panfrost_gem_shrinker.c  | 122 ----
>   drivers/gpu/drm/panfrost/panfrost_job.c       |  18 +-
>   drivers/gpu/drm/panfrost/panfrost_mmu.c       |  21 +-
>   drivers/gpu/drm/virtio/virtgpu_drv.h          |  18 +-
>   drivers/gpu/drm/virtio/virtgpu_gem.c          |  52 ++
>   drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  37 +
>   drivers/gpu/drm/virtio/virtgpu_kms.c          |   8 +
>   drivers/gpu/drm/virtio/virtgpu_object.c       | 132 +++-
>   drivers/gpu/drm/virtio/virtgpu_plane.c        |  22 +-
>   drivers/gpu/drm/virtio/virtgpu_vq.c           |  40 ++
>   include/drm/drm_device.h                      |  10 +-
>   include/drm/drm_gem.h                         |  19 +-
>   include/drm/drm_gem_shmem_helper.h            | 112 +--
>   include/uapi/drm/virtgpu_drm.h                |  14 +
>   23 files changed, 1010 insertions(+), 409 deletions(-)
>   delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock
  2023-01-08 21:04 ` [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock Dmitry Osipenko
  2023-02-17 12:52   ` Thomas Zimmermann
@ 2023-02-17 13:29   ` Thomas Zimmermann
  1 sibling, 0 replies; 41+ messages in thread
From: Thomas Zimmermann @ 2023-02-17 13:29 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: kernel, linux-kernel, dri-devel, virtualization


[-- Attachment #1.1: Type: text/plain, Size: 22939 bytes --]

Hi

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Replace all drm-shmem locks with a GEM reservation lock. This makes the
> locking consistent with the dma-buf locking convention, where importers are
> responsible for holding the reservation lock for all operations performed
> over dma-bufs, preventing deadlocks between dma-buf importers and exporters.
> 
> Suggested-by: Daniel Vetter <daniel@ffwll.ch>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
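
For callers, the pattern after this change is roughly the following
(a minimal usage sketch, not code from the patch):

   struct iosys_map map;
   int ret;

   ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
   if (ret)
   	return ret;

   ret = drm_gem_shmem_vmap(shmem, &map);

   dma_resv_unlock(shmem->base.resv);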

I don't dare to r-b this, but take my

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>

if you want to land this patch.

Best regards
Thomas

> ---
>   drivers/gpu/drm/drm_gem_shmem_helper.c        | 185 +++++++-----------
>   drivers/gpu/drm/lima/lima_gem.c               |   8 +-
>   drivers/gpu/drm/panfrost/panfrost_drv.c       |   7 +-
>   .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   6 +-
>   drivers/gpu/drm/panfrost/panfrost_mmu.c       |  19 +-
>   include/drm/drm_gem_shmem_helper.h            |  14 +-
>   6 files changed, 94 insertions(+), 145 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 1392cbd3cc02..a1f2f2158c50 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -88,8 +88,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
>   	if (ret)
>   		goto err_release;
>   
> -	mutex_init(&shmem->pages_lock);
> -	mutex_init(&shmem->vmap_lock);
>   	INIT_LIST_HEAD(&shmem->madv_list);
>   
>   	if (!private) {
> @@ -141,11 +139,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> -	drm_WARN_ON(obj->dev, shmem->vmap_use_count);
> -
>   	if (obj->import_attach) {
>   		drm_prime_gem_destroy(obj, shmem->sgt);
>   	} else {
> +		dma_resv_lock(shmem->base.resv, NULL);
> +
> +		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
> +
>   		if (shmem->sgt) {
>   			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
>   					  DMA_BIDIRECTIONAL, 0);
> @@ -154,18 +154,18 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>   		}
>   		if (shmem->pages)
>   			drm_gem_shmem_put_pages(shmem);
> -	}
>   
> -	drm_WARN_ON(obj->dev, shmem->pages_use_count);
> +		drm_WARN_ON(obj->dev, shmem->pages_use_count);
> +
> +		dma_resv_unlock(shmem->base.resv);
> +	}
>   
>   	drm_gem_object_release(obj);
> -	mutex_destroy(&shmem->pages_lock);
> -	mutex_destroy(&shmem->vmap_lock);
>   	kfree(shmem);
>   }
>   EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>   
> -static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   	struct page **pages;
> @@ -197,35 +197,16 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>   }
>   
>   /*
> - * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
> + * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
>    * @shmem: shmem GEM object
>    *
> - * This function makes sure that backing pages exists for the shmem GEM object
> - * and increases the use count.
> - *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> + * This function decreases the use count and puts the backing pages when use drops to zero.
>    */
> -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
> -	int ret;
>   
> -	drm_WARN_ON(obj->dev, obj->import_attach);
> -
> -	ret = mutex_lock_interruptible(&shmem->pages_lock);
> -	if (ret)
> -		return ret;
> -	ret = drm_gem_shmem_get_pages_locked(shmem);
> -	mutex_unlock(&shmem->pages_lock);
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(drm_gem_shmem_get_pages);
> -
> -static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> -{
> -	struct drm_gem_object *obj = &shmem->base;
> +	dma_resv_assert_held(shmem->base.resv);
>   
>   	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
>   		return;
> @@ -243,19 +224,6 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>   			  shmem->pages_mark_accessed_on_put);
>   	shmem->pages = NULL;
>   }
> -
> -/*
> - * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
> - * @shmem: shmem GEM object
> - *
> - * This function decreases the use count and puts the backing pages when use drops to zero.
> - */
> -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> -{
> -	mutex_lock(&shmem->pages_lock);
> -	drm_gem_shmem_put_pages_locked(shmem);
> -	mutex_unlock(&shmem->pages_lock);
> -}
>   EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>   
>   /**
> @@ -272,6 +240,8 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> +	dma_resv_assert_held(shmem->base.resv);
> +
>   	drm_WARN_ON(obj->dev, obj->import_attach);
>   
>   	return drm_gem_shmem_get_pages(shmem);
> @@ -289,14 +259,31 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
> +	dma_resv_assert_held(shmem->base.resv);
> +
>   	drm_WARN_ON(obj->dev, obj->import_attach);
>   
>   	drm_gem_shmem_put_pages(shmem);
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_unpin);
>   
> -static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> -				     struct iosys_map *map)
> +/*
> + * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> + * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + *       store.
> + *
> + * This function makes sure that a contiguous kernel virtual address mapping
> + * exists for the buffer backing the shmem GEM object. It hides the differences
> + * between dma-buf imported and natively allocated objects.
> + *
> + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> +		       struct iosys_map *map)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   	int ret = 0;
> @@ -312,6 +299,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>   	} else {
>   		pgprot_t prot = PAGE_KERNEL;
>   
> +		dma_resv_assert_held(shmem->base.resv);
> +
>   		if (shmem->vmap_use_count++ > 0) {
>   			iosys_map_set_vaddr(map, shmem->vaddr);
>   			return 0;
> @@ -346,45 +335,30 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>   
>   	return ret;
>   }
> +EXPORT_SYMBOL(drm_gem_shmem_vmap);
>   
>   /*
> - * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> + * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
>    * @shmem: shmem GEM object
> - * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> - *       store.
> - *
> - * This function makes sure that a contiguous kernel virtual address mapping
> - * exists for the buffer backing the shmem GEM object. It hides the differences
> - * between dma-buf imported and natively allocated objects.
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
>    *
> - * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
> + * This function cleans up a kernel virtual address mapping acquired by
> + * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> + * zero.
>    *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> + * This function hides the differences between dma-buf imported and natively
> + * allocated objects.
>    */
> -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> -		       struct iosys_map *map)
> -{
> -	int ret;
> -
> -	ret = mutex_lock_interruptible(&shmem->vmap_lock);
> -	if (ret)
> -		return ret;
> -	ret = drm_gem_shmem_vmap_locked(shmem, map);
> -	mutex_unlock(&shmem->vmap_lock);
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(drm_gem_shmem_vmap);
> -
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> -					struct iosys_map *map)
> +void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> +			  struct iosys_map *map)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   
>   	if (obj->import_attach) {
>   		dma_buf_vunmap(obj->import_attach->dmabuf, map);
>   	} else {
> +		dma_resv_assert_held(shmem->base.resv);
> +
>   		if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
>   			return;
>   
> @@ -397,26 +371,6 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
>   
>   	shmem->vaddr = NULL;
>   }
> -
> -/*
> - * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
> - * @shmem: shmem GEM object
> - * @map: Kernel virtual address where the SHMEM GEM object was mapped
> - *
> - * This function cleans up a kernel virtual address mapping acquired by
> - * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> - * zero.
> - *
> - * This function hides the differences between dma-buf imported and natively
> - * allocated objects.
> - */
> -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> -			  struct iosys_map *map)
> -{
> -	mutex_lock(&shmem->vmap_lock);
> -	drm_gem_shmem_vunmap_locked(shmem, map);
> -	mutex_unlock(&shmem->vmap_lock);
> -}
>   EXPORT_SYMBOL(drm_gem_shmem_vunmap);
>   
>   static struct drm_gem_shmem_object *
> @@ -449,24 +403,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
>    */
>   int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
>   {
> -	mutex_lock(&shmem->pages_lock);
> +	dma_resv_assert_held(shmem->base.resv);
>   
>   	if (shmem->madv >= 0)
>   		shmem->madv = madv;
>   
>   	madv = shmem->madv;
>   
> -	mutex_unlock(&shmem->pages_lock);
> -
>   	return (madv >= 0);
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_madvise);
>   
> -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
> +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
>   	struct drm_device *dev = obj->dev;
>   
> +	dma_resv_assert_held(shmem->base.resv);
> +
>   	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
>   
>   	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
> @@ -474,7 +428,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
>   	kfree(shmem->sgt);
>   	shmem->sgt = NULL;
>   
> -	drm_gem_shmem_put_pages_locked(shmem);
> +	drm_gem_shmem_put_pages(shmem);
>   
>   	shmem->madv = -1;
>   
> @@ -490,17 +444,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
>   
>   	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
>   }
> -EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
> -
> -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> -{
> -	if (!mutex_trylock(&shmem->pages_lock))
> -		return false;
> -	drm_gem_shmem_purge_locked(shmem);
> -	mutex_unlock(&shmem->pages_lock);
> -
> -	return true;
> -}
>   EXPORT_SYMBOL(drm_gem_shmem_purge);
>   
>   /**
> @@ -556,7 +499,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>   	/* We don't use vmf->pgoff since that has the fake offset */
>   	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
>   
> -	mutex_lock(&shmem->pages_lock);
> +	dma_resv_lock(shmem->base.resv, NULL);
>   
>   	if (page_offset >= num_pages ||
>   	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> @@ -568,7 +511,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>   		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
>   	}
>   
> -	mutex_unlock(&shmem->pages_lock);
> +	dma_resv_unlock(shmem->base.resv);
>   
>   	return ret;
>   }
> @@ -580,7 +523,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>   
>   	drm_WARN_ON(obj->dev, obj->import_attach);
>   
> -	mutex_lock(&shmem->pages_lock);
> +	dma_resv_lock(shmem->base.resv, NULL);
>   
>   	/*
>   	 * We should have already pinned the pages when the buffer was first
> @@ -590,7 +533,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>   	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
>   		shmem->pages_use_count++;
>   
> -	mutex_unlock(&shmem->pages_lock);
> +	dma_resv_unlock(shmem->base.resv);
>   
>   	drm_gem_vm_open(vma);
>   }
> @@ -600,7 +543,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
>   	struct drm_gem_object *obj = vma->vm_private_data;
>   	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>   
> +	dma_resv_lock(shmem->base.resv, NULL);
>   	drm_gem_shmem_put_pages(shmem);
> +	dma_resv_unlock(shmem->base.resv);
> +
>   	drm_gem_vm_close(vma);
>   }
>   
> @@ -635,7 +581,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
>   		return dma_buf_mmap(obj->dma_buf, vma, 0);
>   	}
>   
> +	dma_resv_lock(shmem->base.resv, NULL);
>   	ret = drm_gem_shmem_get_pages(shmem);
> +	dma_resv_unlock(shmem->base.resv);
> +
>   	if (ret)
>   		return ret;
>   
> @@ -721,9 +670,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>   
>   	drm_WARN_ON(obj->dev, obj->import_attach);
>   
> +	dma_resv_lock(shmem->base.resv, NULL);
> +
>   	ret = drm_gem_shmem_get_pages(shmem);
>   	if (ret)
> -		return ERR_PTR(ret);
> +		goto err_unlock;
>   
>   	sgt = drm_gem_shmem_get_sg_table(shmem);
>   	if (IS_ERR(sgt)) {
> @@ -737,6 +688,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>   
>   	shmem->sgt = sgt;
>   
> +	dma_resv_unlock(shmem->base.resv);
> +
>   	return sgt;
>   
>   err_free_sgt:
> @@ -744,6 +697,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>   	kfree(sgt);
>   err_put_pages:
>   	drm_gem_shmem_put_pages(shmem);
> +err_unlock:
> +	dma_resv_unlock(shmem->base.resv);
>   	return ERR_PTR(ret);
>   }
>   EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 0f1ca0b0db49..5008f0c2428f 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
>   
>   	new_size = min(new_size, bo->base.base.size);
>   
> -	mutex_lock(&bo->base.pages_lock);
> +	dma_resv_lock(bo->base.base.resv, NULL);
>   
>   	if (bo->base.pages) {
>   		pages = bo->base.pages;
> @@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
>   		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
>   				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
>   		if (!pages) {
> -			mutex_unlock(&bo->base.pages_lock);
> +			dma_resv_unlock(bo->base.base.resv);
>   			return -ENOMEM;
>   		}
>   
> @@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
>   		struct page *page = shmem_read_mapping_page(mapping, i);
>   
>   		if (IS_ERR(page)) {
> -			mutex_unlock(&bo->base.pages_lock);
> +			dma_resv_unlock(bo->base.base.resv);
>   			return PTR_ERR(page);
>   		}
>   		pages[i] = page;
>   	}
>   
> -	mutex_unlock(&bo->base.pages_lock);
> +	dma_resv_unlock(bo->base.base.resv);
>   
>   	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
>   					new_size, GFP_KERNEL);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index abb0dadd8f63..9f3f2283b67a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -414,6 +414,10 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>   
>   	bo = to_panfrost_bo(gem_obj);
>   
> +	ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
> +	if (ret)
> +		goto out_put_object;
> +
>   	mutex_lock(&pfdev->shrinker_lock);
>   	mutex_lock(&bo->mappings.lock);
>   	if (args->madv == PANFROST_MADV_DONTNEED) {
> @@ -451,7 +455,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>   out_unlock_mappings:
>   	mutex_unlock(&bo->mappings.lock);
>   	mutex_unlock(&pfdev->shrinker_lock);
> -
> +	dma_resv_unlock(bo->base.base.resv);
> +out_put_object:
>   	drm_gem_object_put(gem_obj);
>   	return ret;
>   }
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> index bf0170782f25..6a71a2555f85 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> @@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
>   	if (!mutex_trylock(&bo->mappings.lock))
>   		return false;
>   
> -	if (!mutex_trylock(&shmem->pages_lock))
> +	if (!dma_resv_trylock(shmem->base.resv))
>   		goto unlock_mappings;
>   
>   	panfrost_gem_teardown_mappings_locked(bo);
> -	drm_gem_shmem_purge_locked(&bo->base);
> +	drm_gem_shmem_purge(&bo->base);
>   	ret = true;
>   
> -	mutex_unlock(&shmem->pages_lock);
> +	dma_resv_unlock(shmem->base.resv);
>   
>   unlock_mappings:
>   	mutex_unlock(&bo->mappings.lock);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index 666a5e53fe19..0679df57f394 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -443,6 +443,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   	struct panfrost_gem_mapping *bomapping;
>   	struct panfrost_gem_object *bo;
>   	struct address_space *mapping;
> +	struct drm_gem_object *obj;
>   	pgoff_t page_offset;
>   	struct sg_table *sgt;
>   	struct page **pages;
> @@ -465,15 +466,16 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   	page_offset = addr >> PAGE_SHIFT;
>   	page_offset -= bomapping->mmnode.start;
>   
> -	mutex_lock(&bo->base.pages_lock);
> +	obj = &bo->base.base;
> +
> +	dma_resv_lock(obj->resv, NULL);
>   
>   	if (!bo->base.pages) {
>   		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
>   				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
>   		if (!bo->sgts) {
> -			mutex_unlock(&bo->base.pages_lock);
>   			ret = -ENOMEM;
> -			goto err_bo;
> +			goto err_unlock;
>   		}
>   
>   		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
> @@ -481,9 +483,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   		if (!pages) {
>   			kvfree(bo->sgts);
>   			bo->sgts = NULL;
> -			mutex_unlock(&bo->base.pages_lock);
>   			ret = -ENOMEM;
> -			goto err_bo;
> +			goto err_unlock;
>   		}
>   		bo->base.pages = pages;
>   		bo->base.pages_use_count = 1;
> @@ -491,7 +492,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   		pages = bo->base.pages;
>   		if (pages[page_offset]) {
>   			/* Pages are already mapped, bail out. */
> -			mutex_unlock(&bo->base.pages_lock);
>   			goto out;
>   		}
>   	}
> @@ -502,14 +502,11 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
>   		pages[i] = shmem_read_mapping_page(mapping, i);
>   		if (IS_ERR(pages[i])) {
> -			mutex_unlock(&bo->base.pages_lock);
>   			ret = PTR_ERR(pages[i]);
>   			goto err_pages;
>   		}
>   	}
>   
> -	mutex_unlock(&bo->base.pages_lock);
> -
>   	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
>   	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
>   					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
> @@ -528,6 +525,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
>   
>   out:
> +	dma_resv_unlock(obj->resv);
> +
>   	panfrost_gem_mapping_put(bomapping);
>   
>   	return 0;
> @@ -536,6 +535,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   	sg_free_table(sgt);
>   err_pages:
>   	drm_gem_shmem_put_pages(&bo->base);
> +err_unlock:
> +	dma_resv_unlock(obj->resv);
>   err_bo:
>   	panfrost_gem_mapping_put(bomapping);
>   	return ret;
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5994fed5e327..20ddcd799df9 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
>   	 */
>   	struct drm_gem_object base;
>   
> -	/**
> -	 * @pages_lock: Protects the page table and use count
> -	 */
> -	struct mutex pages_lock;
> -
>   	/**
>   	 * @pages: Page table
>   	 */
> @@ -65,11 +60,6 @@ struct drm_gem_shmem_object {
>   	 */
>   	struct sg_table *sgt;
>   
> -	/**
> -	 * @vmap_lock: Protects the vmap address and use count
> -	 */
> -	struct mutex vmap_lock;
> -
>   	/**
>   	 * @vaddr: Kernel virtual address of the backing memory
>   	 */
> @@ -109,7 +99,6 @@ struct drm_gem_shmem_object {
>   struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
>   void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
>   
> -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
>   void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
>   int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
>   void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
> @@ -128,8 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
>   		!shmem->base.dma_buf && !shmem->base.import_attach;
>   }
>   
> -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
> -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
> +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
>   
>   struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
>   struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock
  2023-02-17 12:52   ` Thomas Zimmermann
@ 2023-02-17 13:33     ` Dmitry Osipenko
  0 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2023-02-17 13:33 UTC (permalink / raw)
  To: Thomas Zimmermann, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Maarten Lankhorst, Maxime Ripard, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

On 2/17/23 15:52, Thomas Zimmermann wrote:
> Hi
> 
> Am 08.01.23 um 22:04 schrieb Dmitry Osipenko:
>> Replace all drm-shmem locks with a GEM reservation lock. This makes locks
>> consistent with dma-buf locking convention where importers are
>> responsible
>> for holding reservation lock for all operations performed over dma-bufs,
>> preventing deadlock between dma-buf importers and exporters.
>>
>> Suggested-by: Daniel Vetter <daniel@ffwll.ch>
>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> 
> How much testing has this patch seen?
> 
> I'm asking because when I tried to fix the locking in this code, I had
> to review every importer to make sure that it aquired the lock. Has this
> problem been resolved?

The dma-buf locking rules was merged to v6.2 kernel.

I tested all the available importers that use drm-shmem. There were
deadlocks and lockdep warnings while I was working/testing the importer
paths in the past, feel confident that the code paths were tested well
enough. Note that Lima and Panfrost always use the importer paths in
case of display because display is a separate driver.

I checked that:

- desktop environment works
- 3d works
- video dec (v4l2) dmabuf sharing works
- shrinker works

I.e. tested it all with VirtIO-GPU, Panfrost and Lima drivers. For
VirtIO-GPU importer paths aren't relevant because it can only share bufs
with other virtio driver and in upstream we only have VirtIO-GPU driver
supporting dma-bufs.

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-02-17 13:28 ` Thomas Zimmermann
@ 2023-02-17 13:41   ` Dmitry Osipenko
  2023-02-27  4:19     ` Dmitry Osipenko
  0 siblings, 1 reply; 41+ messages in thread
From: Dmitry Osipenko @ 2023-02-17 13:41 UTC (permalink / raw)
  To: Thomas Zimmermann, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Maarten Lankhorst, Maxime Ripard, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

On 2/17/23 16:28, Thomas Zimmermann wrote:
> Hi,
> 
> I looked through the series. Most of the patches should have an r-b or
> a-b at this point. I can't say much about patch 2 and had questions
> about others.
> 
> Maybe you can already land patches 2, and 4 to 6? They look independent
> from the shrinker changes. You could also attempt to land the locking
> changes in patch 7. They need to get testing. I'll send you an a-b for
> the patch.

Thank you, I'll apply the acked patches and then make v11 with the
remaining patches updated.

Not sure if it will be possible to split patch 8, but I'll think on it
for v11.

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 09/11] drm/gem: Add drm_gem_pin_unlocked()
  2023-01-08 21:04 ` [PATCH v10 09/11] drm/gem: Add drm_gem_pin_unlocked() Dmitry Osipenko
@ 2023-02-17 13:42   ` Thomas Zimmermann
  0 siblings, 0 replies; 41+ messages in thread
From: Thomas Zimmermann @ 2023-02-17 13:42 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: kernel, linux-kernel, dri-devel, virtualization


[-- Attachment #1.1: Type: text/plain, Size: 2333 bytes --]

I forgot this change.

Am 08.01.23 um 22:04 schrieb Dmitry Osipenko:
> Add unlocked variants of drm_gem_un/pin() functions. These new helpers
> will take care of GEM dma-reservation locking for DRM drivers.
> 
> VirtIO-GPU driver will use these helpers to pin shmem framebuffers,
> preventing them from eviction during scanout.
> 
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

Best regards
Thomas

> ---
>   drivers/gpu/drm/drm_gem.c | 29 +++++++++++++++++++++++++++++
>   include/drm/drm_gem.h     |  3 +++
>   2 files changed, 32 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index dbb48fc9dff3..0b8d3da985c7 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1167,6 +1167,35 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>   		obj->funcs->unpin(obj);
>   }
>   
> +int drm_gem_pin_unlocked(struct drm_gem_object *obj)
> +{
> +	int ret;
> +
> +	if (!obj->funcs->pin)
> +		return 0;
> +
> +	ret = dma_resv_lock_interruptible(obj->resv, NULL);
> +	if (ret)
> +		return ret;
> +
> +	ret = obj->funcs->pin(obj);
> +	dma_resv_unlock(obj->resv);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(drm_gem_pin_unlocked);
> +
> +void drm_gem_unpin_unlocked(struct drm_gem_object *obj)
> +{
> +	if (!obj->funcs->unpin)
> +		return;
> +
> +	dma_resv_lock(obj->resv, NULL);
> +	obj->funcs->unpin(obj);
> +	dma_resv_unlock(obj->resv);
> +}
> +EXPORT_SYMBOL(drm_gem_unpin_unlocked);
> +
>   int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
>   {
>   	int ret;
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index 8e5c22f25691..6f6d96f79a67 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -493,4 +493,7 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
>   
>   bool drm_gem_object_evict(struct drm_gem_object *obj);
>   
> +int drm_gem_pin_unlocked(struct drm_gem_object *obj);
> +void drm_gem_unpin_unlocked(struct drm_gem_object *obj);
> +
>   #endif /* __DRM_GEM_H__ */

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-02-17 13:41   ` Dmitry Osipenko
@ 2023-02-27  4:19     ` Dmitry Osipenko
  2023-02-27 10:37       ` Jani Nikula
  0 siblings, 1 reply; 41+ messages in thread
From: Dmitry Osipenko @ 2023-02-27  4:19 UTC (permalink / raw)
  To: Thomas Zimmermann, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Maarten Lankhorst, Maxime Ripard, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

On 2/17/23 16:41, Dmitry Osipenko wrote:
> On 2/17/23 16:28, Thomas Zimmermann wrote:
>> Hi,
>>
>> I looked through the series. Most of the patches should have an r-b or
>> a-b at this point. I can't say much about patch 2 and had questions
>> about others.
>>
>> Maybe you can already land patches 2, and 4 to 6? They look independent
>> from the shrinker changes. You could also attempt to land the locking
>> changes in patch 7. They need to get testing. I'll send you an a-b for
>> the patch.
> 
> Thank you, I'll apply the acked patches and then make v11 with the
> remaining patches updated.
> 
> Not sure if it will be possible to split patch 8, but I'll think on it
> for v11.
> 

Applied patches 1-2 to misc-fixes and patches 3-7 to misc-next, with the
review comments addressed.

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 01/11] drm/msm/gem: Prevent blocking within shrinker loop
  2023-02-17 12:02   ` Thomas Zimmermann
@ 2023-02-27  4:27     ` Dmitry Osipenko
  0 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2023-02-27  4:27 UTC (permalink / raw)
  To: Thomas Zimmermann, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Maarten Lankhorst, Maxime Ripard, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

On 2/17/23 15:02, Thomas Zimmermann wrote:
> Hi
> 
> Am 08.01.23 um 22:04 schrieb Dmitry Osipenko:
>> Consider this scenario:
>>
>> 1. APP1 continuously creates lots of small GEMs
>> 2. APP2 triggers `drop_caches`
>> 3. Shrinker starts to evict APP1 GEMs, while APP1 produces new purgeable
>>     GEMs
>> 4. msm_gem_shrinker_scan() returns non-zero number of freed pages
>>     and causes shrinker to try shrink more
>> 5. msm_gem_shrinker_scan() returns non-zero number of freed pages again,
>>     goto 4
>> 6. The APP2 is blocked in `drop_caches` until APP1 stops producing
>>     purgeable GEMs
>>
>> To prevent this blocking scenario, check number of remaining pages
>> that GPU shrinker couldn't release due to a GEM locking contention
>> or shrinking rejection. If there are no remaining pages left to shrink,
>> then there is no need to free up more pages and shrinker may break out
>> from the loop.
>>
>> This problem was found during shrinker/madvise IOCTL testing of
>> virtio-gpu driver. The MSM driver is affected in the same way.
>>
>> Reviewed-by: Rob Clark <robdclark@gmail.com>
>> Fixes: b352ba54a820 ("drm/msm/gem: Convert to using drm_gem_lru")
>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>> ---
>>   drivers/gpu/drm/drm_gem.c              | 9 +++++++--
>>   drivers/gpu/drm/msm/msm_gem_shrinker.c | 8 ++++++--
>>   include/drm/drm_gem.h                  | 4 +++-
>>   3 files changed, 16 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>> index 59a0bb5ebd85..c6bca5ac6e0f 100644
>> --- a/drivers/gpu/drm/drm_gem.c
>> +++ b/drivers/gpu/drm/drm_gem.c
>> @@ -1388,10 +1388,13 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
>>    *
>>    * @lru: The LRU to scan
>>    * @nr_to_scan: The number of pages to try to reclaim
>> + * @remaining: The number of pages left to reclaim
>>    * @shrink: Callback to try to shrink/reclaim the object.
>>    */
>>   unsigned long
>> -drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
>> +drm_gem_lru_scan(struct drm_gem_lru *lru,
>> +         unsigned int nr_to_scan,
>> +         unsigned long *remaining,
>>            bool (*shrink)(struct drm_gem_object *obj))
>>   {
>>       struct drm_gem_lru still_in_lru;
>> @@ -1430,8 +1433,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
>> unsigned nr_to_scan,
>>            * hit shrinker in response to trying to get backing pages
>>            * for this obj (ie. while it's lock is already held)
>>            */
>> -        if (!dma_resv_trylock(obj->resv))
>> +        if (!dma_resv_trylock(obj->resv)) {
>> +            *remaining += obj->size >> PAGE_SHIFT;
>>               goto tail;
>> +        }
>>             if (shrink(obj)) {
>>               freed += obj->size >> PAGE_SHIFT;
>> diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c
>> b/drivers/gpu/drm/msm/msm_gem_shrinker.c
>> index 051bdbc093cf..b7c1242014ec 100644
>> --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
>> +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
>> @@ -116,12 +116,14 @@ msm_gem_shrinker_scan(struct shrinker *shrinker,
>> struct shrink_control *sc)
>>       };
>>       long nr = sc->nr_to_scan;
>>       unsigned long freed = 0;
>> +    unsigned long remaining = 0;
>>         for (unsigned i = 0; (nr > 0) && (i < ARRAY_SIZE(stages)); i++) {
>>           if (!stages[i].cond)
>>               continue;
>>           stages[i].freed =
>> -            drm_gem_lru_scan(stages[i].lru, nr, stages[i].shrink);
>> +            drm_gem_lru_scan(stages[i].lru, nr, &remaining,
> 
> This function relies in remaining being pre-initialized. That's not
> obvious and error prone. At least, pass-in something like
> &stages[i].remaining that is then initialized internally by
> drm_gem_lru_scan() to zero. And similar to freed, sum up the individual
> stages' remaining here.
> 
> TBH I somehow don't like the overall design of how all these functions
> interact with each other. But I also can't really point to the actual
> problem. So it's best to take what you have here; maybe with the change
> I proposed.
> 
> Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

I had to keep to the remaining being pre-initialized because moving the
initialization was hurting the rest of the code. Though, updated the MSM
patch to use &stages[i].remaining

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 08/11] drm/shmem-helper: Add memory shrinker
  2023-02-17 13:19   ` Thomas Zimmermann
@ 2023-02-27  4:34     ` Dmitry Osipenko
  0 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2023-02-27  4:34 UTC (permalink / raw)
  To: Thomas Zimmermann, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gustavo Padovan,
	Daniel Stone, Maarten Lankhorst, Maxime Ripard, Rob Clark,
	Sumit Semwal, Christian König, Qiang Yu, Steven Price,
	Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
	Abhinav Kumar
  Cc: dri-devel, linux-kernel, kernel, virtualization

On 2/17/23 16:19, Thomas Zimmermann wrote:
>> +/**
>> + * drm_gem_shmem_swap_in() - Moves shmem GEM back to memory and enables
>> + *                           hardware access to the memory.
> 
> Do we have a better name than _swap_in()? I suggest
> drm_gem_shmem_unevict(), which suggest that it's the inverse to _evict().

The canonical naming scheme used by TTM and other DRM drivers is
_swapin(), without the underscore. I'll use that variant in v11 for the
naming consistency with the rest of DRM code.

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-02-27  4:19     ` Dmitry Osipenko
@ 2023-02-27 10:37       ` Jani Nikula
  2023-02-27 11:00         ` Dmitry Osipenko
  0 siblings, 1 reply; 41+ messages in thread
From: Jani Nikula @ 2023-02-27 10:37 UTC (permalink / raw)
  To: Dmitry Osipenko, Thomas Zimmermann, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Daniel Vetter, Daniel Almeida,
	Gustavo Padovan, Daniel Stone, Maarten Lankhorst, Maxime Ripard,
	Rob Clark, Sumit Semwal, Christian König, Qiang Yu,
	Steven Price, Alyssa Rosenzweig, Rob Herring, Sean Paul,
	Dmitry Baryshkov, Abhinav Kumar
  Cc: kernel, linux-kernel, dri-devel, virtualization

On Mon, 27 Feb 2023, Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> On 2/17/23 16:41, Dmitry Osipenko wrote:
>> On 2/17/23 16:28, Thomas Zimmermann wrote:
>>> Hi,
>>>
>>> I looked through the series. Most of the patches should have an r-b or
>>> a-b at this point. I can't say much about patch 2 and had questions
>>> about others.
>>>
>>> Maybe you can already land patches 2, and 4 to 6? They look independent
>>> from the shrinker changes. You could also attempt to land the locking
>>> changes in patch 7. They need to get testing. I'll send you an a-b for
>>> the patch.
>> 
>> Thank you, I'll apply the acked patches and then make v11 with the
>> remaining patches updated.
>> 
>> Not sure if it will be possible to split patch 8, but I'll think on it
>> for v11.
>> 
>
> Applied patches 1-2 to misc-fixes and patches 3-7 to misc-next, with the
> review comments addressed.

Please resolve the drm-tip rebuild conflict [1].

BR,
Jani.


[1] https://paste.debian.net/1272275/


-- 
Jani Nikula, Intel Open Source Graphics Center

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
  2023-02-27 10:37       ` Jani Nikula
@ 2023-02-27 11:00         ` Dmitry Osipenko
  0 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2023-02-27 11:00 UTC (permalink / raw)
  To: Jani Nikula, Thomas Zimmermann
  Cc: kernel, linux-kernel, dri-devel, virtualization, David Airlie,
	Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
	Daniel Almeida, Gustavo Padovan, Daniel Stone, Maarten Lankhorst,
	Maxime Ripard, Rob Clark, Sumit Semwal, Christian König,
	Qiang Yu, Steven Price, Alyssa Rosenzweig, Rob Herring,
	Sean Paul, Dmitry Baryshkov, Abhinav Kumar

On 2/27/23 13:37, Jani Nikula wrote:
> On Mon, 27 Feb 2023, Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>> On 2/17/23 16:41, Dmitry Osipenko wrote:
>>> On 2/17/23 16:28, Thomas Zimmermann wrote:
>>>> Hi,
>>>>
>>>> I looked through the series. Most of the patches should have an r-b or
>>>> a-b at this point. I can't say much about patch 2 and had questions
>>>> about others.
>>>>
>>>> Maybe you can already land patches 2, and 4 to 6? They look independent
>>>> from the shrinker changes. You could also attempt to land the locking
>>>> changes in patch 7. They need to get testing. I'll send you an a-b for
>>>> the patch.
>>>
>>> Thank you, I'll apply the acked patches and then make v11 with the
>>> remaining patches updated.
>>>
>>> Not sure if it will be possible to split patch 8, but I'll think on it
>>> for v11.
>>>
>>
>> Applied patches 1-2 to misc-fixes and patches 3-7 to misc-next, with the
>> review comments addressed.
> 
> Please resolve the drm-tip rebuild conflict [1].
> 
> BR,
> Jani.
> 
> 
> [1] https://paste.debian.net/1272275/

Don't see that conflict locally, perhaps somebody already fixed it?

-- 
Best regards,
Dmitry


^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2023-02-27 11:02 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-08 21:04 [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
2023-01-08 21:04 ` [PATCH v10 01/11] drm/msm/gem: Prevent blocking within shrinker loop Dmitry Osipenko
2023-02-17 12:02   ` Thomas Zimmermann
2023-02-27  4:27     ` Dmitry Osipenko
2023-01-08 21:04 ` [PATCH v10 02/11] drm/panfrost: Don't sync rpm suspension after mmu flushing Dmitry Osipenko
2023-01-08 21:04 ` [PATCH v10 03/11] drm/gem: Add evict() callback to drm_gem_object_funcs Dmitry Osipenko
2023-02-17 12:23   ` Thomas Zimmermann
2023-01-08 21:04 ` [PATCH v10 04/11] drm/shmem: Put booleans in the end of struct drm_gem_shmem_object Dmitry Osipenko
2023-02-17 12:25   ` Thomas Zimmermann
2023-01-08 21:04 ` [PATCH v10 05/11] drm/shmem: Switch to use drm_* debug helpers Dmitry Osipenko
2023-01-26 12:15   ` Gerd Hoffmann
2023-02-17 12:28   ` Thomas Zimmermann
2023-01-08 21:04 ` [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs Dmitry Osipenko
2023-01-26 12:17   ` Gerd Hoffmann
2023-01-26 12:24     ` Dmitry Osipenko
2023-01-27  8:06       ` Gerd Hoffmann
2023-02-17 12:41   ` Thomas Zimmermann
2023-01-08 21:04 ` [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock Dmitry Osipenko
2023-02-17 12:52   ` Thomas Zimmermann
2023-02-17 13:33     ` Dmitry Osipenko
2023-02-17 13:29   ` Thomas Zimmermann
2023-01-08 21:04 ` [PATCH v10 08/11] drm/shmem-helper: Add memory shrinker Dmitry Osipenko
2023-02-17 13:19   ` Thomas Zimmermann
2023-02-27  4:34     ` Dmitry Osipenko
2023-01-08 21:04 ` [PATCH v10 09/11] drm/gem: Add drm_gem_pin_unlocked() Dmitry Osipenko
2023-02-17 13:42   ` Thomas Zimmermann
2023-01-08 21:04 ` [PATCH v10 10/11] drm/virtio: Support memory shrinking Dmitry Osipenko
2023-01-27  8:04   ` Gerd Hoffmann
2023-01-08 21:04 ` [PATCH v10 11/11] drm/panfrost: Switch to generic memory shrinker Dmitry Osipenko
2023-01-25 22:55 ` [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
2023-01-27  8:13   ` Gerd Hoffmann
2023-01-30 12:02     ` Dmitry Osipenko
2023-02-16 12:15       ` Daniel Vetter
2023-02-16 13:08         ` AngeloGioacchino Del Regno
2023-02-16 20:43         ` Dmitry Osipenko
2023-02-16 22:07           ` Daniel Vetter
2023-02-17 13:28 ` Thomas Zimmermann
2023-02-17 13:41   ` Dmitry Osipenko
2023-02-27  4:19     ` Dmitry Osipenko
2023-02-27 10:37       ` Jani Nikula
2023-02-27 11:00         ` Dmitry Osipenko

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).