* [PATCH v3 00/15] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

Hello,

This patchset introduces a memory shrinker for the VirtIO-GPU DRM driver.
Under memory pressure, the shrinker releases BOs that userspace has marked
as "not needed" via the new madvise IOCTL, and it also evicts idling BOs to
swap. The userspace in question is the Mesa VirGL driver, which marks cached
BOs as "not needed", allowing the kernel driver to release the memory of
cached shmem BOs in lowmem situations and thereby prevent OOM kills.
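
For illustration, here is a rough sketch of how userspace (the VirGL driver)
might drive such a madvise IOCTL. The structure layout and names below are
assumptions made for this sketch; the authoritative definitions live in the
include/uapi/drm/virtgpu_drm.h hunk of this series:

  /* hypothetical usage sketch, not verbatim UAPI */
  struct drm_virtgpu_madvise args = {
          .bo_handle = bo_handle,         /* GEM handle of a cached BO */
          .madv = VIRTGPU_MADV_DONTNEED,  /* kernel may purge it under pressure */
  };
  drmIoctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args);

  /* before reusing the BO, mark it as needed again and check it survived */
  args.madv = VIRTGPU_MADV_WILLNEED;
  drmIoctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args);
  if (!args.retained)
          recreate_bo_contents();         /* the BO's pages were purged */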

This patchset adds memory purging and eviction support to the VirtIO-GPU driver.

The Panfrost driver is switched to the generic memory shrinker. Eviction
support will come later, after the blocker bug in Panfrost is resolved.

This patchset also includes a couple of improvements and fixes for various
minor things that I found while working on the shrinker.

The Mesa and IGT patches will be kept on hold until this kernel series is
approved and merged.

This patchset was tested using QEMU and crosvm, with the IOMMU both
disabled and enabled.

Mesa: https://gitlab.freedesktop.org/digetx/mesa/-/commits/virgl-madvise
IGT:  https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/virtio-madvise
      https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/panfrost-madvise

Changelog:

v3: - Hardened the shrinker's count() callback with READ_ONCE(), since we
      don't use an atomic type for counting and the compiler is technically
      free to re-fetch the counter variable (see the sketch after this
      changelog).

    - "Correct drm_gem_shmem_get_sg_table() error handling" now uses
      PTR_ERR_OR_ZERO(), fixing typo that was made in v2.

    - Removed the obsolete shrinker from the Panfrost driver, which I
      accidentally forgot to do in v2 and which Alyssa Rosenzweig noticed.

    - CCed stable kernels on all patches that contain fixes, even the minor
      ones, as suggested by Emil Velikov, and added his r-b to those patches.

    - Added a t-b from Steven Price to the Panfrost shrinker patch.

    - Corrected the doc-comment of drm_gem_shmem_object.madv, as suggested
      by Steven Price. The comment now says that madv=1 means "object is
      purged" instead of saying that the value is unused.

    - Added more doc-comments to the new shmem shrinker API.

    - The "Improve DMA API usage for shmem BOs" patch got more improvements
      by removing the obsoleted drm_dev_set_unique() quirk and its comment.

    - Added a patch that makes the VirtIO-GPU driver use the common
      dev_is_pci() helper, as suggested by Robin Murphy.

    - Added new "drm/shmem-helper: Take GEM reservation lock instead of
      drm_gem_shmem locks" patch, which was suggested by Daniel Vetter.

    - Added new "drm/virtio: Simplify error handling of
      virtio_gpu_object_create()" patch.

    - Improved the "Correct doc-comment of drm_gem_shmem_get_sg_table()"
      patch, as suggested by Daniel Vetter, by stating that the function
      returns ERR_PTR() and not an errno.

    - virtio_gpu_purge_object() is now fenced properly; it turned out that
      virtio_gpu_notify() doesn't do fencing, as I had previously assumed.
      Stress testing of memory eviction revealed this.

    - Added a new patch that corrects virtio_gpu_plane_cleanup_fb() to use
      the appropriate atomic plane state.

    - The SHMEM shrinker gained eviction support.

    - The VirtIO-GPU driver now supports memory eviction. It's enabled for
      non-blob GEMs only, i.e. for VirGL. Blobs don't support dynamic
      attaching/detaching of guest memory, so enabling eviction for them is
      not trivial.

    - Added a patch that removes the obsolete drm_gem_shmem_purge().

    - Added a patch that makes drm_gem_shmem_get_pages() private.

    - Added a patch that fixes a lockup on dma_resv_reserve_fences() error.

v2: - Improved the shrinker by using more fine-grained locking to reduce
      contention during the scan of objects, and dropped locking from the
      'counting' callback by tracking the count of shrinkable pages. This
      was suggested by Rob Clark in a comment on v1.

    - Factored out the common shrinker code into drm_gem_shmem_helper.c
      and switched the Panfrost driver to the new common memory shrinker.
      This was proposed by Thomas Zimmermann in the prototype series that
      he shared with us in a comment on v1. Note that I only compile-tested
      the Panfrost driver.

    - The shrinker now takes object_name_lock during scan to prevent racing
      with dma-buf exporting.

    - The shrinker now takes vmap_lock during scan to prevent racing with
      the shmem vmap/unmap code.

    - Added "Correct doc-comment of drm_gem_shmem_get_sg_table()" patch,
      which I sent out previously as a standalone change, since the
      drm_gem_shmem_helper.c is now touched by this patchset anyways and
      it doesn't hurt to group all the patches together.
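
As referenced in the v3 note on READ_ONCE() above, the count() callback of
the shrinker reads its page counter along these lines (a minimal sketch with
assumed names, not the actual helper code):

  static unsigned long shmem_shrinker_count_objects(struct shrinker *shrinker,
                                                    struct shrink_control *sc)
  {
          struct my_shrinker *gs = container_of(shrinker, struct my_shrinker, base);

          /* 'shrinkable_count' is a plain (non-atomic) counter updated
           * under a lock elsewhere; READ_ONCE() forces a single load so
           * the compiler can't legally re-fetch the variable and observe
           * two different values.
           */
          unsigned long count = READ_ONCE(gs->shrinkable_count);

          return count ?: SHRINK_EMPTY;
  }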

Dmitry Osipenko (15):
  drm/virtio: Correct drm_gem_shmem_get_sg_table() error handling
  drm/virtio: Check whether transferred 2D BO is shmem
  drm/virtio: Unlock GEM reservations on virtio_gpu_object_shmem_init()
    error
  drm/virtio: Unlock reservations on dma_resv_reserve_fences() error
  drm/virtio: Use appropriate atomic state in
    virtio_gpu_plane_cleanup_fb()
  drm/virtio: Simplify error handling of virtio_gpu_object_create()
  drm/virtio: Improve DMA API usage for shmem BOs
  drm/virtio: Use dev_is_pci()
  drm/shmem-helper: Correct doc-comment of drm_gem_shmem_get_sg_table()
  drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks
  drm/shmem-helper: Add generic memory shrinker
  drm/virtio: Support memory shrinking
  drm/panfrost: Switch to generic memory shrinker
  drm/shmem-helper: Make drm_gem_shmem_get_pages() private
  drm/shmem-helper: Remove drm_gem_shmem_purge()

 drivers/gpu/drm/drm_gem_shmem_helper.c        | 815 ++++++++++++++++--
 drivers/gpu/drm/lima/lima_gem.c               |   8 +-
 drivers/gpu/drm/panfrost/Makefile             |   1 -
 drivers/gpu/drm/panfrost/panfrost_device.h    |   4 -
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  19 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c       |  30 +-
 drivers/gpu/drm/panfrost/panfrost_gem.h       |   9 -
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  | 122 ---
 drivers/gpu/drm/panfrost/panfrost_job.c       |  18 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c       |  15 +-
 drivers/gpu/drm/virtio/virtgpu_drv.c          |  53 +-
 drivers/gpu/drm/virtio/virtgpu_drv.h          |  20 +-
 drivers/gpu/drm/virtio/virtgpu_gem.c          |  50 +-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  37 +
 drivers/gpu/drm/virtio/virtgpu_kms.c          |  16 +-
 drivers/gpu/drm/virtio/virtgpu_object.c       | 204 +++--
 drivers/gpu/drm/virtio/virtgpu_plane.c        |  23 +-
 drivers/gpu/drm/virtio/virtgpu_vq.c           |  55 +-
 include/drm/drm_device.h                      |   4 +
 include/drm/drm_gem.h                         |  35 +
 include/drm/drm_gem_shmem_helper.h            | 114 ++-
 include/uapi/drm/virtgpu_drm.h                |  14 +
 22 files changed, 1278 insertions(+), 388 deletions(-)
 delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c

-- 
2.35.1


* [PATCH v3 01/15] drm/virtio: Correct drm_gem_shmem_get_sg_table() error handling
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: dri-devel, linux-kernel, Gustavo Padovan, Daniel Stone,
	virtualization, Dmitry Osipenko, Dmitry Osipenko

drm_gem_shmem_get_sg_table() never returns NULL on error; it returns an
ERR_PTR()-encoded error instead. Correct the error handling to avoid a
crash on OOM.
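
For context, the general pattern being applied here (a generic sketch, not
code from this driver): functions that return ERR_PTR()-encoded pointers
never return NULL on failure, so a NULL check silently accepts an error
pointer and later dereferences it.

  sgt = get_sg_table(obj);       /* returns ERR_PTR(-errno) on failure, never NULL */
  ret = PTR_ERR_OR_ZERO(sgt);    /* 0 on success, negative errno on error */
  if (ret) {
          /* sgt holds an encoded errno here, not a usable pointer */
          undo_pin(obj);
          return ret;
  }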

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index f293e6ad52da..3d0c8d4d1c20 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -168,9 +168,11 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	 * since virtio_gpu doesn't support dma-buf import from other devices.
 	 */
 	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
-	if (!shmem->pages) {
+	ret = PTR_ERR_OR_ZERO(shmem->pages);
+	if (ret) {
 		drm_gem_shmem_unpin(&bo->base);
-		return -EINVAL;
+		shmem->pages = NULL;
+		return ret;
 	}
 
 	if (use_dma_api) {
-- 
2.35.1


* [PATCH v3 02/15] drm/virtio: Check whether transferred 2D BO is shmem
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

A transferred 2D BO must always be a shmem BO. Add a check for that to
prevent a NULL dereference if userspace passes a VRAM BO.

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_vq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 7c052efe8836..2edf31806b74 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -595,7 +595,7 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
 	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 
-	if (use_dma_api)
+	if (virtio_gpu_is_shmem(bo) && use_dma_api)
 		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
 					    shmem->pages, DMA_TO_DEVICE);
 
-- 
2.35.1


* [PATCH v3 03/15] drm/virtio: Unlock GEM reservations on virtio_gpu_object_shmem_init() error
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

Unlock reservations in the error path of virtio_gpu_object_create() to
silence the debug warning splat produced by ww_mutex_destroy(&obj->lock)
when the GEM object is released with the lock still held.

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 3d0c8d4d1c20..21c19cdedce0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -250,6 +250,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
 	if (ret != 0) {
+		if (fence)
+			virtio_gpu_array_unlock_resv(objs);
 		virtio_gpu_array_put_free(objs);
 		virtio_gpu_free_object(&shmem_obj->base);
 		return ret;
-- 
2.35.1


* [PATCH v3 04/15] drm/virtio: Unlock reservations on dma_resv_reserve_fences() error
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

Unlock reservations on dma_resv_reserve_fences() error to fix recursive
locking of the reservations when this error occurs.
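
The invariant being restored is the usual "return with all reservations
locked, or with none" contract of a multi-object locking helper; a generic
sketch with illustrative names:

  ret = lock_all_reservations(objs);             /* all resv locks now held */
  if (ret)
          return ret;

  for (i = 0; i < nents; i++) {
          ret = reserve_fence_slot(objs[i]);
          if (ret) {
                  unlock_all_reservations(objs); /* don't fail with locks held */
                  return ret;                    /* caller may safely retry */
          }
  }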

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_gem.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 580a78809836..7db48d17ee3a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -228,8 +228,10 @@ int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
 
 	for (i = 0; i < objs->nents; ++i) {
 		ret = dma_resv_reserve_fences(objs->objs[i]->resv, 1);
-		if (ret)
+		if (ret) {
+			virtio_gpu_array_unlock_resv(objs);
 			return ret;
+		}
 	}
 	return ret;
 }
-- 
2.35.1


* [PATCH v3 05/15] drm/virtio: Use appropriate atomic state in virtio_gpu_plane_cleanup_fb()
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: dri-devel, linux-kernel, Gustavo Padovan, Daniel Stone,
	virtualization, Dmitry Osipenko, Dmitry Osipenko

Make virtio_gpu_plane_cleanup_fb() clean up the state that the DRM core
asks it to clean up, rather than the current plane's state. Normally the
older atomic state is cleaned up, but the newer state can also be cleaned
up in the case of aborted commits.

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_plane.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 6d3cc9e238a4..7148f3813d8b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -266,14 +266,14 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 }
 
 static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
-					struct drm_plane_state *old_state)
+					struct drm_plane_state *state)
 {
 	struct virtio_gpu_framebuffer *vgfb;
 
-	if (!plane->state->fb)
+	if (!state->fb)
 		return;
 
-	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+	vgfb = to_virtio_gpu_framebuffer(state->fb);
 	if (vgfb->fence) {
 		dma_fence_put(&vgfb->fence->f);
 		vgfb->fence = NULL;
-- 
2.35.1


* [PATCH v3 06/15] drm/virtio: Simplify error handling of virtio_gpu_object_create()
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

Change the order of SHMEM initialization and reservation locking to make
the code a tad cleaner and to prepare for transitioning the common GEM
SHMEM code to using the GEM's reservation lock instead of shmem.page_lock.

There is no need to hold the reservation lock while allocating the SHMEM
pages, because the lock is only needed to avoid racing with the
asynchronous host-side allocation. Hence we can safely move the SHMEM
initialization out of the reservation lock.

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 21c19cdedce0..18f70ef6b4d0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -236,6 +236,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 	bo->dumb = params->dumb;
 
+	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+	if (ret != 0)
+		goto err_put_id;
+
 	if (fence) {
 		ret = -ENOMEM;
 		objs = virtio_gpu_array_alloc(1);
@@ -248,15 +252,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			goto err_put_objs;
 	}
 
-	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
-	if (ret != 0) {
-		if (fence)
-			virtio_gpu_array_unlock_resv(objs);
-		virtio_gpu_array_put_free(objs);
-		virtio_gpu_free_object(&shmem_obj->base);
-		return ret;
-	}
-
 	if (params->blob) {
 		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
 			bo->guest_blob = true;
-- 
2.35.1


* [PATCH v3 07/15] drm/virtio: Improve DMA API usage for shmem BOs
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

The DRM API requires the DRM driver to be backed by a device that can be
used for generic DMA operations. The VirtIO-GPU device can't perform DMA
operations when it uses the PCI transport, because the PCI device driver
creates a virtual VirtIO-GPU device that isn't associated with the PCI
device. Use the PCI GPU device as the DRM device instead of the VirtIO-GPU
device, and drop the DMA-related hacks from the VirtIO-GPU driver.
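
The device topology this change relies on, in a simplified sketch (the
comments reflect my reading of the patch, not authoritative documentation):

  /* virtio-GPU over PCI, simplified:
   *
   *   pci_dev->dev                 <- real DMA-capable device (DMA ops, DMA mask)
   *       `-- virtio_device->dev   <- purely virtual, no DMA ops
   *
   * so the DRM device is now allocated against the DMA-capable parent:
   */
  dev = drm_dev_alloc(&driver, vdev->dev.parent);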

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.c    | 51 ++++++----------------
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  5 +--
 drivers/gpu/drm/virtio/virtgpu_kms.c    |  7 ++--
 drivers/gpu/drm/virtio/virtgpu_object.c | 56 +++++--------------------
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 13 +++---
 5 files changed, 32 insertions(+), 100 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 5f25a8d15464..0141b7df97ec 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -46,12 +46,11 @@ static int virtio_gpu_modeset = -1;
 MODULE_PARM_DESC(modeset, "Disable/Enable modesetting");
 module_param_named(modeset, virtio_gpu_modeset, int, 0400);
 
-static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vdev)
+static int virtio_gpu_pci_quirk(struct drm_device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(vdev->dev.parent);
+	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	const char *pname = dev_name(&pdev->dev);
 	bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
-	char unique[20];
 	int ret;
 
 	DRM_INFO("pci: %s detected at %s\n",
@@ -63,39 +62,7 @@ static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vd
 			return ret;
 	}
 
-	/*
-	 * Normally the drm_dev_set_unique() call is done by core DRM.
-	 * The following comment covers, why virtio cannot rely on it.
-	 *
-	 * Unlike the other virtual GPU drivers, virtio abstracts the
-	 * underlying bus type by using struct virtio_device.
-	 *
-	 * Hence the dev_is_pci() check, used in core DRM, will fail
-	 * and the unique returned will be the virtio_device "virtio0",
-	 * while a "pci:..." one is required.
-	 *
-	 * A few other ideas were considered:
-	 * - Extend the dev_is_pci() check [in drm_set_busid] to
-	 *   consider virtio.
-	 *   Seems like a bigger hack than what we have already.
-	 *
-	 * - Point drm_device::dev to the parent of the virtio_device
-	 *   Semantic changes:
-	 *   * Using the wrong device for i2c, framebuffer_alloc and
-	 *     prime import.
-	 *   Visual changes:
-	 *   * Helpers such as DRM_DEV_ERROR, dev_info, drm_printer,
-	 *     will print the wrong information.
-	 *
-	 * We could address the latter issues, by introducing
-	 * drm_device::bus_dev, ... which would be used solely for this.
-	 *
-	 * So for the moment keep things as-is, with a bulky comment
-	 * for the next person who feels like removing this
-	 * drm_dev_set_unique() quirk.
-	 */
-	snprintf(unique, sizeof(unique), "pci:%s", pname);
-	return drm_dev_set_unique(dev, unique);
+	return 0;
 }
 
 static int virtio_gpu_probe(struct virtio_device *vdev)
@@ -109,18 +76,24 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
 	if (virtio_gpu_modeset == 0)
 		return -EINVAL;
 
-	dev = drm_dev_alloc(&driver, &vdev->dev);
+	/*
+	 * The virtio-gpu device is a virtual device that doesn't have DMA
+	 * ops assigned to it, nor a DMA mask set, etc. Its parent device is
+	 * the actual GPU device that we want to use as the DRM device, in
+	 * order to benefit from the generic DRM APIs.
+	 */
+	dev = drm_dev_alloc(&driver, vdev->dev.parent);
 	if (IS_ERR(dev))
 		return PTR_ERR(dev);
 	vdev->priv = dev;
 
 	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
-		ret = virtio_gpu_pci_quirk(dev, vdev);
+		ret = virtio_gpu_pci_quirk(dev);
 		if (ret)
 			goto err_free;
 	}
 
-	ret = virtio_gpu_init(dev);
+	ret = virtio_gpu_init(vdev, dev);
 	if (ret)
 		goto err_free;
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 0a194aaad419..b2d93cb12ebf 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -100,8 +100,6 @@ struct virtio_gpu_object {
 
 struct virtio_gpu_object_shmem {
 	struct virtio_gpu_object base;
-	struct sg_table *pages;
-	uint32_t mapped;
 };
 
 struct virtio_gpu_object_vram {
@@ -214,7 +212,6 @@ struct virtio_gpu_drv_cap_cache {
 };
 
 struct virtio_gpu_device {
-	struct device *dev;
 	struct drm_device *ddev;
 
 	struct virtio_device *vdev;
@@ -282,7 +279,7 @@ extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
 
 /* virtgpu_kms.c */
-int virtio_gpu_init(struct drm_device *dev);
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev);
 void virtio_gpu_deinit(struct drm_device *dev);
 void virtio_gpu_release(struct drm_device *dev);
 int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 3313b92db531..0d1e3eb61bee 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -110,7 +110,7 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
 	vgdev->num_capsets = num_capsets;
 }
 
-int virtio_gpu_init(struct drm_device *dev)
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 {
 	static vq_callback_t *callbacks[] = {
 		virtio_gpu_ctrl_ack, virtio_gpu_cursor_ack
@@ -123,7 +123,7 @@ int virtio_gpu_init(struct drm_device *dev)
 	u32 num_scanouts, num_capsets;
 	int ret = 0;
 
-	if (!virtio_has_feature(dev_to_virtio(dev->dev), VIRTIO_F_VERSION_1))
+	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
 		return -ENODEV;
 
 	vgdev = kzalloc(sizeof(struct virtio_gpu_device), GFP_KERNEL);
@@ -132,8 +132,7 @@ int virtio_gpu_init(struct drm_device *dev)
 
 	vgdev->ddev = dev;
 	dev->dev_private = vgdev;
-	vgdev->vdev = dev_to_virtio(dev->dev);
-	vgdev->dev = dev->dev;
+	vgdev->vdev = vdev;
 
 	spin_lock_init(&vgdev->display_info_lock);
 	spin_lock_init(&vgdev->resource_export_lock);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 18f70ef6b4d0..8d7728181de0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -67,21 +67,6 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 	if (virtio_gpu_is_shmem(bo)) {
-		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
-
-		if (shmem->pages) {
-			if (shmem->mapped) {
-				dma_unmap_sgtable(vgdev->vdev->dev.parent,
-					     shmem->pages, DMA_TO_DEVICE, 0);
-				shmem->mapped = 0;
-			}
-
-			sg_free_table(shmem->pages);
-			kfree(shmem->pages);
-			shmem->pages = NULL;
-			drm_gem_shmem_unpin(&bo->base);
-		}
-
 		drm_gem_shmem_free(&bo->base);
 	} else if (virtio_gpu_is_vram(bo)) {
 		struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
@@ -153,37 +138,18 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 					unsigned int *nents)
 {
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
-	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 	struct scatterlist *sg;
-	int si, ret;
+	struct sg_table *pages;
+	int si;
 
-	ret = drm_gem_shmem_pin(&bo->base);
-	if (ret < 0)
-		return -EINVAL;
-
-	/*
-	 * virtio_gpu uses drm_gem_shmem_get_sg_table instead of
-	 * drm_gem_shmem_get_pages_sgt because virtio has it's own set of
-	 * dma-ops. This is discouraged for other drivers, but should be fine
-	 * since virtio_gpu doesn't support dma-buf import from other devices.
-	 */
-	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
-	ret = PTR_ERR_OR_ZERO(shmem->pages);
-	if (ret) {
-		drm_gem_shmem_unpin(&bo->base);
-		shmem->pages = NULL;
-		return ret;
-	}
+	pages = drm_gem_shmem_get_pages_sgt(&bo->base);
+	if (IS_ERR(pages))
+		return PTR_ERR(pages);
 
-	if (use_dma_api) {
-		ret = dma_map_sgtable(vgdev->vdev->dev.parent,
-				      shmem->pages, DMA_TO_DEVICE, 0);
-		if (ret)
-			return ret;
-		*nents = shmem->mapped = shmem->pages->nents;
-	} else {
-		*nents = shmem->pages->orig_nents;
-	}
+	if (use_dma_api)
+		*nents = pages->nents;
+	else
+		*nents = pages->orig_nents;
 
 	*ents = kvmalloc_array(*nents,
 			       sizeof(struct virtio_gpu_mem_entry),
@@ -194,13 +160,13 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	}
 
 	if (use_dma_api) {
-		for_each_sgtable_dma_sg(shmem->pages, sg, si) {
+		for_each_sgtable_dma_sg(pages, sg, si) {
 			(*ents)[si].addr = cpu_to_le64(sg_dma_address(sg));
 			(*ents)[si].length = cpu_to_le32(sg_dma_len(sg));
 			(*ents)[si].padding = 0;
 		}
 	} else {
-		for_each_sgtable_sg(shmem->pages, sg, si) {
+		for_each_sgtable_sg(pages, sg, si) {
 			(*ents)[si].addr = cpu_to_le64(sg_phys(sg));
 			(*ents)[si].length = cpu_to_le32(sg->length);
 			(*ents)[si].padding = 0;
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 2edf31806b74..06566e44307d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -593,11 +593,10 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 	struct virtio_gpu_transfer_to_host_2d *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
-	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 
 	if (virtio_gpu_is_shmem(bo) && use_dma_api)
-		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
-					    shmem->pages, DMA_TO_DEVICE);
+		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
+					    bo->base.sgt, DMA_TO_DEVICE);
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));
@@ -1017,11 +1016,9 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
 	struct virtio_gpu_vbuffer *vbuf;
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
 
-	if (virtio_gpu_is_shmem(bo) && use_dma_api) {
-		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
-		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
-					    shmem->pages, DMA_TO_DEVICE);
-	}
+	if (virtio_gpu_is_shmem(bo) && use_dma_api)
+		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
+					    bo->base.sgt, DMA_TO_DEVICE);
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));
-- 
2.35.1


* [PATCH v3 08/15] drm/virtio: Use dev_is_pci()
  2022-04-11 21:59 ` Dmitry Osipenko
@ 2022-04-11 21:59   ` Dmitry Osipenko
  -1 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

Use common dev_is_pci() helper to replace the strcmp("pci") used by driver.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 0141b7df97ec..0035affc3e59 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -87,7 +87,7 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
 		return PTR_ERR(dev);
 	vdev->priv = dev;
 
-	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
+	if (dev_is_pci(vdev->dev.parent)) {
 		ret = virtio_gpu_pci_quirk(dev);
 		if (ret)
 			goto err_free;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 08/15] drm/virtio: Use dev_is_pci()
@ 2022-04-11 21:59   ` Dmitry Osipenko
  0 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: dri-devel, linux-kernel, Gustavo Padovan, Daniel Stone,
	virtualization, Dmitry Osipenko, Dmitry Osipenko

Use common dev_is_pci() helper to replace the strcmp("pci") used by driver.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 0141b7df97ec..0035affc3e59 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -87,7 +87,7 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
 		return PTR_ERR(dev);
 	vdev->priv = dev;
 
-	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
+	if (dev_is_pci(vdev->dev.parent)) {
 		ret = virtio_gpu_pci_quirk(dev);
 		if (ret)
 			goto err_free;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 09/15] drm/shmem-helper: Correct doc-comment of drm_gem_shmem_get_sg_table()
  2022-04-11 21:59 ` Dmitry Osipenko
@ 2022-04-11 21:59   ` Dmitry Osipenko
  -1 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: dri-devel, linux-kernel, Gustavo Padovan, Daniel Stone,
	virtualization, Dmitry Osipenko, Dmitry Osipenko

drm_gem_shmem_get_sg_table() never returns NULL on error, but a ERR_PTR.
Correct the doc comment which says that it returns NULL on error.

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8ad0e02991ca..30ee46348a99 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -662,7 +662,7 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
  * drm_gem_shmem_get_pages_sgt() instead.
  *
  * Returns:
- * A pointer to the scatter/gather table of pinned pages or NULL on failure.
+ * A pointer to the scatter/gather table of pinned pages or errno on failure.
  */
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 {
@@ -688,7 +688,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
  * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
  *
  * Returns:
- * A pointer to the scatter/gather table of pinned pages or errno on failure.
+ * A pointer to the scatter/gather table of pinned pages ERR_PTR()-encoded
+ * error code on failure.
  */
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 {
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 09/15] drm/shmem-helper: Correct doc-comment of drm_gem_shmem_get_sg_table()
@ 2022-04-11 21:59   ` Dmitry Osipenko
  0 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

drm_gem_shmem_get_sg_table() never returns NULL on error, but a ERR_PTR.
Correct the doc comment which says that it returns NULL on error.

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8ad0e02991ca..30ee46348a99 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -662,7 +662,7 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
  * drm_gem_shmem_get_pages_sgt() instead.
  *
  * Returns:
- * A pointer to the scatter/gather table of pinned pages or NULL on failure.
+ * A pointer to the scatter/gather table of pinned pages or errno on failure.
  */
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 {
@@ -688,7 +688,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
  * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
  *
  * Returns:
- * A pointer to the scatter/gather table of pinned pages or errno on failure.
+ * A pointer to the scatter/gather table of pinned pages ERR_PTR()-encoded
+ * error code on failure.
  */
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 {
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 10/15] drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks
  2022-04-11 21:59 ` Dmitry Osipenko
@ 2022-04-11 21:59   ` Dmitry Osipenko
  -1 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

Replace drm_gem_shmem locks with the reservation lock to make GEM
lockings more consistent.

Previously drm_gem_shmem_vmap() and drm_gem_shmem_get_pages() were
protected by separate locks, now it's the same lock, but it doesn't
make any difference for the current GEM SHMEM users. Only Panfrost
and Lima drivers use vmap() and they do it in the slow code paths,
hence there was no practical justification for the usage of separate
lock in the vmap().

Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 38 ++++++++++++-------------
 drivers/gpu/drm/lima/lima_gem.c         |  8 +++---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 15 ++++++----
 include/drm/drm_gem_shmem_helper.h      | 10 -------
 4 files changed, 31 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 30ee46348a99..3ecef571eff3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -86,8 +86,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;
 
-	mutex_init(&shmem->pages_lock);
-	mutex_init(&shmem->vmap_lock);
 	INIT_LIST_HEAD(&shmem->madv_list);
 
 	if (!private) {
@@ -157,8 +155,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 	WARN_ON(shmem->pages_use_count);
 
 	drm_gem_object_release(obj);
-	mutex_destroy(&shmem->pages_lock);
-	mutex_destroy(&shmem->vmap_lock);
 	kfree(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
@@ -209,11 +205,11 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 
 	WARN_ON(shmem->base.import_attach);
 
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_get_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -248,9 +244,9 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
  */
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
@@ -310,7 +306,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
-		ret = drm_gem_shmem_get_pages(shmem);
+		ret = drm_gem_shmem_get_pages_locked(shmem);
 		if (ret)
 			goto err_zero_use;
 
@@ -360,11 +356,11 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 {
 	int ret;
 
-	ret = mutex_lock_interruptible(&shmem->vmap_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_vmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -385,7 +381,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_put_pages_locked(shmem);
 	}
 
 	shmem->vaddr = NULL;
@@ -406,9 +402,11 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 			  struct iosys_map *map)
 {
-	mutex_lock(&shmem->vmap_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_vunmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
+	dma_resv_unlock(shmem->base.resv);
+
+	drm_gem_shmem_update_purgeable_status(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
 
@@ -442,14 +440,14 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (shmem->madv >= 0)
 		shmem->madv = madv;
 
 	madv = shmem->madv;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return (madv >= 0);
 }
@@ -487,10 +485,10 @@ EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
 
 bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
-	if (!mutex_trylock(&shmem->pages_lock))
+	if (!dma_resv_trylock(shmem->base.resv))
 		return false;
 	drm_gem_shmem_purge_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return true;
 }
@@ -549,7 +547,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (page_offset >= num_pages ||
 	    WARN_ON_ONCE(!shmem->pages) ||
@@ -561,7 +559,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 0f1ca0b0db49..5008f0c2428f 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 
 	new_size = min(new_size, bo->base.base.size);
 
-	mutex_lock(&bo->base.pages_lock);
+	dma_resv_lock(bo->base.base.resv, NULL);
 
 	if (bo->base.pages) {
 		pages = bo->base.pages;
@@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
 				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
 		if (!pages) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return -ENOMEM;
 		}
 
@@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		struct page *page = shmem_read_mapping_page(mapping, i);
 
 		if (IS_ERR(page)) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return PTR_ERR(page);
 		}
 		pages[i] = page;
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(bo->base.base.resv);
 
 	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
 					new_size, GFP_KERNEL);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index d3f82b26a631..404b8f67e2df 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -424,6 +424,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	struct panfrost_gem_mapping *bomapping;
 	struct panfrost_gem_object *bo;
 	struct address_space *mapping;
+	struct drm_gem_object *obj;
 	pgoff_t page_offset;
 	struct sg_table *sgt;
 	struct page **pages;
@@ -446,13 +447,15 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	page_offset = addr >> PAGE_SHIFT;
 	page_offset -= bomapping->mmnode.start;
 
-	mutex_lock(&bo->base.pages_lock);
+	obj = &bo->base.base;
+
+	dma_resv_lock(obj->resv, NULL);
 
 	if (!bo->base.pages) {
 		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
 				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
 		if (!bo->sgts) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = -ENOMEM;
 			goto err_bo;
 		}
@@ -462,7 +465,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		if (!pages) {
 			kvfree(bo->sgts);
 			bo->sgts = NULL;
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = -ENOMEM;
 			goto err_bo;
 		}
@@ -472,7 +475,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
 			/* Pages are already mapped, bail out. */
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			goto out;
 		}
 	}
@@ -483,13 +486,13 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
 		pages[i] = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(pages[i])) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = PTR_ERR(pages[i]);
 			goto err_pages;
 		}
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(obj->resv);
 
 	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d0a57853c188..70889533962a 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct drm_gem_object base;
 
-	/**
-	 * @pages_lock: Protects the page table and use count
-	 */
-	struct mutex pages_lock;
-
 	/**
 	 * @pages: Page table
 	 */
@@ -79,11 +74,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct sg_table *sgt;
 
-	/**
-	 * @vmap_lock: Protects the vmap address and use count
-	 */
-	struct mutex vmap_lock;
-
 	/**
 	 * @vaddr: Kernel virtual address of the backing memory
 	 */
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 10/15] drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks
@ 2022-04-11 21:59   ` Dmitry Osipenko
  0 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: dri-devel, linux-kernel, Gustavo Padovan, Daniel Stone,
	virtualization, Dmitry Osipenko, Dmitry Osipenko

Replace drm_gem_shmem locks with the reservation lock to make GEM
lockings more consistent.

Previously drm_gem_shmem_vmap() and drm_gem_shmem_get_pages() were
protected by separate locks, now it's the same lock, but it doesn't
make any difference for the current GEM SHMEM users. Only Panfrost
and Lima drivers use vmap() and they do it in the slow code paths,
hence there was no practical justification for the usage of separate
lock in the vmap().

Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 38 ++++++++++++-------------
 drivers/gpu/drm/lima/lima_gem.c         |  8 +++---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 15 ++++++----
 include/drm/drm_gem_shmem_helper.h      | 10 -------
 4 files changed, 31 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 30ee46348a99..3ecef571eff3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -86,8 +86,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;
 
-	mutex_init(&shmem->pages_lock);
-	mutex_init(&shmem->vmap_lock);
 	INIT_LIST_HEAD(&shmem->madv_list);
 
 	if (!private) {
@@ -157,8 +155,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 	WARN_ON(shmem->pages_use_count);
 
 	drm_gem_object_release(obj);
-	mutex_destroy(&shmem->pages_lock);
-	mutex_destroy(&shmem->vmap_lock);
 	kfree(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
@@ -209,11 +205,11 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 
 	WARN_ON(shmem->base.import_attach);
 
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_get_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -248,9 +244,9 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
  */
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
@@ -310,7 +306,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
-		ret = drm_gem_shmem_get_pages(shmem);
+		ret = drm_gem_shmem_get_pages_locked(shmem);
 		if (ret)
 			goto err_zero_use;
 
@@ -360,11 +356,11 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 {
 	int ret;
 
-	ret = mutex_lock_interruptible(&shmem->vmap_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_vmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -385,7 +381,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_put_pages_locked(shmem);
 	}
 
 	shmem->vaddr = NULL;
@@ -406,9 +402,11 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 			  struct iosys_map *map)
 {
-	mutex_lock(&shmem->vmap_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_vunmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
+	dma_resv_unlock(shmem->base.resv);
+
+	drm_gem_shmem_update_purgeable_status(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
 
@@ -442,14 +440,14 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (shmem->madv >= 0)
 		shmem->madv = madv;
 
 	madv = shmem->madv;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return (madv >= 0);
 }
@@ -487,10 +485,10 @@ EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
 
 bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
-	if (!mutex_trylock(&shmem->pages_lock))
+	if (!dma_resv_trylock(shmem->base.resv))
 		return false;
 	drm_gem_shmem_purge_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return true;
 }
@@ -549,7 +547,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (page_offset >= num_pages ||
 	    WARN_ON_ONCE(!shmem->pages) ||
@@ -561,7 +559,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 0f1ca0b0db49..5008f0c2428f 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 
 	new_size = min(new_size, bo->base.base.size);
 
-	mutex_lock(&bo->base.pages_lock);
+	dma_resv_lock(bo->base.base.resv, NULL);
 
 	if (bo->base.pages) {
 		pages = bo->base.pages;
@@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
 				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
 		if (!pages) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return -ENOMEM;
 		}
 
@@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		struct page *page = shmem_read_mapping_page(mapping, i);
 
 		if (IS_ERR(page)) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return PTR_ERR(page);
 		}
 		pages[i] = page;
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(bo->base.base.resv);
 
 	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
 					new_size, GFP_KERNEL);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index d3f82b26a631..404b8f67e2df 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -424,6 +424,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	struct panfrost_gem_mapping *bomapping;
 	struct panfrost_gem_object *bo;
 	struct address_space *mapping;
+	struct drm_gem_object *obj;
 	pgoff_t page_offset;
 	struct sg_table *sgt;
 	struct page **pages;
@@ -446,13 +447,15 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	page_offset = addr >> PAGE_SHIFT;
 	page_offset -= bomapping->mmnode.start;
 
-	mutex_lock(&bo->base.pages_lock);
+	obj = &bo->base.base;
+
+	dma_resv_lock(obj->resv, NULL);
 
 	if (!bo->base.pages) {
 		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
 				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
 		if (!bo->sgts) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = -ENOMEM;
 			goto err_bo;
 		}
@@ -462,7 +465,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		if (!pages) {
 			kvfree(bo->sgts);
 			bo->sgts = NULL;
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = -ENOMEM;
 			goto err_bo;
 		}
@@ -472,7 +475,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
 			/* Pages are already mapped, bail out. */
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			goto out;
 		}
 	}
@@ -483,13 +486,13 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
 		pages[i] = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(pages[i])) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = PTR_ERR(pages[i]);
 			goto err_pages;
 		}
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(obj->resv);
 
 	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d0a57853c188..70889533962a 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct drm_gem_object base;
 
-	/**
-	 * @pages_lock: Protects the page table and use count
-	 */
-	struct mutex pages_lock;
-
 	/**
 	 * @pages: Page table
 	 */
@@ -79,11 +74,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct sg_table *sgt;
 
-	/**
-	 * @vmap_lock: Protects the vmap address and use count
-	 */
-	struct mutex vmap_lock;
-
 	/**
 	 * @vaddr: Kernel virtual address of the backing memory
 	 */
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
  2022-04-11 21:59 ` Dmitry Osipenko
@ 2022-04-11 21:59   ` Dmitry Osipenko
  -1 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

Introduce a common DRM SHMEM shrinker. It allows to reduce code
duplication among DRM drivers that implement theirs own shrinkers.
This is initial version of the shrinker that covers basic needs of
GPU drivers, both purging and eviction of shmem objects are supported.

This patch is based on a couple ideas borrowed from Rob's Clark MSM
shrinker and Thomas' Zimmermann variant of SHMEM shrinker.

In order to start using DRM SHMEM shrinker drivers should:

1. Implement new purge(), evict() + swap_in() GEM callbacks.
2. Register shrinker using drm_gem_shmem_shrinker_register(drm_device).
3. Use drm_gem_shmem_set_purgeable_and_evictable(shmem) and alike API
   functions to activate shrinking of GEMs.

Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 766 ++++++++++++++++++++++++-
 include/drm/drm_device.h               |   4 +
 include/drm/drm_gem.h                  |  35 ++
 include/drm/drm_gem_shmem_helper.h     | 105 +++-
 4 files changed, 878 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3ecef571eff3..7e4851363d14 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -88,6 +88,13 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 
 	INIT_LIST_HEAD(&shmem->madv_list);
 
+	/*
+	 * Eviction and purging are disabled by default, shmem user must enable
+	 * them explicitly using drm_gem_shmem_set_evictable/purgeable().
+	 */
+	shmem->eviction_disable_count = 1;
+	shmem->purging_disable_count = 1;
+
 	if (!private) {
 		/*
 		 * Our buffers are kept pinned, so allocating them
@@ -126,6 +133,107 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+static void
+drm_gem_shmem_add_pages_to_shrinker(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+	size_t page_count = obj->size >> PAGE_SHIFT;
+
+	if (!shmem->pages_shrinkable) {
+		WARN_ON(gem_shrinker->shrinkable_count + page_count < page_count);
+		gem_shrinker->shrinkable_count += page_count;
+		shmem->pages_shrinkable = true;
+	}
+}
+
+static void
+drm_gem_shmem_remove_pages_from_shrinker(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+	size_t page_count = obj->size >> PAGE_SHIFT;
+
+	if (shmem->pages_shrinkable) {
+		WARN_ON(gem_shrinker->shrinkable_count < page_count);
+		gem_shrinker->shrinkable_count -= page_count;
+		shmem->pages_shrinkable = false;
+	}
+}
+
+static void
+drm_gem_shmem_set_pages_state_locked(struct drm_gem_shmem_object *shmem,
+				     enum drm_gem_shmem_pages_state new_state)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+
+	lockdep_assert_held(&gem_shrinker->lock);
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	if (new_state >= DRM_GEM_SHMEM_PAGES_STATE_ACTIVE) {
+		if (drm_gem_shmem_is_evictable(shmem))
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_EVICTABLE;
+
+		if (drm_gem_shmem_is_purgeable(shmem))
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
+
+		if (!shmem->pages)
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_INACTIVE;
+
+		if (shmem->evicted)
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_EVICTED;
+	}
+
+	if (shmem->pages_state == new_state)
+		return;
+
+	switch (new_state) {
+	case DRM_GEM_SHMEM_PAGES_STATE_INACTIVE:
+	case DRM_GEM_SHMEM_PAGES_STATE_PURGED:
+		drm_gem_shmem_remove_pages_from_shrinker(shmem);
+		list_del_init(&shmem->madv_list);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_ACTIVE:
+		drm_gem_shmem_remove_pages_from_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_active);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE:
+		drm_gem_shmem_add_pages_to_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_purgeable);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_EVICTABLE:
+		drm_gem_shmem_add_pages_to_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_evictable);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_EVICTED:
+		drm_gem_shmem_remove_pages_from_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_evicted);
+		break;
+	}
+
+	shmem->pages_state = new_state;
+}
+
+static void
+drm_gem_shmem_set_pages_state(struct drm_gem_shmem_object *shmem,
+			      enum drm_gem_shmem_pages_state new_state)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+
+	if (!gem_shrinker)
+		return;
+
+	mutex_lock(&gem_shrinker->lock);
+	drm_gem_shmem_set_pages_state_locked(shmem, new_state);
+	mutex_unlock(&gem_shrinker->lock);
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -137,6 +245,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
+	/* take out shmem GEM object from the memory shrinker */
+	drm_gem_shmem_madvise(shmem, -1);
+
 	WARN_ON(shmem->vmap_use_count);
 
 	if (obj->import_attach) {
@@ -145,10 +256,11 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		if (shmem->sgt) {
 			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
 					  DMA_BIDIRECTIONAL, 0);
+
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
-		if (shmem->pages)
+		if (shmem->pages_use_count)
 			drm_gem_shmem_put_pages(shmem);
 	}
 
@@ -159,18 +271,226 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
-static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+	enum drm_gem_shmem_pages_state new_state;
+
+	if (!gem_shrinker || obj->import_attach)
+		return;
+
+	mutex_lock(&gem_shrinker->lock);
+
+	if (!shmem->madv)
+		new_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
+	else if (shmem->madv > 0)
+		new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
+	else if (shmem->madv < 0)
+		new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGED;
+
+	drm_gem_shmem_set_pages_state_locked(shmem, new_state);
+
+	mutex_unlock(&gem_shrinker->lock);
+}
+
+static void drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem)
+{
+	dma_resv_lock(shmem->base.resv, NULL);
+	drm_gem_shmem_update_pages_state_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+}
+
+static int
+drm_gem_shmem_set_evictable_locked(struct drm_gem_shmem_object *shmem)
+{
+	int ret = 0;
+
+	WARN_ON_ONCE(!shmem->eviction_disable_count--);
+
+	if (shmem->madv < 0)
+		ret = -ENOMEM;
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
+	return ret;
+}
+
+static int
+drm_gem_shmem_set_unevictable_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	int err;
+
+	if (shmem->madv < 0)
+		return -ENOMEM;
+
+	if (shmem->evicted) {
+		err = obj->funcs->swap_in(obj);
+		if (err)
+			return err;
+	}
+
+	shmem->eviction_disable_count++;
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
+	return 0;
+}
+
+static int
+drm_gem_shmem_set_purgeable_locked(struct drm_gem_shmem_object *shmem)
+{
+	int ret = 0;
+
+	WARN_ON_ONCE(!shmem->purging_disable_count--);
+
+	if (shmem->madv < 0)
+		ret = -ENOMEM;
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
+	return ret;
+}
+
+/**
+ * drm_gem_shmem_set_purgeable() - Make GEM purgeable by memory shrinker
+ * @shmem: shmem GEM object
+ *
+ * Tell memory shrinker that this GEM can be purged. Initially purging is
+ * disabled for all GEMs. Each set_pureable() call must have corresponding
+ * set_unpureable() call. If GEM was purged, then -ENOMEM is returned.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	dma_resv_lock(shmem->base.resv, NULL);
+	ret = drm_gem_shmem_set_purgeable_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_set_purgeable);
+
+static int
+drm_gem_shmem_set_unpurgeable_locked(struct drm_gem_shmem_object *shmem)
+{
+	if (shmem->madv < 0)
+		return -ENOMEM;
+
+	shmem->purging_disable_count++;
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
+	return 0;
+}
+
+static int
+drm_gem_shmem_set_purgeable_and_evictable_locked(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	ret = drm_gem_shmem_set_evictable_locked(shmem);
+	if (!ret) {
+		ret = drm_gem_shmem_set_purgeable_locked(shmem);
+		if (ret)
+			drm_gem_shmem_set_unevictable_locked(shmem);
+	}
+
+	return ret;
+}
+
+static int
+drm_gem_shmem_set_unpurgeable_and_unevictable_locked(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	ret = drm_gem_shmem_set_unpurgeable_locked(shmem);
+	if (!ret) {
+		ret = drm_gem_shmem_set_unevictable_locked(shmem);
+		if (ret)
+			drm_gem_shmem_set_purgeable_locked(shmem);
+	}
+
+	return ret;
+}
+
+/**
+ * drm_gem_shmem_set_purgeable_and_evictable() - Make GEM unpurgeable and
+ * 						 unevictable by memory shrinker
+ * @shmem: shmem GEM object
+ *
+ * Tell memory shrinker that this GEM can't be purged and evicted. Each
+ * set_purgeable_and_evictable() call must have corresponding
+ * unpurgeable_and_unevictable() call. If GEM was purged, then -ENOMEM
+ * is returned.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_set_purgeable_and_evictable(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	dma_resv_lock(shmem->base.resv, NULL);
+	ret = drm_gem_shmem_set_purgeable_and_evictable_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_set_purgeable_and_evictable);
+
+/**
+ * drm_gem_shmem_set_unpurgeable_and_unevictable() - Make GEM purgeable and
+ * 						     evictable by memory shrinker
+ * @shmem: shmem GEM object
+ *
+ * Tell memory shrinker that this GEM can be purged and evicted. Each
+ * unpurgeable_and_unevictable() call must have corresponding
+ * set_purgeable_and_evictable() call. If GEM was purged, then -ENOMEM
+ * is returned.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_set_unpurgeable_and_unevictable(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
+	if (ret)
+		return ret;
+
+	ret = drm_gem_shmem_set_unpurgeable_and_unevictable_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_set_unpurgeable_and_unevictable);
+
+static int
+drm_gem_shmem_acquire_pages_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
 
-	if (shmem->pages_use_count++ > 0)
+	if (shmem->madv < 0) {
+		WARN_ON(shmem->pages);
+		return -ENOMEM;
+	}
+
+	if (shmem->pages) {
+		WARN_ON(!shmem->evicted);
 		return 0;
+	}
 
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
 		DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages));
-		shmem->pages_use_count = 0;
 		return PTR_ERR(pages);
 	}
 
@@ -189,6 +509,25 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 	return 0;
 }
 
+static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+{
+	int err;
+
+	if (shmem->madv < 0)
+		return -ENOMEM;
+
+	if (shmem->pages_use_count++ > 0)
+		return 0;
+
+	err = drm_gem_shmem_acquire_pages_locked(shmem);
+	if (err) {
+		shmem->pages_use_count = 0;
+		return err;
+	}
+
+	return 0;
+}
+
 /*
  * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -209,21 +548,38 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_get_pages_locked(shmem);
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
 	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_get_pages);
 
-static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_get_pages_no_fail(struct drm_gem_shmem_object *shmem)
 {
-	struct drm_gem_object *obj = &shmem->base;
+	WARN_ON(shmem->base.import_attach);
 
-	if (WARN_ON_ONCE(!shmem->pages_use_count))
-		return;
+	dma_resv_lock(shmem->base.resv, NULL);
 
-	if (--shmem->pages_use_count > 0)
+	if (drm_gem_shmem_get_pages_locked(shmem))
+		shmem->pages_use_count++;
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
+	dma_resv_unlock(shmem->base.resv);
+}
+
+static void
+drm_gem_shmem_release_pages_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	if (!shmem->pages) {
+		WARN_ON(!shmem->evicted && shmem->madv >= 0);
 		return;
+	}
 
 #ifdef CONFIG_X86
 	if (shmem->map_wc)
@@ -236,6 +592,21 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 	shmem->pages = NULL;
 }
 
+static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	if (WARN_ON(!shmem->pages_use_count))
+		return;
+
+	if (--shmem->pages_use_count > 0)
+		return;
+
+	drm_gem_shmem_release_pages_locked(shmem);
+}
+
 /*
  * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -246,6 +617,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
 	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages_locked(shmem);
+	drm_gem_shmem_update_pages_state_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
@@ -262,9 +634,21 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
  */
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 {
+	int err;
+
 	WARN_ON(shmem->base.import_attach);
 
-	return drm_gem_shmem_get_pages(shmem);
+	err = drm_gem_shmem_set_unpurgeable_and_unevictable(shmem);
+	if (err)
+		return err;
+
+	err = drm_gem_shmem_get_pages(shmem);
+	if (err) {
+		drm_gem_shmem_set_purgeable_and_evictable(shmem);
+		return err;
+	}
+
+	return 0;
 }
 EXPORT_SYMBOL(drm_gem_shmem_pin);
 
@@ -280,6 +664,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 	WARN_ON(shmem->base.import_attach);
 
 	drm_gem_shmem_put_pages(shmem);
+	drm_gem_shmem_set_purgeable_and_evictable(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
@@ -359,7 +744,18 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
+
+	ret = drm_gem_shmem_set_unpurgeable_and_unevictable_locked(shmem);
+	if (ret)
+		goto unlock;
+
 	ret = drm_gem_shmem_vmap_locked(shmem, map);
+	if (ret)
+		drm_gem_shmem_set_purgeable_and_evictable_locked(shmem);
+	else
+		drm_gem_shmem_update_pages_state_locked(shmem);
+
+unlock:
 	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
@@ -404,9 +800,9 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 {
 	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_vunmap_locked(shmem, map);
+	drm_gem_shmem_update_pages_state_locked(shmem);
+	drm_gem_shmem_set_purgeable_and_evictable_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);
-
-	drm_gem_shmem_update_purgeable_status(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
 
@@ -447,29 +843,140 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 
 	madv = shmem->madv;
 
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
 	dma_resv_unlock(shmem->base.resv);
 
 	return (madv >= 0);
 }
 EXPORT_SYMBOL(drm_gem_shmem_madvise);
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+/**
+ * drm_gem_shmem_swap_in_pages_locked() - Moves shmem pages back to memory
+ * @shmem: shmem GEM object
+ *
+ * This function moves pages back to memory if they were previously evicted
+ * by the memory shrinker.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_swap_in_pages_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct sg_table *sgt;
+	int ret;
+
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	if (shmem->evicted) {
+		ret = drm_gem_shmem_acquire_pages_locked(shmem);
+		if (ret)
+			return ret;
+
+		sgt = drm_gem_shmem_get_sg_table(shmem);
+		if (IS_ERR(sgt))
+			return PTR_ERR(sgt);
+
+		ret = dma_map_sgtable(obj->dev->dev, sgt,
+				      DMA_BIDIRECTIONAL, 0);
+		if (ret) {
+			sg_free_table(sgt);
+			kfree(sgt);
+			return ret;
+		}
+
+		shmem->sgt = sgt;
+		shmem->evicted = false;
+		shmem->pages_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
+
+		drm_gem_shmem_update_pages_state_locked(shmem);
+	}
+
+	return shmem->pages ? 0 : -ENOMEM;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in_pages_locked);
+
+/**
+ * drm_gem_shmem_swap_in_locked() - Moves shmem GEM back to memory
+ * @shmem: shmem GEM object
+ *
+ * This function moves shmem GEM back to memory if it was previously evicted
+ * by the memory shrinker. The GEM is ready to use on success.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	if (shmem->evicted)
+		return obj->funcs->swap_in(obj);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in_locked);
+
+static void drm_gem_shmem_unpin_pages_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct drm_device *dev = obj->dev;
 
-	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
+	if (shmem->evicted)
+		return;
 
 	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
+	drm_gem_shmem_release_pages_locked(shmem);
+	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+
 	sg_free_table(shmem->sgt);
 	kfree(shmem->sgt);
 	shmem->sgt = NULL;
+}
 
-	drm_gem_shmem_put_pages_locked(shmem);
+/**
+ * drm_gem_shmem_evict_locked - Evict shmem pages
+ * @shmem: shmem GEM object
+ *
+ * This function unpins shmem pages, allowing them to be swapped out from
+ * memory.
+ */
+void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
 
-	shmem->madv = -1;
+	lockdep_assert_held(&obj->resv->lock.base);
 
-	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+	WARN_ON(!drm_gem_shmem_is_evictable(shmem));
+	WARN_ON(shmem->madv < 0);
+	WARN_ON(shmem->evicted);
+
+	drm_gem_shmem_unpin_pages_locked(shmem);
+
+	shmem->evicted = true;
+	drm_gem_shmem_set_pages_state(shmem, DRM_GEM_SHMEM_PAGES_STATE_EVICTED);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_evict_locked);
+
+/**
+ * drm_gem_shmem_purge_locked - Purge shmem pages
+ * @shmem: shmem GEM object
+ *
+ * This function permanently releases shmem pages.
+ */
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
+	WARN_ON(shmem->madv < 0);
+
+	drm_gem_shmem_unpin_pages_locked(shmem);
 	drm_gem_free_mmap_offset(obj);
 
 	/* Our goal here is to return as much of the memory as
@@ -480,6 +987,9 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
 
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+
+	shmem->madv = -1;
+	drm_gem_shmem_set_pages_state(shmem, DRM_GEM_SHMEM_PAGES_STATE_PURGED);
 }
 EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
 
@@ -543,22 +1053,31 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	vm_fault_t ret;
 	struct page *page;
 	pgoff_t page_offset;
+	bool pages_inactive;
+	int err;
 
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
 	dma_resv_lock(shmem->base.resv, NULL);
 
-	if (page_offset >= num_pages ||
-	    WARN_ON_ONCE(!shmem->pages) ||
-	    shmem->madv < 0) {
+	pages_inactive = shmem->pages_state < DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
+	WARN_ON_ONCE(!shmem->pages ^ pages_inactive);
+
+	if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) {
 		ret = VM_FAULT_SIGBUS;
 	} else {
+		err = drm_gem_shmem_swap_in_locked(shmem);
+		if (err) {
+			ret = VM_FAULT_OOM;
+			goto unlock;
+		}
+
 		page = shmem->pages[page_offset];
 
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
-
+unlock:
 	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
@@ -568,13 +1087,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	int ret;
-
-	WARN_ON(shmem->base.import_attach);
-
-	ret = drm_gem_shmem_get_pages(shmem);
-	WARN_ON_ONCE(ret != 0);
 
+	drm_gem_shmem_get_pages_no_fail(shmem);
 	drm_gem_vm_open(vma);
 }
 
@@ -716,6 +1230,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 
 	shmem->sgt = sgt;
 
+	drm_gem_shmem_update_pages_state(shmem);
+
 	return sgt;
 
 err_free_sgt:
@@ -762,6 +1278,202 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
 
+static struct drm_gem_shmem_shrinker *
+to_drm_shrinker(struct shrinker *shrinker)
+{
+	return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
+				     struct shrink_control *sc)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
+	u64 count = READ_ONCE(gem_shrinker->shrinkable_count);
+
+	if (count >= SHRINK_EMPTY)
+		return SHRINK_EMPTY - 1;
+
+	return count ?: SHRINK_EMPTY;
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_run_objects_scan(struct shrinker *shrinker,
+					unsigned long nr_to_scan,
+					bool *lock_contention,
+					bool evict)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
+	struct drm_gem_shmem_object *shmem;
+	struct list_head still_in_list;
+	struct drm_gem_object *obj;
+	unsigned long freed = 0;
+	struct list_head *lru;
+	size_t page_count;
+
+	INIT_LIST_HEAD(&still_in_list);
+
+	mutex_lock(&gem_shrinker->lock);
+
+	if (evict)
+		lru = &gem_shrinker->lru_evictable;
+	else
+		lru = &gem_shrinker->lru_purgeable;
+
+	while (freed < nr_to_scan) {
+		shmem = list_first_entry_or_null(lru, typeof(*shmem), madv_list);
+		if (!shmem)
+			break;
+
+		obj = &shmem->base;
+		page_count = obj->size >> PAGE_SHIFT;
+		list_move_tail(&shmem->madv_list, &still_in_list);
+
+		if (evict && get_nr_swap_pages() < page_count)
+			continue;
+
+		/*
+		 * If it's in the process of being freed, gem_object->free()
+		 * may be blocked on lock waiting to remove it.  So just
+		 * skip it.
+		 */
+		if (!kref_get_unless_zero(&obj->refcount))
+			continue;
+
+		mutex_unlock(&gem_shrinker->lock);
+
+		/* prevent racing with job-submission code paths */
+		if (!dma_resv_trylock(obj->resv)) {
+			*lock_contention |= true;
+			goto shrinker_lock;
+		}
+
+		/* prevent racing with the dma-buf exporting */
+		if (!mutex_trylock(&gem_shrinker->dev->object_name_lock)) {
+			*lock_contention |= true;
+			goto resv_unlock;
+		}
+
+		/* check whether h/w uses this object */
+		if (!dma_resv_test_signaled(obj->resv, true))
+			goto object_name_unlock;
+
+		/* GEM may've become unpurgeable while shrinker was unlocked */
+		if (evict) {
+			if (!drm_gem_shmem_is_evictable(shmem))
+				goto object_name_unlock;
+		} else {
+			if (!drm_gem_shmem_is_purgeable(shmem))
+				goto object_name_unlock;
+		}
+
+		if (evict)
+			freed += obj->funcs->evict(obj);
+		else
+			freed += obj->funcs->purge(obj);
+object_name_unlock:
+		mutex_unlock(&gem_shrinker->dev->object_name_lock);
+resv_unlock:
+		dma_resv_unlock(obj->resv);
+shrinker_lock:
+		drm_gem_object_put(&shmem->base);
+		mutex_lock(&gem_shrinker->lock);
+	}
+
+	list_splice_tail(&still_in_list, lru);
+
+	mutex_unlock(&gem_shrinker->lock);
+
+	return freed;
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
+				    struct shrink_control *sc)
+{
+	unsigned long nr_to_scan = sc->nr_to_scan;
+	bool lock_contention = false;
+	unsigned long freed;
+
+	/* purge as many objects as we can */
+	freed = drm_gem_shmem_shrinker_run_objects_scan(shrinker, nr_to_scan,
+							&lock_contention, false);
+	nr_to_scan -= freed;
+
+	/* evict as many objects as we can */
+	if (freed < nr_to_scan)
+		freed += drm_gem_shmem_shrinker_run_objects_scan(shrinker,
+								 nr_to_scan,
+								 &lock_contention,
+								 true);
+
+	return (!freed && !lock_contention) ? SHRINK_STOP : freed;
+}
+
+/**
+ * drm_gem_shmem_shrinker_register() - Register shmem shrinker
+ * @dev: DRM device
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_shrinker_register(struct drm_device *dev)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker;
+	int err;
+
+	if (WARN_ON(dev->shmem_shrinker))
+		return -EBUSY;
+
+	gem_shrinker = kzalloc(sizeof(*gem_shrinker), GFP_KERNEL);
+	if (!gem_shrinker)
+		return -ENOMEM;
+
+	gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects;
+	gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects;
+	gem_shrinker->base.seeks = DEFAULT_SEEKS;
+	gem_shrinker->dev = dev;
+
+	INIT_LIST_HEAD(&gem_shrinker->lru_purgeable);
+	INIT_LIST_HEAD(&gem_shrinker->lru_evictable);
+	INIT_LIST_HEAD(&gem_shrinker->lru_evicted);
+	INIT_LIST_HEAD(&gem_shrinker->lru_active);
+	mutex_init(&gem_shrinker->lock);
+
+	dev->shmem_shrinker = gem_shrinker;
+
+	err = register_shrinker(&gem_shrinker->base);
+	if (err) {
+		dev->shmem_shrinker = NULL;
+		kfree(gem_shrinker);
+		return err;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_register);
+
+/**
+ * drm_gem_shmem_shrinker_unregister() - Unregister shmem shrinker
+ * @dev: DRM device
+ */
+void drm_gem_shmem_shrinker_unregister(struct drm_device *dev)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = dev->shmem_shrinker;
+
+	if (gem_shrinker) {
+		unregister_shrinker(&gem_shrinker->base);
+		WARN_ON(!list_empty(&gem_shrinker->lru_purgeable));
+		WARN_ON(!list_empty(&gem_shrinker->lru_evictable));
+		WARN_ON(!list_empty(&gem_shrinker->lru_evicted));
+		WARN_ON(!list_empty(&gem_shrinker->lru_active));
+		mutex_destroy(&gem_shrinker->lock);
+		dev->shmem_shrinker = NULL;
+		kfree(gem_shrinker);
+	}
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_unregister);
+
 MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
 MODULE_IMPORT_NS(DMA_BUF);
 MODULE_LICENSE("GPL v2");
diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index 9923c7a6885e..929546cad894 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -16,6 +16,7 @@ struct drm_vblank_crtc;
 struct drm_vma_offset_manager;
 struct drm_vram_mm;
 struct drm_fb_helper;
+struct drm_gem_shmem_shrinker;
 
 struct inode;
 
@@ -277,6 +278,9 @@ struct drm_device {
 	/** @vram_mm: VRAM MM memory manager */
 	struct drm_vram_mm *vram_mm;
 
+	/** @shmem_shrinker: SHMEM GEM memory shrinker */
+	struct drm_gem_shmem_shrinker *shmem_shrinker;
+
 	/**
 	 * @switch_power_state:
 	 *
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 9d7c61a122dc..390d1ce08ed3 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -172,6 +172,41 @@ struct drm_gem_object_funcs {
 	 * This is optional but necessary for mmap support.
 	 */
 	const struct vm_operations_struct *vm_ops;
+
+	/**
+	 * @purge:
+	 *
+	 * Releases the GEM object's allocated backing storage to the system.
+	 *
+	 * Returns the number of pages that have been freed by purging the GEM object.
+	 *
+	 * This callback is used by the GEM shrinker.
+	 */
+	unsigned long (*purge)(struct drm_gem_object *obj);
+
+	/**
+	 * @evict:
+	 *
+	 * Unpins the GEM object's allocated backing storage, allowing shmem pages
+	 * to be swapped out.
+	 *
+	 * Returns the number of pages that have been unpinned.
+	 *
+	 * This callback is used by the GEM shrinker.
+	 */
+	unsigned long (*evict)(struct drm_gem_object *obj);
+
+	/**
+	 * @swap_in:
+	 *
+	 * Pins GEM object's allocated backing storage if it was previously evicted,
+	 * moving swapped out pages back to memory.
+	 *
+	 * Returns 0 on success, or -errno on error.
+	 *
+	 * This callback is used by the GEM shrinker.
+	 */
+	int (*swap_in)(struct drm_gem_object *obj);
 };
 
 /**
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 70889533962a..dc1c2db7d095 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -6,6 +6,7 @@
 #include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/mutex.h>
+#include <linux/shrinker.h>
 
 #include <drm/drm_file.h>
 #include <drm/drm_gem.h>
@@ -15,8 +16,18 @@
 struct dma_buf_attachment;
 struct drm_mode_create_dumb;
 struct drm_printer;
+struct drm_device;
 struct sg_table;
 
+enum drm_gem_shmem_pages_state {
+	DRM_GEM_SHMEM_PAGES_STATE_PURGED = -2,
+	DRM_GEM_SHMEM_PAGES_STATE_EVICTED = -1,
+	DRM_GEM_SHMEM_PAGES_STATE_INACTIVE = 0,
+	DRM_GEM_SHMEM_PAGES_STATE_ACTIVE = 1,
+	DRM_GEM_SHMEM_PAGES_STATE_EVICTABLE = 2,
+	DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE = 3,
+};
+
 /**
  * struct drm_gem_shmem_object - GEM object backed by shmem
  */
@@ -43,8 +54,8 @@ struct drm_gem_shmem_object {
 	 * @madv: State for madvise
 	 *
 	 * 0 is active/inuse.
+	 * 1 is not-needed/can-be-purged
 	 * A negative value is the object is purged.
-	 * Positive values are driver specific and not used by the helpers.
 	 */
 	int madv;
 
@@ -91,6 +102,40 @@ struct drm_gem_shmem_object {
 	 * @map_wc: map object write-combined (instead of using shmem defaults).
 	 */
 	bool map_wc;
+
+	/**
+	 * @eviction_disable_count:
+	 *
+	 * The shmem pages are disallowed to be evicted by the memory shrinker
+	 * while count is non-zero. Used internally by memory shrinker.
+	 */
+	unsigned int eviction_disable_count;
+
+	/**
+	 * @purging_disable_count:
+	 *
+	 * The shmem pages are disallowed to be purged by the memory shrinker
+	 * while count is non-zero. Used internally by memory shrinker.
+	 */
+	unsigned int purging_disable_count;
+
+	/**
+	 * @pages_state: Current state of shmem pages. Used internally by
+	 * memory shrinker.
+	 */
+	enum drm_gem_shmem_pages_state pages_state;
+
+	/**
+	 * @evicted: True if shmem pages were evicted by the memory shrinker.
+	 * Used internally by memory shrinker.
+	 */
+	bool evicted;
+
+	/**
+	 * @pages_shrinkable: True if shmem pages can be evicted or purged
+	 * by the memory shrinker. Used internally by memory shrinker.
+	 */
+	bool pages_shrinkable;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
@@ -111,15 +156,33 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
 
+int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_set_purgeable_and_evictable(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_set_unpurgeable_and_unevictable(struct drm_gem_shmem_object *shmem);
+
+static inline bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
+{
+	return (shmem->madv >= 0) && !shmem->eviction_disable_count &&
+		shmem->base.funcs->evict && shmem->base.funcs->swap_in &&
+		!shmem->vmap_use_count && !shmem->base.dma_buf &&
+		!shmem->base.import_attach && shmem->sgt;
+}
+
 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
 {
-	return (shmem->madv > 0) &&
-		!shmem->vmap_use_count && shmem->sgt &&
-		!shmem->base.dma_buf && !shmem->base.import_attach;
+	return (shmem->madv > 0) && !shmem->purging_disable_count &&
+		!shmem->vmap_use_count && shmem->base.funcs->purge &&
+		!shmem->base.dma_buf && !shmem->base.import_attach &&
+		shmem->sgt;
 }
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_swap_in_pages_locked(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem);
+
+void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem);
+
 bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
@@ -262,6 +325,38 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
 	return drm_gem_shmem_mmap(shmem, vma);
 }
 
+/**
+ * struct drm_gem_shmem_shrinker - Generic memory shrinker for shmem GEMs
+ */
+struct drm_gem_shmem_shrinker {
+	/** @base: Shrinker for purging shmem GEM objects */
+	struct shrinker base;
+
+	/** @lock: Protects @lru_* */
+	struct mutex lock;
+
+	/** @lru_purgeable: List of shmem GEM objects available for purging */
+	struct list_head lru_purgeable;
+
+	/** @lru_active: List of active shmem GEM objects */
+	struct list_head lru_active;
+
+	/** @lru_evictable: List of shmem GEM objects that can be evicted */
+	struct list_head lru_evictable;
+
+	/** @lru_evicted: List of evicted shmem GEM objects */
+	struct list_head lru_evicted;
+
+	/** @dev: DRM device that uses this shrinker */
+	struct drm_device *dev;
+
+	/** @shrinkable_count: Count of shmem GEM pages to be purged and evicted */
+	u64 shrinkable_count;
+};
+
+int drm_gem_shmem_shrinker_register(struct drm_device *dev);
+void drm_gem_shmem_shrinker_unregister(struct drm_device *dev);
+
 /*
  * Driver ops
  */
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
@ 2022-04-11 21:59   ` Dmitry Osipenko
  0 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: dri-devel, linux-kernel, Gustavo Padovan, Daniel Stone,
	virtualization, Dmitry Osipenko, Dmitry Osipenko

Introduce a common DRM SHMEM shrinker. It allows to reduce code
duplication among DRM drivers that implement theirs own shrinkers.
This is initial version of the shrinker that covers basic needs of
GPU drivers, both purging and eviction of shmem objects are supported.

This patch is based on a couple ideas borrowed from Rob's Clark MSM
shrinker and Thomas' Zimmermann variant of SHMEM shrinker.

In order to start using DRM SHMEM shrinker drivers should:

1. Implement new purge(), evict() + swap_in() GEM callbacks.
2. Register shrinker using drm_gem_shmem_shrinker_register(drm_device).
3. Use drm_gem_shmem_set_purgeable_and_evictable(shmem) and alike API
   functions to activate shrinking of GEMs.

Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 766 ++++++++++++++++++++++++-
 include/drm/drm_device.h               |   4 +
 include/drm/drm_gem.h                  |  35 ++
 include/drm/drm_gem_shmem_helper.h     | 105 +++-
 4 files changed, 878 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3ecef571eff3..7e4851363d14 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -88,6 +88,13 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 
 	INIT_LIST_HEAD(&shmem->madv_list);
 
+	/*
+	 * Eviction and purging are disabled by default, shmem user must enable
+	 * them explicitly using drm_gem_shmem_set_evictable/purgeable().
+	 */
+	shmem->eviction_disable_count = 1;
+	shmem->purging_disable_count = 1;
+
 	if (!private) {
 		/*
 		 * Our buffers are kept pinned, so allocating them
@@ -126,6 +133,107 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+static void
+drm_gem_shmem_add_pages_to_shrinker(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+	size_t page_count = obj->size >> PAGE_SHIFT;
+
+	if (!shmem->pages_shrinkable) {
+		WARN_ON(gem_shrinker->shrinkable_count + page_count < page_count);
+		gem_shrinker->shrinkable_count += page_count;
+		shmem->pages_shrinkable = true;
+	}
+}
+
+static void
+drm_gem_shmem_remove_pages_from_shrinker(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+	size_t page_count = obj->size >> PAGE_SHIFT;
+
+	if (shmem->pages_shrinkable) {
+		WARN_ON(gem_shrinker->shrinkable_count < page_count);
+		gem_shrinker->shrinkable_count -= page_count;
+		shmem->pages_shrinkable = false;
+	}
+}
+
+static void
+drm_gem_shmem_set_pages_state_locked(struct drm_gem_shmem_object *shmem,
+				     enum drm_gem_shmem_pages_state new_state)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+
+	lockdep_assert_held(&gem_shrinker->lock);
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	if (new_state >= DRM_GEM_SHMEM_PAGES_STATE_ACTIVE) {
+		if (drm_gem_shmem_is_evictable(shmem))
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_EVICTABLE;
+
+		if (drm_gem_shmem_is_purgeable(shmem))
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
+
+		if (!shmem->pages)
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_INACTIVE;
+
+		if (shmem->evicted)
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_EVICTED;
+	}
+
+	if (shmem->pages_state == new_state)
+		return;
+
+	switch (new_state) {
+	case DRM_GEM_SHMEM_PAGES_STATE_INACTIVE:
+	case DRM_GEM_SHMEM_PAGES_STATE_PURGED:
+		drm_gem_shmem_remove_pages_from_shrinker(shmem);
+		list_del_init(&shmem->madv_list);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_ACTIVE:
+		drm_gem_shmem_remove_pages_from_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_active);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE:
+		drm_gem_shmem_add_pages_to_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_purgeable);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_EVICTABLE:
+		drm_gem_shmem_add_pages_to_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_evictable);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_EVICTED:
+		drm_gem_shmem_remove_pages_from_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_evicted);
+		break;
+	}
+
+	shmem->pages_state = new_state;
+}
+
+static void
+drm_gem_shmem_set_pages_state(struct drm_gem_shmem_object *shmem,
+			      enum drm_gem_shmem_pages_state new_state)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+
+	if (!gem_shrinker)
+		return;
+
+	mutex_lock(&gem_shrinker->lock);
+	drm_gem_shmem_set_pages_state_locked(shmem, new_state);
+	mutex_unlock(&gem_shrinker->lock);
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -137,6 +245,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
+	/* take out shmem GEM object from the memory shrinker */
+	drm_gem_shmem_madvise(shmem, -1);
+
 	WARN_ON(shmem->vmap_use_count);
 
 	if (obj->import_attach) {
@@ -145,10 +256,11 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		if (shmem->sgt) {
 			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
 					  DMA_BIDIRECTIONAL, 0);
+
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
-		if (shmem->pages)
+		if (shmem->pages_use_count)
 			drm_gem_shmem_put_pages(shmem);
 	}
 
@@ -159,18 +271,226 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
-static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+	enum drm_gem_shmem_pages_state new_state;
+
+	if (!gem_shrinker || obj->import_attach)
+		return;
+
+	mutex_lock(&gem_shrinker->lock);
+
+	if (!shmem->madv)
+		new_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
+	else if (shmem->madv > 0)
+		new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
+	else if (shmem->madv < 0)
+		new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGED;
+
+	drm_gem_shmem_set_pages_state_locked(shmem, new_state);
+
+	mutex_unlock(&gem_shrinker->lock);
+}
+
+static void drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem)
+{
+	dma_resv_lock(shmem->base.resv, NULL);
+	drm_gem_shmem_update_pages_state_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+}
+
+static int
+drm_gem_shmem_set_evictable_locked(struct drm_gem_shmem_object *shmem)
+{
+	int ret = 0;
+
+	WARN_ON_ONCE(!shmem->eviction_disable_count--);
+
+	if (shmem->madv < 0)
+		ret = -ENOMEM;
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
+	return ret;
+}
+
+static int
+drm_gem_shmem_set_unevictable_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	int err;
+
+	if (shmem->madv < 0)
+		return -ENOMEM;
+
+	if (shmem->evicted) {
+		err = obj->funcs->swap_in(obj);
+		if (err)
+			return err;
+	}
+
+	shmem->eviction_disable_count++;
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
+	return 0;
+}
+
+static int
+drm_gem_shmem_set_purgeable_locked(struct drm_gem_shmem_object *shmem)
+{
+	int ret = 0;
+
+	WARN_ON_ONCE(!shmem->purging_disable_count--);
+
+	if (shmem->madv < 0)
+		ret = -ENOMEM;
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
+	return ret;
+}
+
+/**
+ * drm_gem_shmem_set_purgeable() - Make GEM purgeable by memory shrinker
+ * @shmem: shmem GEM object
+ *
+ * Tell memory shrinker that this GEM can be purged. Initially purging is
+ * disabled for all GEMs. Each set_pureable() call must have corresponding
+ * set_unpureable() call. If GEM was purged, then -ENOMEM is returned.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	dma_resv_lock(shmem->base.resv, NULL);
+	ret = drm_gem_shmem_set_purgeable_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_set_purgeable);
+
+static int
+drm_gem_shmem_set_unpurgeable_locked(struct drm_gem_shmem_object *shmem)
+{
+	if (shmem->madv < 0)
+		return -ENOMEM;
+
+	shmem->purging_disable_count++;
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
+	return 0;
+}
+
+static int
+drm_gem_shmem_set_purgeable_and_evictable_locked(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	ret = drm_gem_shmem_set_evictable_locked(shmem);
+	if (!ret) {
+		ret = drm_gem_shmem_set_purgeable_locked(shmem);
+		if (ret)
+			drm_gem_shmem_set_unevictable_locked(shmem);
+	}
+
+	return ret;
+}
+
+static int
+drm_gem_shmem_set_unpurgeable_and_unevictable_locked(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	ret = drm_gem_shmem_set_unpurgeable_locked(shmem);
+	if (!ret) {
+		ret = drm_gem_shmem_set_unevictable_locked(shmem);
+		if (ret)
+			drm_gem_shmem_set_purgeable_locked(shmem);
+	}
+
+	return ret;
+}
+
+/**
+ * drm_gem_shmem_set_purgeable_and_evictable() - Make GEM unpurgeable and
+ * 						 unevictable by memory shrinker
+ * @shmem: shmem GEM object
+ *
+ * Tell memory shrinker that this GEM can't be purged and evicted. Each
+ * set_purgeable_and_evictable() call must have corresponding
+ * unpurgeable_and_unevictable() call. If GEM was purged, then -ENOMEM
+ * is returned.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_set_purgeable_and_evictable(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	dma_resv_lock(shmem->base.resv, NULL);
+	ret = drm_gem_shmem_set_purgeable_and_evictable_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_set_purgeable_and_evictable);
+
+/**
+ * drm_gem_shmem_set_unpurgeable_and_unevictable() - Make GEM purgeable and
+ * 						     evictable by memory shrinker
+ * @shmem: shmem GEM object
+ *
+ * Tell memory shrinker that this GEM can be purged and evicted. Each
+ * unpurgeable_and_unevictable() call must have corresponding
+ * set_purgeable_and_evictable() call. If GEM was purged, then -ENOMEM
+ * is returned.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_set_unpurgeable_and_unevictable(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
+	if (ret)
+		return ret;
+
+	ret = drm_gem_shmem_set_unpurgeable_and_unevictable_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_set_unpurgeable_and_unevictable);
+
+static int
+drm_gem_shmem_acquire_pages_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
 
-	if (shmem->pages_use_count++ > 0)
+	if (shmem->madv < 0) {
+		WARN_ON(shmem->pages);
+		return -ENOMEM;
+	}
+
+	if (shmem->pages) {
+		WARN_ON(!shmem->evicted);
 		return 0;
+	}
 
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
 		DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages));
-		shmem->pages_use_count = 0;
 		return PTR_ERR(pages);
 	}
 
@@ -189,6 +509,25 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 	return 0;
 }
 
+static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+{
+	int err;
+
+	if (shmem->madv < 0)
+		return -ENOMEM;
+
+	if (shmem->pages_use_count++ > 0)
+		return 0;
+
+	err = drm_gem_shmem_acquire_pages_locked(shmem);
+	if (err) {
+		shmem->pages_use_count = 0;
+		return err;
+	}
+
+	return 0;
+}
+
 /*
  * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -209,21 +548,38 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_get_pages_locked(shmem);
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
 	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_get_pages);
 
-static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_get_pages_no_fail(struct drm_gem_shmem_object *shmem)
 {
-	struct drm_gem_object *obj = &shmem->base;
+	WARN_ON(shmem->base.import_attach);
 
-	if (WARN_ON_ONCE(!shmem->pages_use_count))
-		return;
+	dma_resv_lock(shmem->base.resv, NULL);
 
-	if (--shmem->pages_use_count > 0)
+	if (drm_gem_shmem_get_pages_locked(shmem))
+		shmem->pages_use_count++;
+
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
+	dma_resv_unlock(shmem->base.resv);
+}
+
+static void
+drm_gem_shmem_release_pages_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	if (!shmem->pages) {
+		WARN_ON(!shmem->evicted && shmem->madv >= 0);
 		return;
+	}
 
 #ifdef CONFIG_X86
 	if (shmem->map_wc)
@@ -236,6 +592,21 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 	shmem->pages = NULL;
 }
 
+static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	if (WARN_ON(!shmem->pages_use_count))
+		return;
+
+	if (--shmem->pages_use_count > 0)
+		return;
+
+	drm_gem_shmem_release_pages_locked(shmem);
+}
+
 /*
  * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -246,6 +617,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
 	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages_locked(shmem);
+	drm_gem_shmem_update_pages_state_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
@@ -262,9 +634,21 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
  */
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 {
+	int err;
+
 	WARN_ON(shmem->base.import_attach);
 
-	return drm_gem_shmem_get_pages(shmem);
+	err = drm_gem_shmem_set_unpurgeable_and_unevictable(shmem);
+	if (err)
+		return err;
+
+	err = drm_gem_shmem_get_pages(shmem);
+	if (err) {
+		drm_gem_shmem_set_purgeable_and_evictable(shmem);
+		return err;
+	}
+
+	return 0;
 }
 EXPORT_SYMBOL(drm_gem_shmem_pin);
 
@@ -280,6 +664,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 	WARN_ON(shmem->base.import_attach);
 
 	drm_gem_shmem_put_pages(shmem);
+	drm_gem_shmem_set_purgeable_and_evictable(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
@@ -359,7 +744,18 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
+
+	ret = drm_gem_shmem_set_unpurgeable_and_unevictable_locked(shmem);
+	if (ret)
+		goto unlock;
+
 	ret = drm_gem_shmem_vmap_locked(shmem, map);
+	if (ret)
+		drm_gem_shmem_set_purgeable_and_evictable_locked(shmem);
+	else
+		drm_gem_shmem_update_pages_state_locked(shmem);
+
+unlock:
 	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
@@ -404,9 +800,9 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 {
 	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_vunmap_locked(shmem, map);
+	drm_gem_shmem_update_pages_state_locked(shmem);
+	drm_gem_shmem_set_purgeable_and_evictable_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);
-
-	drm_gem_shmem_update_purgeable_status(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
 
@@ -447,29 +843,140 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 
 	madv = shmem->madv;
 
+	drm_gem_shmem_update_pages_state_locked(shmem);
+
 	dma_resv_unlock(shmem->base.resv);
 
 	return (madv >= 0);
 }
 EXPORT_SYMBOL(drm_gem_shmem_madvise);
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+/**
+ * drm_gem_shmem_swap_in_pages_locked() - Moves shmem pages back to memory
+ * @shmem: shmem GEM object
+ *
+ * This function moves pages back to memory if they were previously evicted
+ * by the memory shrinker.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_swap_in_pages_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct sg_table *sgt;
+	int ret;
+
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	if (shmem->evicted) {
+		ret = drm_gem_shmem_acquire_pages_locked(shmem);
+		if (ret)
+			return ret;
+
+		sgt = drm_gem_shmem_get_sg_table(shmem);
+		if (IS_ERR(sgt))
+			return PTR_ERR(sgt);
+
+		ret = dma_map_sgtable(obj->dev->dev, sgt,
+				      DMA_BIDIRECTIONAL, 0);
+		if (ret) {
+			sg_free_table(sgt);
+			kfree(sgt);
+			return ret;
+		}
+
+		shmem->sgt = sgt;
+		shmem->evicted = false;
+		shmem->pages_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
+
+		drm_gem_shmem_update_pages_state_locked(shmem);
+	}
+
+	return shmem->pages ? 0 : -ENOMEM;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in_pages_locked);
+
+/**
+ * drm_gem_shmem_swap_in_locked() - Moves shmem GEM back to memory
+ * @shmem: shmem GEM object
+ *
+ * This function moves shmem GEM back to memory if it was previously evicted
+ * by the memory shrinker. The GEM is ready to use on success.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	if (shmem->evicted)
+		return obj->funcs->swap_in(obj);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in_locked);
+
+static void drm_gem_shmem_unpin_pages_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct drm_device *dev = obj->dev;
 
-	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
+	if (shmem->evicted)
+		return;
 
 	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
+	drm_gem_shmem_release_pages_locked(shmem);
+	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+
 	sg_free_table(shmem->sgt);
 	kfree(shmem->sgt);
 	shmem->sgt = NULL;
+}
 
-	drm_gem_shmem_put_pages_locked(shmem);
+/**
+ * drm_gem_shmem_evict_locked - Evict shmem pages
+ * @shmem: shmem GEM object
+ *
+ * This function unpins shmem pages, allowing them to be swapped out from
+ * memory.
+ */
+void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
 
-	shmem->madv = -1;
+	lockdep_assert_held(&obj->resv->lock.base);
 
-	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+	WARN_ON(!drm_gem_shmem_is_evictable(shmem));
+	WARN_ON(shmem->madv < 0);
+	WARN_ON(shmem->evicted);
+
+	drm_gem_shmem_unpin_pages_locked(shmem);
+
+	shmem->evicted = true;
+	drm_gem_shmem_set_pages_state(shmem, DRM_GEM_SHMEM_PAGES_STATE_EVICTED);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_evict_locked);
+
+/**
+ * drm_gem_shmem_purge_locked - Purge shmem pages
+ * @shmem: shmem GEM object
+ *
+ * This function permanently releases shmem pages.
+ */
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
+	WARN_ON(shmem->madv < 0);
+
+	drm_gem_shmem_unpin_pages_locked(shmem);
 	drm_gem_free_mmap_offset(obj);
 
 	/* Our goal here is to return as much of the memory as
@@ -480,6 +987,9 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
 
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+
+	shmem->madv = -1;
+	drm_gem_shmem_set_pages_state(shmem, DRM_GEM_SHMEM_PAGES_STATE_PURGED);
 }
 EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
 
@@ -543,22 +1053,31 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	vm_fault_t ret;
 	struct page *page;
 	pgoff_t page_offset;
+	bool pages_inactive;
+	int err;
 
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
 	dma_resv_lock(shmem->base.resv, NULL);
 
-	if (page_offset >= num_pages ||
-	    WARN_ON_ONCE(!shmem->pages) ||
-	    shmem->madv < 0) {
+	pages_inactive = shmem->pages_state < DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
+	WARN_ON_ONCE(!shmem->pages ^ pages_inactive);
+
+	if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) {
 		ret = VM_FAULT_SIGBUS;
 	} else {
+		err = drm_gem_shmem_swap_in_locked(shmem);
+		if (err) {
+			ret = VM_FAULT_OOM;
+			goto unlock;
+		}
+
 		page = shmem->pages[page_offset];
 
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
-
+unlock:
 	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
@@ -568,13 +1087,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	int ret;
-
-	WARN_ON(shmem->base.import_attach);
-
-	ret = drm_gem_shmem_get_pages(shmem);
-	WARN_ON_ONCE(ret != 0);
 
+	drm_gem_shmem_get_pages_no_fail(shmem);
 	drm_gem_vm_open(vma);
 }
 
@@ -716,6 +1230,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 
 	shmem->sgt = sgt;
 
+	drm_gem_shmem_update_pages_state(shmem);
+
 	return sgt;
 
 err_free_sgt:
@@ -762,6 +1278,202 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
 
+static struct drm_gem_shmem_shrinker *
+to_drm_shrinker(struct shrinker *shrinker)
+{
+	return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
+				     struct shrink_control *sc)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
+	u64 count = READ_ONCE(gem_shrinker->shrinkable_count);
+
+	if (count >= SHRINK_EMPTY)
+		return SHRINK_EMPTY - 1;
+
+	return count ?: SHRINK_EMPTY;
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_run_objects_scan(struct shrinker *shrinker,
+					unsigned long nr_to_scan,
+					bool *lock_contention,
+					bool evict)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
+	struct drm_gem_shmem_object *shmem;
+	struct list_head still_in_list;
+	struct drm_gem_object *obj;
+	unsigned long freed = 0;
+	struct list_head *lru;
+	size_t page_count;
+
+	INIT_LIST_HEAD(&still_in_list);
+
+	mutex_lock(&gem_shrinker->lock);
+
+	if (evict)
+		lru = &gem_shrinker->lru_evictable;
+	else
+		lru = &gem_shrinker->lru_purgeable;
+
+	while (freed < nr_to_scan) {
+		shmem = list_first_entry_or_null(lru, typeof(*shmem), madv_list);
+		if (!shmem)
+			break;
+
+		obj = &shmem->base;
+		page_count = obj->size >> PAGE_SHIFT;
+		list_move_tail(&shmem->madv_list, &still_in_list);
+
+		if (evict && get_nr_swap_pages() < page_count)
+			continue;
+
+		/*
+		 * If it's in the process of being freed, gem_object->free()
+		 * may be blocked on lock waiting to remove it.  So just
+		 * skip it.
+		 */
+		if (!kref_get_unless_zero(&obj->refcount))
+			continue;
+
+		mutex_unlock(&gem_shrinker->lock);
+
+		/* prevent racing with job-submission code paths */
+		if (!dma_resv_trylock(obj->resv)) {
+			*lock_contention |= true;
+			goto shrinker_lock;
+		}
+
+		/* prevent racing with the dma-buf exporting */
+		if (!mutex_trylock(&gem_shrinker->dev->object_name_lock)) {
+			*lock_contention |= true;
+			goto resv_unlock;
+		}
+
+		/* check whether h/w uses this object */
+		if (!dma_resv_test_signaled(obj->resv, true))
+			goto object_name_unlock;
+
+		/* GEM may've become unpurgeable while shrinker was unlocked */
+		if (evict) {
+			if (!drm_gem_shmem_is_evictable(shmem))
+				goto object_name_unlock;
+		} else {
+			if (!drm_gem_shmem_is_purgeable(shmem))
+				goto object_name_unlock;
+		}
+
+		if (evict)
+			freed += obj->funcs->evict(obj);
+		else
+			freed += obj->funcs->purge(obj);
+object_name_unlock:
+		mutex_unlock(&gem_shrinker->dev->object_name_lock);
+resv_unlock:
+		dma_resv_unlock(obj->resv);
+shrinker_lock:
+		drm_gem_object_put(&shmem->base);
+		mutex_lock(&gem_shrinker->lock);
+	}
+
+	list_splice_tail(&still_in_list, lru);
+
+	mutex_unlock(&gem_shrinker->lock);
+
+	return freed;
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
+				    struct shrink_control *sc)
+{
+	unsigned long nr_to_scan = sc->nr_to_scan;
+	bool lock_contention = false;
+	unsigned long freed;
+
+	/* purge as many objects as we can */
+	freed = drm_gem_shmem_shrinker_run_objects_scan(shrinker, nr_to_scan,
+							&lock_contention, false);
+	nr_to_scan -= freed;
+
+	/* evict as many objects as we can */
+	if (freed < nr_to_scan)
+		freed += drm_gem_shmem_shrinker_run_objects_scan(shrinker,
+								 nr_to_scan,
+								 &lock_contention,
+								 true);
+
+	return (!freed && !lock_contention) ? SHRINK_STOP : freed;
+}
+
+/**
+ * drm_gem_shmem_shrinker_register() - Register shmem shrinker
+ * @dev: DRM device
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_shrinker_register(struct drm_device *dev)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker;
+	int err;
+
+	if (WARN_ON(dev->shmem_shrinker))
+		return -EBUSY;
+
+	gem_shrinker = kzalloc(sizeof(*gem_shrinker), GFP_KERNEL);
+	if (!gem_shrinker)
+		return -ENOMEM;
+
+	gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects;
+	gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects;
+	gem_shrinker->base.seeks = DEFAULT_SEEKS;
+	gem_shrinker->dev = dev;
+
+	INIT_LIST_HEAD(&gem_shrinker->lru_purgeable);
+	INIT_LIST_HEAD(&gem_shrinker->lru_evictable);
+	INIT_LIST_HEAD(&gem_shrinker->lru_evicted);
+	INIT_LIST_HEAD(&gem_shrinker->lru_active);
+	mutex_init(&gem_shrinker->lock);
+
+	dev->shmem_shrinker = gem_shrinker;
+
+	err = register_shrinker(&gem_shrinker->base);
+	if (err) {
+		dev->shmem_shrinker = NULL;
+		kfree(gem_shrinker);
+		return err;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_register);
+
+/**
+ * drm_gem_shmem_shrinker_unregister() - Unregister shmem shrinker
+ * @dev: DRM device
+ */
+void drm_gem_shmem_shrinker_unregister(struct drm_device *dev)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = dev->shmem_shrinker;
+
+	if (gem_shrinker) {
+		unregister_shrinker(&gem_shrinker->base);
+		WARN_ON(!list_empty(&gem_shrinker->lru_purgeable));
+		WARN_ON(!list_empty(&gem_shrinker->lru_evictable));
+		WARN_ON(!list_empty(&gem_shrinker->lru_evicted));
+		WARN_ON(!list_empty(&gem_shrinker->lru_active));
+		mutex_destroy(&gem_shrinker->lock);
+		dev->shmem_shrinker = NULL;
+		kfree(gem_shrinker);
+	}
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_unregister);
+
 MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
 MODULE_IMPORT_NS(DMA_BUF);
 MODULE_LICENSE("GPL v2");
diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index 9923c7a6885e..929546cad894 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -16,6 +16,7 @@ struct drm_vblank_crtc;
 struct drm_vma_offset_manager;
 struct drm_vram_mm;
 struct drm_fb_helper;
+struct drm_gem_shmem_shrinker;
 
 struct inode;
 
@@ -277,6 +278,9 @@ struct drm_device {
 	/** @vram_mm: VRAM MM memory manager */
 	struct drm_vram_mm *vram_mm;
 
+	/** @shmem_shrinker: SHMEM GEM memory shrinker */
+	struct drm_gem_shmem_shrinker *shmem_shrinker;
+
 	/**
 	 * @switch_power_state:
 	 *
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 9d7c61a122dc..390d1ce08ed3 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -172,6 +172,41 @@ struct drm_gem_object_funcs {
 	 * This is optional but necessary for mmap support.
 	 */
 	const struct vm_operations_struct *vm_ops;
+
+	/**
+	 * @purge:
+	 *
+	 * Releases the GEM object's allocated backing storage to the system.
+	 *
+	 * Returns the number of pages that have been freed by purging the GEM object.
+	 *
+	 * This callback is used by the GEM shrinker.
+	 */
+	unsigned long (*purge)(struct drm_gem_object *obj);
+
+	/**
+	 * @evict:
+	 *
+	 * Unpins the GEM object's allocated backing storage, allowing shmem pages
+	 * to be swapped out.
+	 *
+	 * Returns the number of pages that have been unpinned.
+	 *
+	 * This callback is used by the GEM shrinker.
+	 */
+	unsigned long (*evict)(struct drm_gem_object *obj);
+
+	/**
+	 * @swap_in:
+	 *
+	 * Pins GEM object's allocated backing storage if it was previously evicted,
+	 * moving swapped out pages back to memory.
+	 *
+	 * Returns 0 on success, or -errno on error.
+	 *
+	 * This callback is used by the GEM shrinker.
+	 */
+	int (*swap_in)(struct drm_gem_object *obj);
 };
 
 /**
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 70889533962a..dc1c2db7d095 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -6,6 +6,7 @@
 #include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/mutex.h>
+#include <linux/shrinker.h>
 
 #include <drm/drm_file.h>
 #include <drm/drm_gem.h>
@@ -15,8 +16,18 @@
 struct dma_buf_attachment;
 struct drm_mode_create_dumb;
 struct drm_printer;
+struct drm_device;
 struct sg_table;
 
+enum drm_gem_shmem_pages_state {
+	DRM_GEM_SHMEM_PAGES_STATE_PURGED = -2,
+	DRM_GEM_SHMEM_PAGES_STATE_EVICTED = -1,
+	DRM_GEM_SHMEM_PAGES_STATE_INACTIVE = 0,
+	DRM_GEM_SHMEM_PAGES_STATE_ACTIVE = 1,
+	DRM_GEM_SHMEM_PAGES_STATE_EVICTABLE = 2,
+	DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE = 3,
+};
+
 /**
  * struct drm_gem_shmem_object - GEM object backed by shmem
  */
@@ -43,8 +54,8 @@ struct drm_gem_shmem_object {
 	 * @madv: State for madvise
 	 *
 	 * 0 is active/inuse.
+	 * 1 is not-needed/can-be-purged
 	 * A negative value is the object is purged.
-	 * Positive values are driver specific and not used by the helpers.
 	 */
 	int madv;
 
@@ -91,6 +102,40 @@ struct drm_gem_shmem_object {
 	 * @map_wc: map object write-combined (instead of using shmem defaults).
 	 */
 	bool map_wc;
+
+	/**
+	 * @eviction_disable_count:
+	 *
+	 * The shmem pages are disallowed to be evicted by the memory shrinker
+	 * while count is non-zero. Used internally by memory shrinker.
+	 */
+	unsigned int eviction_disable_count;
+
+	/**
+	 * @purging_disable_count:
+	 *
+	 * The shmem pages are disallowed to be purged by the memory shrinker
+	 * while count is non-zero. Used internally by memory shrinker.
+	 */
+	unsigned int purging_disable_count;
+
+	/**
+	 * @pages_state: Current state of shmem pages. Used internally by
+	 * memory shrinker.
+	 */
+	enum drm_gem_shmem_pages_state pages_state;
+
+	/**
+	 * @evicted: True if shmem pages were evicted by the memory shrinker.
+	 * Used internally by memory shrinker.
+	 */
+	bool evicted;
+
+	/**
+	 * @pages_shrinkable: True if shmem pages can be evicted or purged
+	 * by the memory shrinker. Used internally by memory shrinker.
+	 */
+	bool pages_shrinkable;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
@@ -111,15 +156,33 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
 
+int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_set_purgeable_and_evictable(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_set_unpurgeable_and_unevictable(struct drm_gem_shmem_object *shmem);
+
+static inline bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
+{
+	return (shmem->madv >= 0) && !shmem->eviction_disable_count &&
+		shmem->base.funcs->evict && shmem->base.funcs->swap_in &&
+		!shmem->vmap_use_count && !shmem->base.dma_buf &&
+		!shmem->base.import_attach && shmem->sgt;
+}
+
 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
 {
-	return (shmem->madv > 0) &&
-		!shmem->vmap_use_count && shmem->sgt &&
-		!shmem->base.dma_buf && !shmem->base.import_attach;
+	return (shmem->madv > 0) && !shmem->purging_disable_count &&
+		!shmem->vmap_use_count && shmem->base.funcs->purge &&
+		!shmem->base.dma_buf && !shmem->base.import_attach &&
+		shmem->sgt;
 }
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_swap_in_pages_locked(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem);
+
+void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem);
+
 bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
@@ -262,6 +325,38 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
 	return drm_gem_shmem_mmap(shmem, vma);
 }
 
+/**
+ * struct drm_gem_shmem_shrinker - Generic memory shrinker for shmem GEMs
+ */
+struct drm_gem_shmem_shrinker {
+	/** @base: Shrinker for purging shmem GEM objects */
+	struct shrinker base;
+
+	/** @lock: Protects @lru_* */
+	struct mutex lock;
+
+	/** @lru_purgeable: List of shmem GEM objects available for purging */
+	struct list_head lru_purgeable;
+
+	/** @lru_active: List of active shmem GEM objects */
+	struct list_head lru_active;
+
+	/** @lru_evictable: List of shmem GEM objects that can be evicted */
+	struct list_head lru_evictable;
+
+	/** @lru_evicted: List of evicted shmem GEM objects */
+	struct list_head lru_evicted;
+
+	/** @dev: DRM device that uses this shrinker */
+	struct drm_device *dev;
+
+	/** @shrinkable_count: Count of shmem GEM pages to be purged and evicted */
+	u64 shrinkable_count;
+};
+
+int drm_gem_shmem_shrinker_register(struct drm_device *dev);
+void drm_gem_shmem_shrinker_unregister(struct drm_device *dev);
+
 /*
  * Driver ops
  */
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 12/15] drm/virtio: Support memory shrinking
  2022-04-11 21:59 ` Dmitry Osipenko
@ 2022-04-11 21:59   ` Dmitry Osipenko
  -1 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

Support generic DRM SHMEM memory shrinker and add new madvise IOCTL
to the VirtIO-GPU driver. Userspace (BO cache manager of Mesa driver)
will mark BOs as "don't need" using the new IOCTL to let shrinker purge
the marked BOs on OOM, the shrinker will also evict unpurgeable shmem BOs
from memory if guest supports SWAP. Altogether this allows to prevent OOM
kills of guest applications that use VirGL by lowering memory pressure.

Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  15 ++-
 drivers/gpu/drm/virtio/virtgpu_gem.c    |  46 ++++++++
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  |  37 +++++++
 drivers/gpu/drm/virtio/virtgpu_kms.c    |   9 ++
 drivers/gpu/drm/virtio/virtgpu_object.c | 139 +++++++++++++++++++-----
 drivers/gpu/drm/virtio/virtgpu_plane.c  |  17 ++-
 drivers/gpu/drm/virtio/virtgpu_vq.c     |  40 +++++++
 include/uapi/drm/virtgpu_drm.h          |  14 +++
 8 files changed, 288 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index b2d93cb12ebf..c8918a271e1c 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -274,7 +274,7 @@ struct virtio_gpu_fpriv {
 };
 
 /* virtgpu_ioctl.c */
-#define DRM_VIRTIO_NUM_IOCTLS 12
+#define DRM_VIRTIO_NUM_IOCTLS 13
 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
 
@@ -310,6 +310,10 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
 				       struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_work(struct work_struct *work);
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+			     struct virtio_gpu_object_array *objs);
+int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo);
+bool virtio_gpu_gem_madvise(struct virtio_gpu_object *obj, int madv);
 
 /* virtgpu_vq.c */
 int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
@@ -321,6 +325,8 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
 				   struct virtio_gpu_object *bo);
+int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object *bo);
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 					uint64_t offset,
 					uint32_t width, uint32_t height,
@@ -341,6 +347,9 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
 			      struct virtio_gpu_object *obj,
 			      struct virtio_gpu_mem_entry *ents,
 			      unsigned int nents);
+void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
+			      struct virtio_gpu_object *obj,
+			      struct virtio_gpu_fence *fence);
 int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev);
 int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev);
 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
@@ -483,4 +492,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev,
 				   struct sg_table *sgt,
 				   enum dma_data_direction dir);
 
+/* virtgpu_gem_shrinker.c */
+int virtio_gpu_gem_shrinker_init(struct virtio_gpu_device *vgdev);
+void virtio_gpu_gem_shrinker_fini(struct virtio_gpu_device *vgdev);
+
 #endif
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 7db48d17ee3a..08189ad43736 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -294,3 +294,49 @@ void virtio_gpu_array_put_free_work(struct work_struct *work)
 	}
 	spin_unlock(&vgdev->obj_free_lock);
 }
+
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+			     struct virtio_gpu_object_array *objs)
+{
+	struct drm_gem_shmem_object *shmem;
+	int ret = 0;
+	u32 i;
+
+	for (i = 0; i < objs->nents; i++) {
+		shmem = to_drm_gem_shmem_obj(objs->objs[i]);
+		ret = drm_gem_shmem_swap_in_locked(shmem);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+
+bool virtio_gpu_gem_madvise(struct virtio_gpu_object *bo, int madv)
+{
+	/*
+	 * For now we support only purging BOs that are backed by guest's
+	 * memory.
+	 */
+	if (!virtio_gpu_is_shmem(bo))
+		return true;
+
+	return drm_gem_shmem_madvise(&bo->base, madv);
+}
+
+int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+	int err;
+
+	if (bo->created) {
+		err = virtio_gpu_cmd_release_resource(vgdev, bo);
+		if (err)
+			return err;
+
+		virtio_gpu_notify(vgdev);
+		bo->created = false;
+	}
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index f8d83358d2a0..55ee9bd2098e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -217,6 +217,10 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 		ret = virtio_gpu_array_lock_resv(buflist);
 		if (ret)
 			goto out_memdup;
+
+		ret = virtio_gpu_array_prepare(vgdev, buflist);
+		if (ret)
+			goto out_unresv;
 	}
 
 	out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
@@ -423,6 +427,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 	if (ret != 0)
 		goto err_put_free;
 
+	ret = virtio_gpu_array_prepare(vgdev, objs);
+	if (ret)
+		goto err_unlock;
+
 	fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
 	if (!fence) {
 		ret = -ENOMEM;
@@ -482,6 +490,10 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 		if (ret != 0)
 			goto err_put_free;
 
+		ret = virtio_gpu_array_prepare(vgdev, objs);
+		if (ret)
+			goto err_unlock;
+
 		ret = -ENOMEM;
 		fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
 					       0);
@@ -836,6 +848,28 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
 	return ret;
 }
 
+static int virtio_gpu_madvise_ioctl(struct drm_device *dev,
+				    void *data,
+				    struct drm_file *file)
+{
+	struct drm_virtgpu_madvise *args = data;
+	struct virtio_gpu_object *bo;
+	struct drm_gem_object *obj;
+
+	if (args->madv > VIRTGPU_MADV_DONTNEED)
+		return -EOPNOTSUPP;
+
+	obj = drm_gem_object_lookup(file, args->bo_handle);
+	if (!obj)
+		return -ENOENT;
+
+	bo = gem_to_virtio_gpu_obj(obj);
+	args->retained = virtio_gpu_gem_madvise(bo, args->madv);
+	drm_gem_object_put(obj);
+
+	return 0;
+}
+
 struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 	DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl,
 			  DRM_RENDER_ALLOW),
@@ -875,4 +909,7 @@ struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_CONTEXT_INIT, virtio_gpu_context_init_ioctl,
 			  DRM_RENDER_ALLOW),
+
+	DRM_IOCTL_DEF_DRV(VIRTGPU_MADVISE, virtio_gpu_madvise_ioctl,
+			  DRM_RENDER_ALLOW),
 };
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 0d1e3eb61bee..1175999acea1 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -238,6 +238,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 		goto err_scanouts;
 	}
 
+	ret = drm_gem_shmem_shrinker_register(dev);
+	if (ret) {
+		DRM_ERROR("shrinker init failed\n");
+		goto err_modeset;
+	}
+
 	virtio_device_ready(vgdev->vdev);
 
 	if (num_capsets)
@@ -250,6 +256,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 			   5 * HZ);
 	return 0;
 
+err_modeset:
+	virtio_gpu_modeset_fini(vgdev);
 err_scanouts:
 	virtio_gpu_free_vbufs(vgdev);
 err_vbufs:
@@ -289,6 +297,7 @@ void virtio_gpu_release(struct drm_device *dev)
 	if (!vgdev)
 		return;
 
+	drm_gem_shmem_shrinker_unregister(dev);
 	virtio_gpu_modeset_fini(vgdev);
 	virtio_gpu_free_vbufs(vgdev);
 	virtio_gpu_cleanup_cap_cache(vgdev);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 8d7728181de0..771165fbda7c 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -97,39 +97,58 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj)
 	virtio_gpu_cleanup_object(bo);
 }
 
-static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
-	.free = virtio_gpu_free_object,
-	.open = virtio_gpu_gem_object_open,
-	.close = virtio_gpu_gem_object_close,
-	.print_info = drm_gem_shmem_object_print_info,
-	.export = virtgpu_gem_prime_export,
-	.pin = drm_gem_shmem_object_pin,
-	.unpin = drm_gem_shmem_object_unpin,
-	.get_sg_table = drm_gem_shmem_object_get_sg_table,
-	.vmap = drm_gem_shmem_object_vmap,
-	.vunmap = drm_gem_shmem_object_vunmap,
-	.mmap = drm_gem_shmem_object_mmap,
-	.vm_ops = &drm_gem_shmem_vm_ops,
-};
+static int virtio_gpu_detach_object_fenced(struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+	struct virtio_gpu_fence *fence;
 
-bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo)
+	fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
+	if (!fence)
+		return -ENOMEM;
+
+	virtio_gpu_object_detach(vgdev, bo, fence);
+	virtio_gpu_notify(vgdev);
+
+	dma_fence_wait(&fence->f, false);
+	dma_fence_put(&fence->f);
+
+	return 0;
+}
+
+static unsigned long virtio_gpu_purge_object(struct drm_gem_object *obj)
 {
-	return bo->base.base.funcs == &virtio_gpu_shmem_funcs;
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	int err;
+
+	/*
+	 * At first tell host to stop using guest's memory to ensure that
+	 * host won't touch the released guest's memory once it's gone.
+	 */
+	err = virtio_gpu_detach_object_fenced(bo);
+	if (err)
+		return 0;
+
+	err = virtio_gpu_gem_host_mem_release(bo);
+	if (err)
+		return 0;
+
+	drm_gem_shmem_purge_locked(&bo->base);
+
+	return obj->size >> PAGE_SHIFT;
 }
 
-struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
-						size_t size)
+static unsigned long virtio_gpu_evict_object(struct drm_gem_object *obj)
 {
-	struct virtio_gpu_object_shmem *shmem;
-	struct drm_gem_shmem_object *dshmem;
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	int err;
 
-	shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
-	if (!shmem)
-		return ERR_PTR(-ENOMEM);
+	err = virtio_gpu_detach_object_fenced(bo);
+	if (err)
+		return 0;
 
-	dshmem = &shmem->base.base;
-	dshmem->base.funcs = &virtio_gpu_shmem_funcs;
-	return &dshmem->base;
+	drm_gem_shmem_evict_locked(&bo->base);
+
+	return obj->size >> PAGE_SHIFT;
 }
 
 static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
@@ -176,6 +195,66 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	return 0;
 }
 
+static int virtio_gpu_swap_in_object(struct drm_gem_object *obj)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+	struct virtio_gpu_mem_entry *ents;
+	unsigned int nents;
+	int err;
+
+	err = drm_gem_shmem_swap_in_pages_locked(&bo->base);
+	if (err)
+		return err;
+
+	err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+	if (err)
+		return err;
+
+	virtio_gpu_object_attach(vgdev, bo, ents, nents);
+	virtio_gpu_notify(vgdev);
+
+	return 0;
+}
+
+static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
+	.free = virtio_gpu_free_object,
+	.open = virtio_gpu_gem_object_open,
+	.close = virtio_gpu_gem_object_close,
+	.print_info = drm_gem_shmem_object_print_info,
+	.export = virtgpu_gem_prime_export,
+	.pin = drm_gem_shmem_object_pin,
+	.unpin = drm_gem_shmem_object_unpin,
+	.get_sg_table = drm_gem_shmem_object_get_sg_table,
+	.vmap = drm_gem_shmem_object_vmap,
+	.vunmap = drm_gem_shmem_object_vunmap,
+	.mmap = drm_gem_shmem_object_mmap,
+	.vm_ops = &drm_gem_shmem_vm_ops,
+	.purge = virtio_gpu_purge_object,
+	.evict = virtio_gpu_evict_object,
+	.swap_in = virtio_gpu_swap_in_object,
+};
+
+bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo)
+{
+	return bo->base.base.funcs == &virtio_gpu_shmem_funcs;
+}
+
+struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
+						size_t size)
+{
+	struct virtio_gpu_object_shmem *shmem;
+	struct drm_gem_shmem_object *dshmem;
+
+	shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
+	if (!shmem)
+		return ERR_PTR(-ENOMEM);
+
+	dshmem = &shmem->base.base;
+	dshmem->base.funcs = &virtio_gpu_shmem_funcs;
+	return &dshmem->base;
+}
+
 int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_object_params *params,
 			     struct virtio_gpu_object **bo_ptr,
@@ -228,10 +307,18 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 		virtio_gpu_cmd_resource_create_3d(vgdev, bo, params,
 						  objs, fence);
 		virtio_gpu_object_attach(vgdev, bo, ents, nents);
+
+		shmem_obj->pages_mark_dirty_on_put = 1;
+
+		drm_gem_shmem_set_purgeable_and_evictable(shmem_obj);
 	} else {
 		virtio_gpu_cmd_create_resource(vgdev, bo, params,
 					       objs, fence);
 		virtio_gpu_object_attach(vgdev, bo, ents, nents);
+
+		shmem_obj->pages_mark_dirty_on_put = 1;
+
+		drm_gem_shmem_set_purgeable_and_evictable(shmem_obj);
 	}
 
 	*bo_ptr = bo;
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 7148f3813d8b..2db6166a0307 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -246,20 +246,28 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_framebuffer *vgfb;
 	struct virtio_gpu_object *bo;
+	int err;
 
 	if (!new_state->fb)
 		return 0;
 
 	vgfb = to_virtio_gpu_framebuffer(new_state->fb);
 	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
-	if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
+
+	err = drm_gem_shmem_set_unpurgeable_and_unevictable(&bo->base);
+	if (err)
+		return err;
+
+	if (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob)
 		return 0;
 
 	if (bo->dumb && (plane->state->fb != new_state->fb)) {
 		vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
 						     0);
-		if (!vgfb->fence)
+		if (!vgfb->fence) {
+			drm_gem_shmem_set_purgeable_and_evictable(&bo->base);
 			return -ENOMEM;
+		}
 	}
 
 	return 0;
@@ -269,15 +277,20 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
 					struct drm_plane_state *state)
 {
 	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_object *bo;
 
 	if (!state->fb)
 		return;
 
 	vgfb = to_virtio_gpu_framebuffer(state->fb);
+	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+
 	if (vgfb->fence) {
 		dma_fence_put(&vgfb->fence->f);
 		vgfb->fence = NULL;
 	}
+
+	drm_gem_shmem_set_purgeable_and_evictable(&bo->base);
 }
 
 static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 06566e44307d..2a04dad1ae89 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -536,6 +536,21 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
 		virtio_gpu_cleanup_object(bo);
 }
 
+int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_resource_unref *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+
+	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF);
+	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+
+	return virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
+}
+
 void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev,
 				uint32_t scanout_id, uint32_t resource_id,
 				uint32_t width, uint32_t height,
@@ -636,6 +651,23 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_device *vgdev,
 	virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence);
 }
 
+static void
+virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev,
+				       u32 resource_id,
+				       struct virtio_gpu_fence *fence)
+{
+	struct virtio_gpu_resource_attach_backing *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+
+	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING);
+	cmd_p->resource_id = cpu_to_le32(resource_id);
+
+	virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence);
+}
+
 static void virtio_gpu_cmd_get_display_info_cb(struct virtio_gpu_device *vgdev,
 					       struct virtio_gpu_vbuffer *vbuf)
 {
@@ -1099,6 +1131,14 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
 					       ents, nents, NULL);
 }
 
+void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
+			      struct virtio_gpu_object *obj,
+			      struct virtio_gpu_fence *fence)
+{
+	virtio_gpu_cmd_resource_detach_backing(vgdev, obj->hw_res_handle,
+					       fence);
+}
+
 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
 			    struct virtio_gpu_output *output)
 {
diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
index 0512fde5e697..12197d8e9759 100644
--- a/include/uapi/drm/virtgpu_drm.h
+++ b/include/uapi/drm/virtgpu_drm.h
@@ -48,6 +48,7 @@ extern "C" {
 #define DRM_VIRTGPU_GET_CAPS  0x09
 #define DRM_VIRTGPU_RESOURCE_CREATE_BLOB 0x0a
 #define DRM_VIRTGPU_CONTEXT_INIT 0x0b
+#define DRM_VIRTGPU_MADVISE 0x0c
 
 #define VIRTGPU_EXECBUF_FENCE_FD_IN	0x01
 #define VIRTGPU_EXECBUF_FENCE_FD_OUT	0x02
@@ -196,6 +197,15 @@ struct drm_virtgpu_context_init {
 	__u64 ctx_set_params;
 };
 
+#define VIRTGPU_MADV_WILLNEED 0
+#define VIRTGPU_MADV_DONTNEED 1
+struct drm_virtgpu_madvise {
+	__u32 bo_handle;
+	__u32 retained; /* out, non-zero if BO can be used */
+	__u32 madv;
+	__u32 pad;
+};
+
 /*
  * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in
  * effect.  The event size is sizeof(drm_event), since there is no additional
@@ -246,6 +256,10 @@ struct drm_virtgpu_context_init {
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_CONTEXT_INIT,		\
 		struct drm_virtgpu_context_init)
 
+#define DRM_IOCTL_VIRTGPU_MADVISE \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MADVISE, \
+		 struct drm_virtgpu_madvise)
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.35.1
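
For context, a minimal userspace sketch of driving the new madvise IOCTL
could look like this (a hypothetical helper, not part of the series; it
assumes an open virtio-gpu render node and a GEM handle owned by the
caller):

	#include <sys/ioctl.h>
	#include <drm/virtgpu_drm.h>

	/* Returns 1 if the BO kept its backing memory, 0 if it was purged,
	 * -1 on ioctl failure. */
	static int virtgpu_bo_madvise(int fd, __u32 bo_handle, __u32 madv)
	{
		struct drm_virtgpu_madvise args = {
			.bo_handle = bo_handle,
			.madv = madv, /* VIRTGPU_MADV_WILLNEED or VIRTGPU_MADV_DONTNEED */
		};

		if (ioctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args))
			return -1;

		return args.retained ? 1 : 0;
	}

A BO cache would call this with VIRTGPU_MADV_DONTNEED when a buffer goes
back into the cache and with VIRTGPU_MADV_WILLNEED before reuse,
reallocating the buffer whenever retained comes back as zero.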


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 13/15] drm/panfrost: Switch to generic memory shrinker
  2022-04-11 21:59 ` Dmitry Osipenko
@ 2022-04-11 21:59   ` Dmitry Osipenko
  -1 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

Replace Panfrost's custom memory shrinker with the generic DRM SHMEM
memory shrinker introduced earlier in this series.
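
For reference, the driver-side integration distilled from the diff below
comes down to three touch points (a sketch; all names are the helpers
added by the new generic shrinker API):

	/* 1. Register/unregister the shrinker around the DRM device: */
	drm_gem_shmem_shrinker_register(ddev);    /* at probe */
	drm_gem_shmem_shrinker_unregister(ddev);  /* at remove, before drm_dev_unregister() */

	/* 2. Provide a driver purge hook in the GEM object funcs: */
	.purge = panfrost_gem_purge,

	/* 3. Opt eligible BOs into reclaim at creation time: */
	if (!bo->is_heap)
		drm_gem_shmem_set_purgeable(shmem);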

Tested-by: Steven Price <steven.price@arm.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/panfrost/Makefile             |   1 -
 drivers/gpu/drm/panfrost/panfrost_device.h    |   4 -
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  19 +--
 drivers/gpu/drm/panfrost/panfrost_gem.c       |  30 +++--
 drivers/gpu/drm/panfrost/panfrost_gem.h       |   9 --
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  | 122 ------------------
 drivers/gpu/drm/panfrost/panfrost_job.c       |  18 ++-
 7 files changed, 39 insertions(+), 164 deletions(-)
 delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c

diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
index b71935862417..ecf0864cb515 100644
--- a/drivers/gpu/drm/panfrost/Makefile
+++ b/drivers/gpu/drm/panfrost/Makefile
@@ -5,7 +5,6 @@ panfrost-y := \
 	panfrost_device.o \
 	panfrost_devfreq.o \
 	panfrost_gem.o \
-	panfrost_gem_shrinker.o \
 	panfrost_gpu.o \
 	panfrost_job.o \
 	panfrost_mmu.o \
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index 8b25278f34c8..fe04b21fc044 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -115,10 +115,6 @@ struct panfrost_device {
 		atomic_t pending;
 	} reset;
 
-	struct mutex shrinker_lock;
-	struct list_head shrinker_list;
-	struct shrinker shrinker;
-
 	struct panfrost_devfreq pfdevfreq;
 };
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 7fcbc2a5b6cd..57a93555813f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -160,7 +160,6 @@ panfrost_lookup_bos(struct drm_device *dev,
 			break;
 		}
 
-		atomic_inc(&bo->gpu_usecount);
 		job->mappings[i] = mapping;
 	}
 
@@ -391,7 +390,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 {
 	struct panfrost_file_priv *priv = file_priv->driver_priv;
 	struct drm_panfrost_madvise *args = data;
-	struct panfrost_device *pfdev = dev->dev_private;
 	struct drm_gem_object *gem_obj;
 	struct panfrost_gem_object *bo;
 	int ret = 0;
@@ -404,7 +402,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	bo = to_panfrost_bo(gem_obj);
 
-	mutex_lock(&pfdev->shrinker_lock);
 	mutex_lock(&bo->mappings.lock);
 	if (args->madv == PANFROST_MADV_DONTNEED) {
 		struct panfrost_gem_mapping *first;
@@ -430,17 +427,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
 
-	if (args->retained) {
-		if (args->madv == PANFROST_MADV_DONTNEED)
-			list_add_tail(&bo->base.madv_list,
-				      &pfdev->shrinker_list);
-		else if (args->madv == PANFROST_MADV_WILLNEED)
-			list_del_init(&bo->base.madv_list);
-	}
-
 out_unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
-	mutex_unlock(&pfdev->shrinker_lock);
 
 	drm_gem_object_put(gem_obj);
 	return ret;
@@ -571,9 +559,6 @@ static int panfrost_probe(struct platform_device *pdev)
 	ddev->dev_private = pfdev;
 	pfdev->ddev = ddev;
 
-	mutex_init(&pfdev->shrinker_lock);
-	INIT_LIST_HEAD(&pfdev->shrinker_list);
-
 	err = panfrost_device_init(pfdev);
 	if (err) {
 		if (err != -EPROBE_DEFER)
@@ -595,7 +580,7 @@ static int panfrost_probe(struct platform_device *pdev)
 	if (err < 0)
 		goto err_out1;
 
-	panfrost_gem_shrinker_init(ddev);
+	drm_gem_shmem_shrinker_register(ddev);
 
 	return 0;
 
@@ -613,8 +598,8 @@ static int panfrost_remove(struct platform_device *pdev)
 	struct panfrost_device *pfdev = platform_get_drvdata(pdev);
 	struct drm_device *ddev = pfdev->ddev;
 
+	drm_gem_shmem_shrinker_unregister(ddev);
 	drm_dev_unregister(ddev);
-	panfrost_gem_shrinker_cleanup(ddev);
 
 	pm_runtime_get_sync(pfdev->dev);
 	pm_runtime_disable(pfdev->dev);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 293e799e2fe8..b4a7ea7c8f00 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -19,16 +19,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
 	struct panfrost_device *pfdev = obj->dev->dev_private;
 
-	/*
-	 * Make sure the BO is no longer inserted in the shrinker list before
-	 * taking care of the destruction itself. If we don't do that we have a
-	 * race condition between this function and what's done in
-	 * panfrost_gem_shrinker_scan().
-	 */
-	mutex_lock(&pfdev->shrinker_lock);
-	list_del_init(&bo->base.madv_list);
-	mutex_unlock(&pfdev->shrinker_lock);
-
 	/*
 	 * If we still have mappings attached to the BO, there's a problem in
 	 * our refcounting.
@@ -195,6 +185,22 @@ static int panfrost_gem_pin(struct drm_gem_object *obj)
 	return drm_gem_shmem_pin(&bo->base);
 }
 
+static unsigned long panfrost_gem_purge(struct drm_gem_object *obj)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+
+	if (!mutex_trylock(&bo->mappings.lock))
+		return 0;
+
+	panfrost_gem_teardown_mappings_locked(bo);
+	drm_gem_shmem_purge_locked(shmem);
+
+	mutex_unlock(&bo->mappings.lock);
+
+	return obj->size >> PAGE_SHIFT;
+}
+
 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.free = panfrost_gem_free_object,
 	.open = panfrost_gem_open,
@@ -207,6 +213,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.vunmap = drm_gem_shmem_object_vunmap,
 	.mmap = drm_gem_shmem_object_mmap,
 	.vm_ops = &drm_gem_shmem_vm_ops,
+	.purge = panfrost_gem_purge,
 };
 
 /**
@@ -266,6 +273,9 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv,
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (!bo->is_heap)
+		drm_gem_shmem_set_purgeable(shmem);
+
 	return bo;
 }
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 8088d5fd8480..09da064f1c07 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -30,12 +30,6 @@ struct panfrost_gem_object {
 		struct mutex lock;
 	} mappings;
 
-	/*
-	 * Count the number of jobs referencing this BO so we don't let the
-	 * shrinker reclaim this object prematurely.
-	 */
-	atomic_t gpu_usecount;
-
 	bool noexec		:1;
 	bool is_heap		:1;
 };
@@ -84,7 +78,4 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
 void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping);
 void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo);
 
-void panfrost_gem_shrinker_init(struct drm_device *dev);
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev);
-
 #endif /* __PANFROST_GEM_H__ */
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
deleted file mode 100644
index 77e7cb6d1ae3..000000000000
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ /dev/null
@@ -1,122 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2019 Arm Ltd.
- *
- * Based on msm_gem_freedreno.c:
- * Copyright (C) 2016 Red Hat
- * Author: Rob Clark <robdclark@gmail.com>
- */
-
-#include <linux/list.h>
-
-#include <drm/drm_device.h>
-#include <drm/drm_gem_shmem_helper.h>
-
-#include "panfrost_device.h"
-#include "panfrost_gem.h"
-#include "panfrost_mmu.h"
-
-static unsigned long
-panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
-{
-	struct panfrost_device *pfdev =
-		container_of(shrinker, struct panfrost_device, shrinker);
-	struct drm_gem_shmem_object *shmem;
-	unsigned long count = 0;
-
-	if (!mutex_trylock(&pfdev->shrinker_lock))
-		return 0;
-
-	list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
-		if (drm_gem_shmem_is_purgeable(shmem))
-			count += shmem->base.size >> PAGE_SHIFT;
-	}
-
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	return count;
-}
-
-static bool panfrost_gem_purge(struct drm_gem_object *obj)
-{
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
-	bool ret = false;
-
-	if (atomic_read(&bo->gpu_usecount))
-		return false;
-
-	if (!mutex_trylock(&bo->mappings.lock))
-		return false;
-
-	if (!mutex_trylock(&shmem->pages_lock))
-		goto unlock_mappings;
-
-	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge_locked(&bo->base);
-	ret = true;
-
-	mutex_unlock(&shmem->pages_lock);
-
-unlock_mappings:
-	mutex_unlock(&bo->mappings.lock);
-	return ret;
-}
-
-static unsigned long
-panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
-{
-	struct panfrost_device *pfdev =
-		container_of(shrinker, struct panfrost_device, shrinker);
-	struct drm_gem_shmem_object *shmem, *tmp;
-	unsigned long freed = 0;
-
-	if (!mutex_trylock(&pfdev->shrinker_lock))
-		return SHRINK_STOP;
-
-	list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) {
-		if (freed >= sc->nr_to_scan)
-			break;
-		if (drm_gem_shmem_is_purgeable(shmem) &&
-		    panfrost_gem_purge(&shmem->base)) {
-			freed += shmem->base.size >> PAGE_SHIFT;
-			list_del_init(&shmem->madv_list);
-		}
-	}
-
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	if (freed > 0)
-		pr_info_ratelimited("Purging %lu bytes\n", freed << PAGE_SHIFT);
-
-	return freed;
-}
-
-/**
- * panfrost_gem_shrinker_init - Initialize panfrost shrinker
- * @dev: DRM device
- *
- * This function registers and sets up the panfrost shrinker.
- */
-void panfrost_gem_shrinker_init(struct drm_device *dev)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-	pfdev->shrinker.count_objects = panfrost_gem_shrinker_count;
-	pfdev->shrinker.scan_objects = panfrost_gem_shrinker_scan;
-	pfdev->shrinker.seeks = DEFAULT_SEEKS;
-	WARN_ON(register_shrinker(&pfdev->shrinker));
-}
-
-/**
- * panfrost_gem_shrinker_cleanup - Clean up panfrost shrinker
- * @dev: DRM device
- *
- * This function unregisters the panfrost shrinker.
- */
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-
-	if (pfdev->shrinker.nr_deferred) {
-		unregister_shrinker(&pfdev->shrinker);
-	}
-}
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index fda5871aebe3..bcf496b837ce 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -271,6 +271,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos,
 		dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE);
 }
 
+static int panfrost_objects_prepare(struct drm_gem_object **bos, int bo_count)
+{
+	struct panfrost_gem_object *bo;
+	int ret = 0;
+
+	while (!ret && bo_count--) {
+		bo = to_panfrost_bo(bos[bo_count]);
+		ret = bo->base.madv ? -ENOMEM : 0;
+	}
+
+	return ret;
+}
+
 int panfrost_job_push(struct panfrost_job *job)
 {
 	struct panfrost_device *pfdev = job->pfdev;
@@ -282,6 +295,10 @@ int panfrost_job_push(struct panfrost_job *job)
 	if (ret)
 		return ret;
 
+	ret = panfrost_objects_prepare(job->bos, job->bo_count);
+	if (ret)
+		goto unlock;
+
 	mutex_lock(&pfdev->sched_lock);
 	drm_sched_job_arm(&job->base);
 
@@ -323,7 +340,6 @@ static void panfrost_job_cleanup(struct kref *ref)
 			if (!job->mappings[i])
 				break;
 
-			atomic_dec(&job->mappings[i]->obj->gpu_usecount);
 			panfrost_gem_mapping_put(job->mappings[i]);
 		}
 		kvfree(job->mappings);
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 14/15] drm/shmem-helper: Make drm_gem_shmem_get_pages() private
  2022-04-11 21:59 ` Dmitry Osipenko
@ 2022-04-11 21:59   ` Dmitry Osipenko
  -1 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

The VirtIO-GPU driver was the only user of drm_gem_shmem_get_pages(),
and it now uses drm_gem_shmem_get_pages_sgt(). Make get_pages() private
to drm_gem_shmem_helper.
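
Any out-of-tree user of the removed export would switch to the sgt-based
helper instead (a sketch; the returned table is cached and owned by the
shmem object, so the caller must not free it):

	struct sg_table *sgt;

	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);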

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 3 +--
 include/drm/drm_gem_shmem_helper.h     | 1 -
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 7e4851363d14..8e31056575e3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -538,7 +538,7 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	int ret;
 
@@ -555,7 +555,6 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 
 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_get_pages);
 
 static void drm_gem_shmem_get_pages_no_fail(struct drm_gem_shmem_object *shmem)
 {
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index dc1c2db7d095..7284335ea2c9 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -144,7 +144,6 @@ struct drm_gem_shmem_object {
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
 
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v3 15/15] drm/shmem-helper: Remove drm_gem_shmem_purge()
  2022-04-11 21:59 ` Dmitry Osipenko
@ 2022-04-11 21:59   ` Dmitry Osipenko
  -1 siblings, 0 replies; 41+ messages in thread
From: Dmitry Osipenko @ 2022-04-11 21:59 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark
  Cc: linux-kernel, virtualization, Gustavo Padovan, dri-devel,
	Dmitry Osipenko, Dmitry Osipenko

drm_gem_shmem_purge() was added back in 2019 and has had no users since
then. GEM purging is now managed by the generic shrinker, and only the
"locked" variant of the function is needed. Remove the obsolete
function.
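
A caller that does not already hold the reservation lock can open-code
the removed wrapper; the equivalent sequence, matching the deleted body
below, is:

	if (dma_resv_trylock(shmem->base.resv)) {
		drm_gem_shmem_purge_locked(shmem);
		dma_resv_unlock(shmem->base.resv);
	}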

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 11 -----------
 include/drm/drm_gem_shmem_helper.h     |  2 --
 2 files changed, 13 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8e31056575e3..89c64053247c 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -992,17 +992,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
 
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
-{
-	if (!dma_resv_trylock(shmem->base.resv))
-		return false;
-	drm_gem_shmem_purge_locked(shmem);
-	dma_resv_unlock(shmem->base.resv);
-
-	return true;
-}
-EXPORT_SYMBOL(drm_gem_shmem_purge);
-
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
  * @file: DRM file structure to create the dumb buffer for
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 7284335ea2c9..197d08452974 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -179,8 +179,6 @@ int drm_gem_shmem_swap_in_pages_locked(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem);
 
 void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem);
-
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
  2022-04-11 21:59   ` Dmitry Osipenko
  (?)
@ 2022-04-12  3:35     ` kernel test robot
  -1 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2022-04-12  3:35 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig,
	Rob Clark
  Cc: kbuild-all, linux-kernel, virtualization, Gustavo Padovan,
	dri-devel, Dmitry Osipenko

Hi Dmitry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on next-20220411]
[cannot apply to drm/drm-next v5.18-rc2 v5.18-rc1 v5.17 v5.18-rc2]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
base:    d12d7e1cfe38e0c36d28c7a9fbbc436ad0d17c14
config: arc-buildonly-randconfig-r002-20220411 (https://download.01.org/0day-ci/archive/20220412/202204121135.zXHzC28U-lkp@intel.com/config)
compiler: arc-elf-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/683ba8a9d72ba7770a61a9266a2b33949f3874f2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
        git checkout 683ba8a9d72ba7770a61a9266a2b33949f3874f2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arc SHELL=/bin/bash drivers/gpu/drm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/gpu/drm/drm_gem_shmem_helper.c: In function 'drm_gem_shmem_shrinker_run_objects_scan':
>> drivers/gpu/drm/drm_gem_shmem_helper.c:1358:56: warning: implicit conversion from 'enum <anonymous>' to 'enum dma_resv_usage' [-Wenum-conversion]
    1358 |                 if (!dma_resv_test_signaled(obj->resv, true))
         |                                                        ^~~~


vim +1358 drivers/gpu/drm/drm_gem_shmem_helper.c

  1299	
  1300	static unsigned long
  1301	drm_gem_shmem_shrinker_run_objects_scan(struct shrinker *shrinker,
  1302						unsigned long nr_to_scan,
  1303						bool *lock_contention,
  1304						bool evict)
  1305	{
  1306		struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
  1307		struct drm_gem_shmem_object *shmem;
  1308		struct list_head still_in_list;
  1309		struct drm_gem_object *obj;
  1310		unsigned long freed = 0;
  1311		struct list_head *lru;
  1312		size_t page_count;
  1313	
  1314		INIT_LIST_HEAD(&still_in_list);
  1315	
  1316		mutex_lock(&gem_shrinker->lock);
  1317	
  1318		if (evict)
  1319			lru = &gem_shrinker->lru_evictable;
  1320		else
  1321			lru = &gem_shrinker->lru_purgeable;
  1322	
  1323		while (freed < nr_to_scan) {
  1324			shmem = list_first_entry_or_null(lru, typeof(*shmem), madv_list);
  1325			if (!shmem)
  1326				break;
  1327	
  1328			obj = &shmem->base;
  1329			page_count = obj->size >> PAGE_SHIFT;
  1330			list_move_tail(&shmem->madv_list, &still_in_list);
  1331	
  1332			if (evict && get_nr_swap_pages() < page_count)
  1333				continue;
  1334	
  1335			/*
  1336			 * If it's in the process of being freed, gem_object->free()
  1337			 * may be blocked on lock waiting to remove it.  So just
  1338			 * skip it.
  1339			 */
  1340			if (!kref_get_unless_zero(&obj->refcount))
  1341				continue;
  1342	
  1343			mutex_unlock(&gem_shrinker->lock);
  1344	
  1345			/* prevent racing with job-submission code paths */
  1346			if (!dma_resv_trylock(obj->resv)) {
  1347				*lock_contention |= true;
  1348				goto shrinker_lock;
  1349			}
  1350	
  1351			/* prevent racing with the dma-buf exporting */
  1352			if (!mutex_trylock(&gem_shrinker->dev->object_name_lock)) {
  1353				*lock_contention |= true;
  1354				goto resv_unlock;
  1355			}
  1356	
  1357			/* check whether h/w uses this object */
> 1358			if (!dma_resv_test_signaled(obj->resv, true))
  1359				goto object_name_unlock;
  1360	
  1361			/* GEM may've become unpurgeable while shrinker was unlocked */
  1362			if (evict) {
  1363				if (!drm_gem_shmem_is_evictable(shmem))
  1364					goto object_name_unlock;
  1365			} else {
  1366				if (!drm_gem_shmem_is_purgeable(shmem))
  1367					goto object_name_unlock;
  1368			}
  1369	
  1370			if (evict)
  1371				freed += obj->funcs->evict(obj);
  1372			else
  1373				freed += obj->funcs->purge(obj);
  1374	object_name_unlock:
  1375			mutex_unlock(&gem_shrinker->dev->object_name_lock);
  1376	resv_unlock:
  1377			dma_resv_unlock(obj->resv);
  1378	shrinker_lock:
  1379			drm_gem_object_put(&shmem->base);
  1380			mutex_lock(&gem_shrinker->lock);
  1381		}
  1382	
  1383		list_splice_tail(&still_in_list, lru);
  1384	
  1385		mutex_unlock(&gem_shrinker->lock);
  1386	
  1387		return freed;
  1388	}
  1389	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
@ 2022-04-12  3:35     ` kernel test robot
  0 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2022-04-12  3:35 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig,
	Rob Clark
  Cc: kbuild-all, linux-kernel, dri-devel, virtualization,
	Gustavo Padovan, Dmitry Osipenko

Hi Dmitry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on next-20220411]
[cannot apply to drm/drm-next v5.18-rc2 v5.18-rc1 v5.17 v5.18-rc2]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
base:    d12d7e1cfe38e0c36d28c7a9fbbc436ad0d17c14
config: arc-buildonly-randconfig-r002-20220411 (https://download.01.org/0day-ci/archive/20220412/202204121135.zXHzC28U-lkp@intel.com/config)
compiler: arc-elf-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/683ba8a9d72ba7770a61a9266a2b33949f3874f2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
        git checkout 683ba8a9d72ba7770a61a9266a2b33949f3874f2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arc SHELL=/bin/bash drivers/gpu/drm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/gpu/drm/drm_gem_shmem_helper.c: In function 'drm_gem_shmem_shrinker_run_objects_scan':
>> drivers/gpu/drm/drm_gem_shmem_helper.c:1358:56: warning: implicit conversion from 'enum <anonymous>' to 'enum dma_resv_usage' [-Wenum-conversion]
    1358 |                 if (!dma_resv_test_signaled(obj->resv, true))
         |                                                        ^~~~


vim +1358 drivers/gpu/drm/drm_gem_shmem_helper.c

  1299	
  1300	static unsigned long
  1301	drm_gem_shmem_shrinker_run_objects_scan(struct shrinker *shrinker,
  1302						unsigned long nr_to_scan,
  1303						bool *lock_contention,
  1304						bool evict)
  1305	{
  1306		struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
  1307		struct drm_gem_shmem_object *shmem;
  1308		struct list_head still_in_list;
  1309		struct drm_gem_object *obj;
  1310		unsigned long freed = 0;
  1311		struct list_head *lru;
  1312		size_t page_count;
  1313	
  1314		INIT_LIST_HEAD(&still_in_list);
  1315	
  1316		mutex_lock(&gem_shrinker->lock);
  1317	
  1318		if (evict)
  1319			lru = &gem_shrinker->lru_evictable;
  1320		else
  1321			lru = &gem_shrinker->lru_purgeable;
  1322	
  1323		while (freed < nr_to_scan) {
  1324			shmem = list_first_entry_or_null(lru, typeof(*shmem), madv_list);
  1325			if (!shmem)
  1326				break;
  1327	
  1328			obj = &shmem->base;
  1329			page_count = obj->size >> PAGE_SHIFT;
  1330			list_move_tail(&shmem->madv_list, &still_in_list);
  1331	
  1332			if (evict && get_nr_swap_pages() < page_count)
  1333				continue;
  1334	
  1335			/*
  1336			 * If it's in the process of being freed, gem_object->free()
  1337			 * may be blocked on lock waiting to remove it.  So just
  1338			 * skip it.
  1339			 */
  1340			if (!kref_get_unless_zero(&obj->refcount))
  1341				continue;
  1342	
  1343			mutex_unlock(&gem_shrinker->lock);
  1344	
  1345			/* prevent racing with job-submission code paths */
  1346			if (!dma_resv_trylock(obj->resv)) {
  1347				*lock_contention |= true;
  1348				goto shrinker_lock;
  1349			}
  1350	
  1351			/* prevent racing with the dma-buf exporting */
  1352			if (!mutex_trylock(&gem_shrinker->dev->object_name_lock)) {
  1353				*lock_contention |= true;
  1354				goto resv_unlock;
  1355			}
  1356	
  1357			/* check whether h/w uses this object */
> 1358			if (!dma_resv_test_signaled(obj->resv, true))
  1359				goto object_name_unlock;
  1360	
  1361			/* GEM may've become unpurgeable while shrinker was unlocked */
  1362			if (evict) {
  1363				if (!drm_gem_shmem_is_evictable(shmem))
  1364					goto object_name_unlock;
  1365			} else {
  1366				if (!drm_gem_shmem_is_purgeable(shmem))
  1367					goto object_name_unlock;
  1368			}
  1369	
  1370			if (evict)
  1371				freed += obj->funcs->evict(obj);
  1372			else
  1373				freed += obj->funcs->purge(obj);
  1374	object_name_unlock:
  1375			mutex_unlock(&gem_shrinker->dev->object_name_lock);
  1376	resv_unlock:
  1377			dma_resv_unlock(obj->resv);
  1378	shrinker_lock:
  1379			drm_gem_object_put(&shmem->base);
  1380			mutex_lock(&gem_shrinker->lock);
  1381		}
  1382	
  1383		list_splice_tail(&still_in_list, lru);
  1384	
  1385		mutex_unlock(&gem_shrinker->lock);
  1386	
  1387		return freed;
  1388	}
  1389	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
@ 2022-04-12  3:35     ` kernel test robot
  0 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2022-04-12  3:35 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig,
	Rob Clark
  Cc: kbuild-all, linux-kernel, dri-devel, virtualization,
	Gustavo Padovan, Dmitry Osipenko

Hi Dmitry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on next-20220411]
[cannot apply to drm/drm-next v5.18-rc2 v5.18-rc1 v5.17 v5.18-rc2]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
base:    d12d7e1cfe38e0c36d28c7a9fbbc436ad0d17c14
config: arc-buildonly-randconfig-r002-20220411 (https://download.01.org/0day-ci/archive/20220412/202204121135.zXHzC28U-lkp@intel.com/config)
compiler: arc-elf-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/683ba8a9d72ba7770a61a9266a2b33949f3874f2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
        git checkout 683ba8a9d72ba7770a61a9266a2b33949f3874f2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arc SHELL=/bin/bash drivers/gpu/drm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/gpu/drm/drm_gem_shmem_helper.c: In function 'drm_gem_shmem_shrinker_run_objects_scan':
>> drivers/gpu/drm/drm_gem_shmem_helper.c:1358:56: warning: implicit conversion from 'enum <anonymous>' to 'enum dma_resv_usage' [-Wenum-conversion]
    1358 |                 if (!dma_resv_test_signaled(obj->resv, true))
         |                                                        ^~~~


vim +1358 drivers/gpu/drm/drm_gem_shmem_helper.c

  1299	
  1300	static unsigned long
  1301	drm_gem_shmem_shrinker_run_objects_scan(struct shrinker *shrinker,
  1302						unsigned long nr_to_scan,
  1303						bool *lock_contention,
  1304						bool evict)
  1305	{
  1306		struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
  1307		struct drm_gem_shmem_object *shmem;
  1308		struct list_head still_in_list;
  1309		struct drm_gem_object *obj;
  1310		unsigned long freed = 0;
  1311		struct list_head *lru;
  1312		size_t page_count;
  1313	
  1314		INIT_LIST_HEAD(&still_in_list);
  1315	
  1316		mutex_lock(&gem_shrinker->lock);
  1317	
  1318		if (evict)
  1319			lru = &gem_shrinker->lru_evictable;
  1320		else
  1321			lru = &gem_shrinker->lru_purgeable;
  1322	
  1323		while (freed < nr_to_scan) {
  1324			shmem = list_first_entry_or_null(lru, typeof(*shmem), madv_list);
  1325			if (!shmem)
  1326				break;
  1327	
  1328			obj = &shmem->base;
  1329			page_count = obj->size >> PAGE_SHIFT;
  1330			list_move_tail(&shmem->madv_list, &still_in_list);
  1331	
  1332			if (evict && get_nr_swap_pages() < page_count)
  1333				continue;
  1334	
  1335			/*
  1336			 * If it's in the process of being freed, gem_object->free()
  1337			 * may be blocked on lock waiting to remove it.  So just
  1338			 * skip it.
  1339			 */
  1340			if (!kref_get_unless_zero(&obj->refcount))
  1341				continue;
  1342	
  1343			mutex_unlock(&gem_shrinker->lock);
  1344	
  1345			/* prevent racing with job-submission code paths */
  1346			if (!dma_resv_trylock(obj->resv)) {
  1347				*lock_contention |= true;
  1348				goto shrinker_lock;
  1349			}
  1350	
  1351			/* prevent racing with the dma-buf exporting */
  1352			if (!mutex_trylock(&gem_shrinker->dev->object_name_lock)) {
  1353				*lock_contention |= true;
  1354				goto resv_unlock;
  1355			}
  1356	
  1357			/* check whether h/w uses this object */
> 1358			if (!dma_resv_test_signaled(obj->resv, true))
  1359				goto object_name_unlock;
  1360	
  1361			/* GEM may've become unpurgeable while shrinker was unlocked */
  1362			if (evict) {
  1363				if (!drm_gem_shmem_is_evictable(shmem))
  1364					goto object_name_unlock;
  1365			} else {
  1366				if (!drm_gem_shmem_is_purgeable(shmem))
  1367					goto object_name_unlock;
  1368			}
  1369	
  1370			if (evict)
  1371				freed += obj->funcs->evict(obj);
  1372			else
  1373				freed += obj->funcs->purge(obj);
  1374	object_name_unlock:
  1375			mutex_unlock(&gem_shrinker->dev->object_name_lock);
  1376	resv_unlock:
  1377			dma_resv_unlock(obj->resv);
  1378	shrinker_lock:
  1379			drm_gem_object_put(&shmem->base);
  1380			mutex_lock(&gem_shrinker->lock);
  1381		}
  1382	
  1383		list_splice_tail(&still_in_list, lru);
  1384	
  1385		mutex_unlock(&gem_shrinker->lock);
  1386	
  1387		return freed;
  1388	}
  1389	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
  2022-04-11 21:59   ` Dmitry Osipenko
  (?)
@ 2022-04-12  7:25     ` kernel test robot
  -1 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig,
	Rob Clark
  Cc: llvm, kbuild-all, linux-kernel, virtualization, Gustavo Padovan,
	dri-devel, Dmitry Osipenko

Hi Dmitry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on next-20220411]
[cannot apply to drm/drm-next v5.18-rc2 v5.18-rc1 v5.17 v5.18-rc2]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
base:    d12d7e1cfe38e0c36d28c7a9fbbc436ad0d17c14
config: i386-randconfig-a005-20220411 (https://download.01.org/0day-ci/archive/20220412/202204121523.qVMxOvZg-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project fe2478d44e4f7f191c43fef629ac7a23d0251e72)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/683ba8a9d72ba7770a61a9266a2b33949f3874f2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
        git checkout 683ba8a9d72ba7770a61a9266a2b33949f3874f2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash drivers/gpu/drm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/drm_gem_shmem_helper.c:289:11: warning: variable 'new_state' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
           else if (shmem->madv < 0)
                    ^~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:292:46: note: uninitialized use occurs here
           drm_gem_shmem_set_pages_state_locked(shmem, new_state);
                                                       ^~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:289:7: note: remove the 'if' if its condition is always true
           else if (shmem->madv < 0)
                ^~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:278:2: note: variable 'new_state' is declared here
           enum drm_gem_shmem_pages_state new_state;
           ^
   1 warning generated.


vim +289 drivers/gpu/drm/drm_gem_shmem_helper.c

   273	
   274	static void drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
   275	{
   276		struct drm_gem_object *obj = &shmem->base;
   277		struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
   278		enum drm_gem_shmem_pages_state new_state;
   279	
   280		if (!gem_shrinker || obj->import_attach)
   281			return;
   282	
   283		mutex_lock(&gem_shrinker->lock);
   284	
   285		if (!shmem->madv)
   286			new_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
   287		else if (shmem->madv > 0)
   288			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
 > 289		else if (shmem->madv < 0)
   290			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGED;
   291	
   292		drm_gem_shmem_set_pages_state_locked(shmem, new_state);
   293	
   294		mutex_unlock(&gem_shrinker->lock);
   295	}
   296	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
@ 2022-04-12  7:25     ` kernel test robot
  0 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig,
	Rob Clark
  Cc: kbuild-all, llvm, linux-kernel, dri-devel, virtualization,
	Gustavo Padovan, Dmitry Osipenko

Hi Dmitry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on next-20220411]
[cannot apply to drm/drm-next v5.18-rc2 v5.18-rc1 v5.17 v5.18-rc2]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
base:    d12d7e1cfe38e0c36d28c7a9fbbc436ad0d17c14
config: i386-randconfig-a005-20220411 (https://download.01.org/0day-ci/archive/20220412/202204121523.qVMxOvZg-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project fe2478d44e4f7f191c43fef629ac7a23d0251e72)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/683ba8a9d72ba7770a61a9266a2b33949f3874f2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
        git checkout 683ba8a9d72ba7770a61a9266a2b33949f3874f2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash drivers/gpu/drm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/drm_gem_shmem_helper.c:289:11: warning: variable 'new_state' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
           else if (shmem->madv < 0)
                    ^~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:292:46: note: uninitialized use occurs here
           drm_gem_shmem_set_pages_state_locked(shmem, new_state);
                                                       ^~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:289:7: note: remove the 'if' if its condition is always true
           else if (shmem->madv < 0)
                ^~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:278:2: note: variable 'new_state' is declared here
           enum drm_gem_shmem_pages_state new_state;
           ^
   1 warning generated.


vim +289 drivers/gpu/drm/drm_gem_shmem_helper.c

   273	
   274	static void drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
   275	{
   276		struct drm_gem_object *obj = &shmem->base;
   277		struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
   278		enum drm_gem_shmem_pages_state new_state;
   279	
   280		if (!gem_shrinker || obj->import_attach)
   281			return;
   282	
   283		mutex_lock(&gem_shrinker->lock);
   284	
   285		if (!shmem->madv)
   286			new_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
   287		else if (shmem->madv > 0)
   288			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
 > 289		else if (shmem->madv < 0)
   290			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGED;
   291	
   292		drm_gem_shmem_set_pages_state_locked(shmem, new_state);
   293	
   294		mutex_unlock(&gem_shrinker->lock);
   295	}
   296	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
@ 2022-04-12  7:25     ` kernel test robot
  0 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig,
	Rob Clark
  Cc: kbuild-all, llvm, linux-kernel, dri-devel, virtualization,
	Gustavo Padovan, Dmitry Osipenko

Hi Dmitry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on next-20220411]
[cannot apply to drm/drm-next v5.18-rc2 v5.18-rc1 v5.17 v5.18-rc2]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
base:    d12d7e1cfe38e0c36d28c7a9fbbc436ad0d17c14
config: i386-randconfig-a005-20220411 (https://download.01.org/0day-ci/archive/20220412/202204121523.qVMxOvZg-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project fe2478d44e4f7f191c43fef629ac7a23d0251e72)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/683ba8a9d72ba7770a61a9266a2b33949f3874f2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
        git checkout 683ba8a9d72ba7770a61a9266a2b33949f3874f2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash drivers/gpu/drm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/drm_gem_shmem_helper.c:289:11: warning: variable 'new_state' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
           else if (shmem->madv < 0)
                    ^~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:292:46: note: uninitialized use occurs here
           drm_gem_shmem_set_pages_state_locked(shmem, new_state);
                                                       ^~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:289:7: note: remove the 'if' if its condition is always true
           else if (shmem->madv < 0)
                ^~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:278:2: note: variable 'new_state' is declared here
           enum drm_gem_shmem_pages_state new_state;
           ^
   1 warning generated.


vim +289 drivers/gpu/drm/drm_gem_shmem_helper.c

   273	
   274	static void drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
   275	{
   276		struct drm_gem_object *obj = &shmem->base;
   277		struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
   278		enum drm_gem_shmem_pages_state new_state;
   279	
   280		if (!gem_shrinker || obj->import_attach)
   281			return;
   282	
   283		mutex_lock(&gem_shrinker->lock);
   284	
   285		if (!shmem->madv)
   286			new_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
   287		else if (shmem->madv > 0)
   288			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
 > 289		else if (shmem->madv < 0)
   290			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGED;
   291	
   292		drm_gem_shmem_set_pages_state_locked(shmem, new_state);
   293	
   294		mutex_unlock(&gem_shrinker->lock);
   295	}
   296	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
  2022-04-11 21:59   ` Dmitry Osipenko
  (?)
@ 2022-04-12  7:25     ` kernel test robot
  -1 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig,
	Rob Clark
  Cc: llvm, kbuild-all, linux-kernel, virtualization, Gustavo Padovan,
	dri-devel, Dmitry Osipenko

Hi Dmitry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on next-20220411]
[cannot apply to drm/drm-next v5.18-rc2 v5.18-rc1 v5.17 v5.18-rc2]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
base:    d12d7e1cfe38e0c36d28c7a9fbbc436ad0d17c14
config: hexagon-randconfig-r045-20220411 (https://download.01.org/0day-ci/archive/20220412/202204121504.gLR3FHQe-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project fe2478d44e4f7f191c43fef629ac7a23d0251e72)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/683ba8a9d72ba7770a61a9266a2b33949f3874f2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
        git checkout 683ba8a9d72ba7770a61a9266a2b33949f3874f2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon SHELL=/bin/bash drivers/gpu/drm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/drm_gem_shmem_helper.c:289:11: warning: variable 'new_state' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
           else if (shmem->madv < 0)
                    ^~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:292:46: note: uninitialized use occurs here
           drm_gem_shmem_set_pages_state_locked(shmem, new_state);
                                                       ^~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:289:7: note: remove the 'if' if its condition is always true
           else if (shmem->madv < 0)
                ^~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:278:2: note: variable 'new_state' is declared here
           enum drm_gem_shmem_pages_state new_state;
           ^
   1 warning generated.


vim +289 drivers/gpu/drm/drm_gem_shmem_helper.c

   273	
   274	static void drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
   275	{
   276		struct drm_gem_object *obj = &shmem->base;
   277		struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
   278		enum drm_gem_shmem_pages_state new_state;
   279	
   280		if (!gem_shrinker || obj->import_attach)
   281			return;
   282	
   283		mutex_lock(&gem_shrinker->lock);
   284	
   285		if (!shmem->madv)
   286			new_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
   287		else if (shmem->madv > 0)
   288			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
 > 289		else if (shmem->madv < 0)
   290			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGED;
   291	
   292		drm_gem_shmem_set_pages_state_locked(shmem, new_state);
   293	
   294		mutex_unlock(&gem_shrinker->lock);
   295	}
   296	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
@ 2022-04-12  7:25     ` kernel test robot
  0 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig,
	Rob Clark
  Cc: kbuild-all, llvm, linux-kernel, dri-devel, virtualization,
	Gustavo Padovan, Dmitry Osipenko

Hi Dmitry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on next-20220411]
[cannot apply to drm/drm-next v5.18-rc2 v5.18-rc1 v5.17 v5.18-rc2]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
base:    d12d7e1cfe38e0c36d28c7a9fbbc436ad0d17c14
config: hexagon-randconfig-r045-20220411 (https://download.01.org/0day-ci/archive/20220412/202204121504.gLR3FHQe-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project fe2478d44e4f7f191c43fef629ac7a23d0251e72)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/683ba8a9d72ba7770a61a9266a2b33949f3874f2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
        git checkout 683ba8a9d72ba7770a61a9266a2b33949f3874f2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon SHELL=/bin/bash drivers/gpu/drm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/drm_gem_shmem_helper.c:289:11: warning: variable 'new_state' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
           else if (shmem->madv < 0)
                    ^~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:292:46: note: uninitialized use occurs here
           drm_gem_shmem_set_pages_state_locked(shmem, new_state);
                                                       ^~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:289:7: note: remove the 'if' if its condition is always true
           else if (shmem->madv < 0)
                ^~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:278:2: note: variable 'new_state' is declared here
           enum drm_gem_shmem_pages_state new_state;
           ^
   1 warning generated.


vim +289 drivers/gpu/drm/drm_gem_shmem_helper.c

   273	
   274	static void drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
   275	{
   276		struct drm_gem_object *obj = &shmem->base;
   277		struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
   278		enum drm_gem_shmem_pages_state new_state;
   279	
   280		if (!gem_shrinker || obj->import_attach)
   281			return;
   282	
   283		mutex_lock(&gem_shrinker->lock);
   284	
   285		if (!shmem->madv)
   286			new_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
   287		else if (shmem->madv > 0)
   288			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
 > 289		else if (shmem->madv < 0)
   290			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGED;
   291	
   292		drm_gem_shmem_set_pages_state_locked(shmem, new_state);
   293	
   294		mutex_unlock(&gem_shrinker->lock);
   295	}
   296	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
@ 2022-04-12  7:25     ` kernel test robot
  0 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig,
	Rob Clark
  Cc: kbuild-all, llvm, linux-kernel, dri-devel, virtualization,
	Gustavo Padovan, Dmitry Osipenko

Hi Dmitry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on next-20220411]
[cannot apply to drm/drm-next v5.18-rc2 v5.18-rc1 v5.17 v5.18-rc2]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
base:    d12d7e1cfe38e0c36d28c7a9fbbc436ad0d17c14
config: hexagon-randconfig-r045-20220411 (https://download.01.org/0day-ci/archive/20220412/202204121504.gLR3FHQe-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project fe2478d44e4f7f191c43fef629ac7a23d0251e72)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/683ba8a9d72ba7770a61a9266a2b33949f3874f2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220412-060325
        git checkout 683ba8a9d72ba7770a61a9266a2b33949f3874f2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon SHELL=/bin/bash drivers/gpu/drm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/drm_gem_shmem_helper.c:289:11: warning: variable 'new_state' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
           else if (shmem->madv < 0)
                    ^~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:292:46: note: uninitialized use occurs here
           drm_gem_shmem_set_pages_state_locked(shmem, new_state);
                                                       ^~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:289:7: note: remove the 'if' if its condition is always true
           else if (shmem->madv < 0)
                ^~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/drm_gem_shmem_helper.c:278:2: note: variable 'new_state' is declared here
           enum drm_gem_shmem_pages_state new_state;
           ^
   1 warning generated.


vim +289 drivers/gpu/drm/drm_gem_shmem_helper.c

   273	
   274	static void drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
   275	{
   276		struct drm_gem_object *obj = &shmem->base;
   277		struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
   278		enum drm_gem_shmem_pages_state new_state;
   279	
   280		if (!gem_shrinker || obj->import_attach)
   281			return;
   282	
   283		mutex_lock(&gem_shrinker->lock);
   284	
   285		if (!shmem->madv)
   286			new_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE;
   287		else if (shmem->madv > 0)
   288			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
 > 289		else if (shmem->madv < 0)
   290			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGED;
   291	
   292		drm_gem_shmem_set_pages_state_locked(shmem, new_state);
   293	
   294		mutex_unlock(&gem_shrinker->lock);
   295	}
   296	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2022-04-12  7:26 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-11 21:59 [PATCH v3 00/15] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
2022-04-11 21:59 ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 01/15] drm/virtio: Correct drm_gem_shmem_get_sg_table() error handling Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 02/15] drm/virtio: Check whether transferred 2D BO is shmem Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 03/15] drm/virtio: Unlock GEM reservations on virtio_gpu_object_shmem_init() error Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 04/15] drm/virtio: Unlock reservations on dma_resv_reserve_fences() error Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 05/15] drm/virtio: Use appropriate atomic state in virtio_gpu_plane_cleanup_fb() Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 06/15] drm/virtio: Simplify error handling of virtio_gpu_object_create() Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 07/15] drm/virtio: Improve DMA API usage for shmem BOs Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 08/15] drm/virtio: Use dev_is_pci() Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 09/15] drm/shmem-helper: Correct doc-comment of drm_gem_shmem_get_sg_table() Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 10/15] drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-12  3:35   ` kernel test robot
2022-04-12  3:35     ` kernel test robot
2022-04-12  3:35     ` kernel test robot
2022-04-12  7:25   ` kernel test robot
2022-04-12  7:25     ` kernel test robot
2022-04-12  7:25     ` kernel test robot
2022-04-12  7:25   ` kernel test robot
2022-04-12  7:25     ` kernel test robot
2022-04-12  7:25     ` kernel test robot
2022-04-11 21:59 ` [PATCH v3 12/15] drm/virtio: Support memory shrinking Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 13/15] drm/panfrost: Switch to generic memory shrinker Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 14/15] drm/shmem-helper: Make drm_gem_shmem_get_pages() private Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko
2022-04-11 21:59 ` [PATCH v3 15/15] drm/shmem-helper: Remove drm_gem_shmem_purge() Dmitry Osipenko
2022-04-11 21:59   ` Dmitry Osipenko

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.