From: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
To: <dri-devel@lists.freedesktop.org>, <amd-gfx@lists.freedesktop.org>
Cc: ckoenig.leichtzumerken@gmail.com,
	"Andrey Grodzovsky" <andrey.grodzovsky@amd.com>,
	"Christian König" <christian.koenig@amd.com>
Subject: [PATCH v2 1/4] drm/ttm: Create pinned list
Date: Thu, 26 Aug 2021 13:27:05 -0400
Message-ID: <20210826172708.229134-2-andrey.grodzovsky@amd.com>
In-Reply-To: <20210826172708.229134-1-andrey.grodzovsky@amd.com>

This list will be used to capture all non-VRAM BOs that are not on an
LRU list, so that when the device is hot-unplugged we can iterate the
list and unmap their DMA mappings before the device is removed.
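
For illustration only (not part of this patch): a minimal sketch of how
a driver could walk the new pinned list on hot-unplug to tear down the
DMA mappings of TT BOs. It only relies on the structures touched by
this patch plus ttm_tt_unpopulate(); the helper name
example_clear_dma_mappings(), the locking and the bare iteration below
are assumptions for clarity, not the implementation this series adds.

	#include <drm/ttm/ttm_bo_driver.h>
	#include <drm/ttm/ttm_device.h>
	#include <drm/ttm/ttm_placement.h>
	#include <drm/ttm/ttm_tt.h>

	static void example_clear_dma_mappings(struct ttm_device *bdev)
	{
		struct ttm_resource_manager *man =
			ttm_manager_type(bdev, TTM_PL_TT);
		struct ttm_buffer_object *bo;

		spin_lock(&bdev->lru_lock);
		while (!list_empty(&man->pinned)) {
			bo = list_first_entry(&man->pinned,
					      struct ttm_buffer_object, lru);
			/* Take the BO off the pinned list before touching it. */
			list_del_init(&bo->lru);
			spin_unlock(&bdev->lru_lock);

			/* Unpopulating the TT unmaps its DMA pages. */
			if (bo->ttm)
				ttm_tt_unpopulate(bo->bdev, bo->ttm);

			spin_lock(&bdev->lru_lock);
		}
		spin_unlock(&bdev->lru_lock);
	}

A real user would hook something like this into the driver's PCI remove
path; BO reference counting and reservation locking are deliberately
left out of the sketch.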

v2:
Rename the function to ttm_bo_move_to_pinned.
Keep deleting BOs from the LRU in the new function
if they have no resource struct assigned to them.

Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Suggested-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c       | 30 ++++++++++++++++++++++++++----
 drivers/gpu/drm/ttm/ttm_resource.c |  1 +
 include/drm/ttm/ttm_resource.h     |  1 +
 3 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 1b950b45cf4b..64594819e9e7 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -69,7 +69,29 @@ static void ttm_bo_mem_space_debug(struct ttm_buffer_object *bo,
 	}
 }
 
-static void ttm_bo_del_from_lru(struct ttm_buffer_object *bo)
+static inline void ttm_bo_move_to_pinned_or_del(struct ttm_buffer_object *bo)
+{
+	struct ttm_device *bdev = bo->bdev;
+	struct ttm_resource_manager *man = NULL;
+
+	if (bo->resource)
+		man = ttm_manager_type(bdev, bo->resource->mem_type);
+
+	/*
+	 * Some BOs might be in a transient state where they don't belong
+	 * to any domain at the moment; simply remove them from whatever
+	 * LRU list they are still on, to keep the previous behaviour
+	 */
+	if (man && man->use_tt)
+		list_move_tail(&bo->lru, &man->pinned);
+	else
+		list_del_init(&bo->lru);
+
+	if (bdev->funcs->del_from_lru_notify)
+		bdev->funcs->del_from_lru_notify(bo);
+}
+
+static inline void ttm_bo_del_from_lru(struct ttm_buffer_object *bo)
 {
 	struct ttm_device *bdev = bo->bdev;
 
@@ -98,7 +120,7 @@ void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
 		dma_resv_assert_held(bo->base.resv);
 
 	if (bo->pin_count) {
-		ttm_bo_del_from_lru(bo);
+		ttm_bo_move_to_pinned_or_del(bo);
 		return;
 	}
 
@@ -339,7 +361,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
 		return ret;
 	}
 
-	ttm_bo_del_from_lru(bo);
+	ttm_bo_move_to_pinned_or_del(bo);
 	list_del_init(&bo->ddestroy);
 	spin_unlock(&bo->bdev->lru_lock);
 	ttm_bo_cleanup_memtype_use(bo);
@@ -1154,7 +1176,7 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx,
 		return 0;
 	}
 
-	ttm_bo_del_from_lru(bo);
+	ttm_bo_move_to_pinned_or_del(bo);
 	/* TODO: Cleanup the locking */
 	spin_unlock(&bo->bdev->lru_lock);
 
diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index 2431717376e7..91165f77fe0e 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -85,6 +85,7 @@ void ttm_resource_manager_init(struct ttm_resource_manager *man,
 
 	for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i)
 		INIT_LIST_HEAD(&man->lru[i]);
+	INIT_LIST_HEAD(&man->pinned);
 	man->move = NULL;
 }
 EXPORT_SYMBOL(ttm_resource_manager_init);
diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
index 140b6b9a8bbe..1ec0d5ebb59f 100644
--- a/include/drm/ttm/ttm_resource.h
+++ b/include/drm/ttm/ttm_resource.h
@@ -130,6 +130,7 @@ struct ttm_resource_manager {
 	 */
 
 	struct list_head lru[TTM_MAX_BO_PRIORITY];
+	struct list_head pinned;
 
 	/*
 	 * Protected by @move_lock.
-- 
2.25.1

