* [PATCH v3 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality
@ 2018-08-13  9:58 Huang Rui
  2018-08-13  9:58 ` [PATCH v3 3/5] drm/ttm: add bulk move function on LRU Huang Rui
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Huang Rui @ 2018-08-13  9:58 UTC (permalink / raw)
  To: dri-devel, amd-gfx; +Cc: Huang Rui

The idea and proposal originally came from Christian, and I continued the work
to deliver it.

Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and then
moves each of them to the end of the LRU list one by one. That causes a large
number of individual moves to the end of the LRU and seriously impacts
performance.

Christian then provided a workaround that avoids moving PD/PT BOs on the LRU
with the patch below:
"drm/amdgpu: band aid validating VM PTs"
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae

However, the proper solution is to bulk move all PD/PT and per-VM BOs on the
LRU instead of moving them one by one.

Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
validated, we move all of them together to the end of the LRU without dropping
the LRU lock.

While doing so we note the beginning and end of this block in the LRU list.

Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
we don't move every BO one by one, but instead cut the LRU list into pieces so
that we bulk move everything to the end in just one operation.
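
For reference, the cut itself is just two list_cut_position() calls plus two
splices. Below is a condensed sketch of the helper introduced in patch 3 of
this series; the per-domain/per-priority loops and the swap-list variant are
omitted, and the caller is assumed to hold ttm_bo_global::lru_lock:

static void bulk_move_sketch(struct ttm_lru_bulk_move_pos *pos,
			     struct list_head *lru)
{
	struct list_head entries, before;

	/* entries = everything from the LRU head up to and including 'last' */
	list_cut_position(&entries, lru, &pos->last->lru);
	/* before = everything in front of 'first'; entries = [first..last] */
	list_cut_position(&before, &entries, pos->first->lru.prev);

	/* Put the unrelated BOs back at the front of the LRU... */
	list_splice(&before, lru);
	/* ...and append the whole [first..last] block in one operation. */
	list_splice_tail(&entries, lru);
}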

Test data:
+--------------+-----------------+-----------+---------------------------------------+
|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
|              |Principle(Vulkan)|           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Original     |  147.7 FPS      |  76.86 us |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
|              |                 |           |0.307 ms(8K) 0.310 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
| Original + WA|  162.1 FPS      |  42.15 us |0.254 ms(1K) 0.241 ms(2K) 0.230 ms(4K) |
|(don't move   |                 |           |0.223 ms(8K) 0.204 ms(16K)             |
|PT BOs on LRU)|                 |           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
|              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+

Testing with the three benchmarks above, covering Vulkan and OpenCL, shows a
visible improvement over the original code, and even better results than the
original with the workaround.

Changes from V1 -> V2:
- Fix missed BOs on the relocated/moved lists that should also be moved to the
  end of the LRU.

Changes from V2 -> V3:
- Remove an unused parameter and use list_for_each_entry instead of the safe
  variant.

Thanks,
Rui

Christian König (2):
  drm/ttm: add helper structures for bulk moves on lru list
  drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves

Huang Rui (3):
  drm/ttm: add bulk move function on LRU
  drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)
  drm/amdgpu: move PD/PT bos on LRU again

 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 75 ++++++++++++++++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
 drivers/gpu/drm/ttm/ttm_bo.c           | 78 +++++++++++++++++++++++++++++++++-
 include/drm/ttm/ttm_bo_api.h           | 16 ++++++-
 include/drm/ttm/ttm_bo_driver.h        | 28 ++++++++++++
 5 files changed, 182 insertions(+), 19 deletions(-)

-- 
2.7.4

* [PATCH v3 1/5] drm/ttm: add helper structures for bulk moves on lru list
       [not found] ` <1534154331-11810-1-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
@ 2018-08-13  9:58   ` Huang Rui
       [not found]     ` <1534154331-11810-2-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
  2018-08-13  9:58   ` [PATCH v3 2/5] drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves Huang Rui
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Huang Rui @ 2018-08-13  9:58 UTC (permalink / raw)
  To: dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Huang Rui, Christian König

From: Christian König <christian.koenig@amd.com>

Add a bulk move position to store pointers to the first and last buffer object.
The BOs in between will be bulk moved on the LRU list.

Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
---
 include/drm/ttm/ttm_bo_driver.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 3234cc3..e4fee8e 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -491,6 +491,34 @@ struct ttm_bo_device {
 };
 
 /**
+ * struct ttm_lru_bulk_move_pos
+ *
+ * @first: first BO in the bulk move range
+ * @last: last BO in the bulk move range
+ *
+ * Positions for a lru bulk move.
+ */
+struct ttm_lru_bulk_move_pos {
+	struct ttm_buffer_object *first;
+	struct ttm_buffer_object *last;
+};
+
+/**
+ * struct ttm_lru_bulk_move
+ *
+ * @tt: first/last lru entry for BOs in the TT domain
+ * @vram: first/last lru entry for BOs in the VRAM domain
+ * @swap: first/last lru entry for BOs on the swap list
+ *
+ * Helper structure for bulk moves on the LRU list.
+ */
+struct ttm_lru_bulk_move {
+	struct ttm_lru_bulk_move_pos tt[TTM_MAX_BO_PRIORITY];
+	struct ttm_lru_bulk_move_pos vram[TTM_MAX_BO_PRIORITY];
+	struct ttm_lru_bulk_move_pos swap[TTM_MAX_BO_PRIORITY];
+};
+
+/**
  * ttm_flag_masked
  *
  * @old: Pointer to the result and original value.
-- 
2.7.4

* [PATCH v3 2/5] drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves
       [not found] ` <1534154331-11810-1-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
  2018-08-13  9:58   ` [PATCH v3 1/5] drm/ttm: add helper structures for bulk moves on lru list Huang Rui
@ 2018-08-13  9:58   ` Huang Rui
  2018-08-13  9:58   ` [PATCH v3 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3) Huang Rui
  2018-08-13  9:58   ` [PATCH v3 5/5] drm/amdgpu: move PD/PT bos on LRU again Huang Rui
  3 siblings, 0 replies; 19+ messages in thread
From: Huang Rui @ 2018-08-13  9:58 UTC (permalink / raw)
  To: dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Huang Rui, Christian König

From: Christian König <christian.koenig@amd.com>

When moving a BO to the end of the LRU, we need to remember its position.
Make sure all moved BOs are in between "first" and "last", so they can be
bulk moved together.

Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c |  8 ++++----
 drivers/gpu/drm/ttm/ttm_bo.c           | 26 +++++++++++++++++++++++++-
 include/drm/ttm/ttm_bo_api.h           |  6 +++++-
 3 files changed, 34 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 015613b..9c84770 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -297,9 +297,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 				break;
 
 			spin_lock(&glob->lru_lock);
-			ttm_bo_move_to_lru_tail(&bo->tbo);
+			ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
 			if (bo->shadow)
-				ttm_bo_move_to_lru_tail(&bo->shadow->tbo);
+				ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
 			spin_unlock(&glob->lru_lock);
 		}
 
@@ -319,9 +319,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 		if (!bo->parent)
 			continue;
 
-		ttm_bo_move_to_lru_tail(&bo->tbo);
+		ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
 		if (bo->shadow)
-			ttm_bo_move_to_lru_tail(&bo->shadow->tbo);
+			ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
 	}
 	spin_unlock(&glob->lru_lock);
 
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 7c48472..7117b6b 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -214,12 +214,36 @@ void ttm_bo_del_sub_from_lru(struct ttm_buffer_object *bo)
 }
 EXPORT_SYMBOL(ttm_bo_del_sub_from_lru);
 
-void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo)
+static void ttm_bo_bulk_move_set_pos(struct ttm_lru_bulk_move_pos *pos,
+				     struct ttm_buffer_object *bo)
+{
+	if (!pos->first)
+		pos->first = bo;
+	pos->last = bo;
+}
+
+void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
+			     struct ttm_lru_bulk_move *bulk)
 {
 	reservation_object_assert_held(bo->resv);
 
 	ttm_bo_del_from_lru(bo);
 	ttm_bo_add_to_lru(bo);
+
+	if (bulk && !(bo->mem.placement & TTM_PL_FLAG_NO_EVICT)) {
+		switch (bo->mem.mem_type) {
+		case TTM_PL_TT:
+			ttm_bo_bulk_move_set_pos(&bulk->tt[bo->priority], bo);
+			break;
+
+		case TTM_PL_VRAM:
+			ttm_bo_bulk_move_set_pos(&bulk->vram[bo->priority], bo);
+			break;
+		}
+		if (bo->ttm && !(bo->ttm->page_flags &
+				 (TTM_PAGE_FLAG_SG | TTM_PAGE_FLAG_SWAPPED)))
+			ttm_bo_bulk_move_set_pos(&bulk->swap[bo->priority], bo);
+	}
 }
 EXPORT_SYMBOL(ttm_bo_move_to_lru_tail);
 
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index a01ba20..0d4eb81 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -51,6 +51,8 @@ struct ttm_placement;
 
 struct ttm_place;
 
+struct ttm_lru_bulk_move;
+
 /**
  * struct ttm_bus_placement
  *
@@ -405,12 +407,14 @@ void ttm_bo_del_from_lru(struct ttm_buffer_object *bo);
  * ttm_bo_move_to_lru_tail
  *
  * @bo: The buffer object.
+ * @bulk: optional bulk move structure to remember BO positions
  *
  * Move this BO to the tail of all lru lists used to lookup and reserve an
  * object. This function must be called with struct ttm_bo_global::lru_lock
  * held, and is used to make a BO less likely to be considered for eviction.
  */
-void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo);
+void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
+			     struct ttm_lru_bulk_move *bulk);
 
 /**
  * ttm_bo_lock_delayed_workqueue
-- 
2.7.4

* [PATCH v3 3/5] drm/ttm: add bulk move function on LRU
  2018-08-13  9:58 [PATCH v3 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality Huang Rui
@ 2018-08-13  9:58 ` Huang Rui
       [not found] ` <1534154331-11810-1-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
  2018-08-16  0:41 ` [PATCH v3 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality Dieter Nützel
  2 siblings, 0 replies; 19+ messages in thread
From: Huang Rui @ 2018-08-13  9:58 UTC (permalink / raw)
  To: dri-devel, amd-gfx; +Cc: Huang Rui, Christian König

This function allows us to bulk move a group of BOs to the tail of their LRU.
The positions of the group of BOs are stored in the (first, last)
bulk_move_pos structure.

Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
---
 drivers/gpu/drm/ttm/ttm_bo.c | 52 ++++++++++++++++++++++++++++++++++++++++++++
 include/drm/ttm/ttm_bo_api.h | 10 +++++++++
 2 files changed, 62 insertions(+)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 7117b6b..39d9d55 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -247,6 +247,58 @@ void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
 }
 EXPORT_SYMBOL(ttm_bo_move_to_lru_tail);
 
+static void ttm_bo_bulk_move_helper(struct ttm_lru_bulk_move_pos *pos,
+				    struct list_head *lru, bool is_swap)
+{
+	struct list_head entries, before;
+	struct list_head *list1, *list2;
+
+	list1 = is_swap ? &pos->last->swap : &pos->last->lru;
+	list2 = is_swap ? pos->first->swap.prev : pos->first->lru.prev;
+
+	list_cut_position(&entries, lru, list1);
+	list_cut_position(&before, &entries, list2);
+	list_splice(&before, lru);
+	list_splice_tail(&entries, lru);
+}
+
+void ttm_bo_bulk_move_lru_tail(struct ttm_lru_bulk_move *bulk)
+{
+	unsigned i;
+
+	for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
+		struct ttm_mem_type_manager *man;
+
+		if (!bulk->tt[i].first)
+			continue;
+
+		man = &bulk->tt[i].first->bdev->man[TTM_PL_TT];
+		ttm_bo_bulk_move_helper(&bulk->tt[i], &man->lru[i], false);
+	}
+
+	for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
+		struct ttm_mem_type_manager *man;
+
+		if (!bulk->vram[i].first)
+			continue;
+
+		man = &bulk->vram[i].first->bdev->man[TTM_PL_VRAM];
+		ttm_bo_bulk_move_helper(&bulk->vram[i], &man->lru[i], false);
+	}
+
+	for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
+		struct ttm_lru_bulk_move_pos *pos = &bulk->swap[i];
+		struct list_head *lru;
+
+		if (!pos->first)
+			continue;
+
+		lru = &pos->first->bdev->glob->swap_lru[i];
+		ttm_bo_bulk_move_helper(&bulk->swap[i], lru, true);
+	}
+}
+EXPORT_SYMBOL(ttm_bo_bulk_move_lru_tail);
+
 static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo,
 				  struct ttm_mem_reg *mem, bool evict,
 				  struct ttm_operation_ctx *ctx)
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 0d4eb81..8c19470 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -417,6 +417,16 @@ void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
 			     struct ttm_lru_bulk_move *bulk);
 
 /**
+ * ttm_bo_bulk_move_lru_tail
+ *
+ * @bulk: bulk move structure
+ *
+ * Bulk move BOs to the LRU tail, only valid to use when driver makes sure that
+ * BO order never changes. Should be called with ttm_bo_global::lru_lock held.
+ */
+void ttm_bo_bulk_move_lru_tail(struct ttm_lru_bulk_move *bulk);
+
+/**
  * ttm_bo_lock_delayed_workqueue
  *
  * Prevent the delayed workqueue from running.
-- 
2.7.4

* [PATCH v3 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)
       [not found] ` <1534154331-11810-1-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
  2018-08-13  9:58   ` [PATCH v3 1/5] drm/ttm: add helper structures for bulk moves on lru list Huang Rui
  2018-08-13  9:58   ` [PATCH v3 2/5] drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves Huang Rui
@ 2018-08-13  9:58   ` Huang Rui
  2018-08-14  2:26     ` Zhang, Jerry (Junwei)
  2018-08-13  9:58   ` [PATCH v3 5/5] drm/amdgpu: move PD/PT bos on LRU again Huang Rui
  3 siblings, 1 reply; 19+ messages in thread
From: Huang Rui @ 2018-08-13  9:58 UTC (permalink / raw)
  To: dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Huang Rui, Christian König

I continued the work on bulk moving based on the proposal by Christian.

Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and then
moves each of them to the end of the LRU list one by one. That causes a large
number of individual moves to the end of the LRU and seriously impacts
performance.

Christian then provided a workaround that avoids moving PD/PT BOs on the LRU
with the patch below:
"drm/amdgpu: band aid validating VM PTs"
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae

However, the proper solution is to bulk move all PD/PT and per-VM BOs on the
LRU instead of moving them one by one.

Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
validated, we move all of them together to the end of the LRU without dropping
the LRU lock.

While doing so we note the beginning and end of this block in the LRU list.

Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
we don't move every BO one by one, but instead cut the LRU list into pieces so
that we bulk move everything to the end in just one operation.

Test data:
+--------------+-----------------+-----------+---------------------------------------+
|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
|              |Principle(Vulkan)|           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Original     |  147.7 FPS      |  76.86 us |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
|              |                 |           |0.307 ms(8K) 0.310 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
| Original + WA|  162.1 FPS      |  42.15 us |0.254 ms(1K) 0.241 ms(2K) 0.230 ms(4K) |
|(don't move   |                 |           |0.223 ms(8K) 0.204 ms(16K)             |
|PT BOs on LRU)|                 |           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
|              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+

Testing with the three benchmarks above, covering Vulkan and OpenCL, shows a
visible improvement over the original code, and even better results than the
original with the workaround.

v2: move all BOs, including those on the idle, relocated, and moved lists, to
the end of the LRU and put them together.
v3: remove an unused parameter and use list_for_each_entry instead of the safe
variant.

Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 73 ++++++++++++++++++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
 2 files changed, 61 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 9c84770..ee1af53 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -268,6 +268,53 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
 }
 
 /**
+ * amdgpu_vm_move_to_lru_tail_by_list - move one list of BOs to end of LRU
+ *
+ * @vm: vm providing the BOs
+ * @list: the list that stored BOs
+ *
+ * Move one list of BOs to the end of LRU and update the positions.
+ */
+static void
+amdgpu_vm_move_to_lru_tail_by_list(struct amdgpu_vm *vm, struct list_head *list)
+{
+	struct amdgpu_vm_bo_base *bo_base;
+
+	list_for_each_entry(bo_base, list, vm_status) {
+		struct amdgpu_bo *bo = bo_base->bo;
+
+		if (!bo->parent)
+			continue;
+
+		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
+		if (bo->shadow)
+			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
+						&vm->lru_bulk_move);
+	}
+}
+
+/**
+ * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
+ *
+ * @adev: amdgpu device pointer
+ * @vm: vm providing the BOs
+ *
+ * Move all BOs to the end of LRU and remember their positions to put them
+ * together.
+ */
+static void
+amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, struct amdgpu_vm *vm)
+{
+	struct ttm_bo_global *glob = adev->mman.bdev.glob;
+
+	spin_lock(&glob->lru_lock);
+	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->idle);
+	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->relocated);
+	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->moved);
+	spin_unlock(&glob->lru_lock);
+}
+
+/**
  * amdgpu_vm_validate_pt_bos - validate the page table BOs
  *
  * @adev: amdgpu device pointer
@@ -286,6 +333,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 {
 	struct ttm_bo_global *glob = adev->mman.bdev.glob;
 	struct amdgpu_vm_bo_base *bo_base, *tmp;
+	bool validated = false;
 	int r = 0;
 
 	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
@@ -295,14 +343,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 			r = validate(param, bo);
 			if (r)
 				break;
-
-			spin_lock(&glob->lru_lock);
-			ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
-			if (bo->shadow)
-				ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
-			spin_unlock(&glob->lru_lock);
 		}
 
+		validated = true;
 		if (bo->tbo.type != ttm_bo_type_kernel) {
 			spin_lock(&vm->moved_lock);
 			list_move(&bo_base->vm_status, &vm->moved);
@@ -312,18 +355,16 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 		}
 	}
 
-	spin_lock(&glob->lru_lock);
-	list_for_each_entry(bo_base, &vm->idle, vm_status) {
-		struct amdgpu_bo *bo = bo_base->bo;
+	if (!validated) {
+		spin_lock(&glob->lru_lock);
+		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
+		spin_unlock(&glob->lru_lock);
+		return 0;
+	}
 
-		if (!bo->parent)
-			continue;
+	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
 
-		ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
-		if (bo->shadow)
-			ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
-	}
-	spin_unlock(&glob->lru_lock);
+	amdgpu_vm_move_to_lru_tail(adev, vm);
 
 	return r;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 67a15d4..92725ac 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -29,6 +29,7 @@
 #include <linux/rbtree.h>
 #include <drm/gpu_scheduler.h>
 #include <drm/drm_file.h>
+#include <drm/ttm/ttm_bo_driver.h>
 
 #include "amdgpu_sync.h"
 #include "amdgpu_ring.h"
@@ -226,6 +227,9 @@ struct amdgpu_vm {
 
 	/* Some basic info about the task */
 	struct amdgpu_task_info task_info;
+
+	/* Store positions of group of BOs */
+	struct ttm_lru_bulk_move lru_bulk_move;
 };
 
 struct amdgpu_vm_manager {
-- 
2.7.4

* [PATCH v3 5/5] drm/amdgpu: move PD/PT bos on LRU again
       [not found] ` <1534154331-11810-1-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
                     ` (2 preceding siblings ...)
  2018-08-13  9:58   ` [PATCH v3 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3) Huang Rui
@ 2018-08-13  9:58   ` Huang Rui
  3 siblings, 0 replies; 19+ messages in thread
From: Huang Rui @ 2018-08-13  9:58 UTC (permalink / raw)
  To: dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Huang Rui

The new bulk move functionality is ready and the overhead of moving PD/PT BOs
to the LRU is fixed, so move them on the LRU again.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index ee1af53..872ae5b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1125,7 +1125,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device *adev,
 					   struct amdgpu_vm_bo_base,
 					   vm_status);
 		bo_base->moved = false;
-		list_del_init(&bo_base->vm_status);
+		list_move(&bo_base->vm_status, &vm->idle);
 
 		bo = bo_base->bo->parent;
 		if (!bo)
-- 
2.7.4

* Re: [PATCH v3 1/5] drm/ttm: add helper structures for bulk moves on lru list
       [not found]     ` <1534154331-11810-2-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
@ 2018-08-13 10:16       ` Christian König
       [not found]         ` <d0ebdd92-73a7-7959-4df0-391f3dd27526-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Christian König @ 2018-08-13 10:16 UTC (permalink / raw)
  To: Huang Rui, dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 13.08.2018 at 11:58, Huang Rui wrote:
> From: Christian König <christian.koenig@amd.com>
>
> Add a bulk move position to store pointers to the first and last buffer object.
> The BOs in between will be bulk moved on the LRU list.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Tested-by: Mike Lothian <mike@fireburn.co.uk>

If you ask me, it looks like it should work now, but I'm biased because I
helped create this.

Alex, David or Jerry, can somebody else take a look as well?

Thanks,
Christian.

* Re: [PATCH v3 1/5] drm/ttm: add helper structures for bulk moves on lru list
       [not found]         ` <d0ebdd92-73a7-7959-4df0-391f3dd27526-5C7GfCeVMHo@public.gmane.org>
@ 2018-08-14  2:02           ` zhoucm1
       [not found]             ` <b993176a-cf49-d3b7-9be1-feb7dc95456f-5C7GfCeVMHo@public.gmane.org>
  2018-08-14  2:22           ` Zhang, Jerry (Junwei)
  1 sibling, 1 reply; 19+ messages in thread
From: zhoucm1 @ 2018-08-14  2:02 UTC (permalink / raw)
  To: Christian König, Huang Rui,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2018-08-13 18:16, Christian König wrote:
> On 13.08.2018 at 11:58, Huang Rui wrote:
>> From: Christian König <christian.koenig@amd.com>
>>
>> Add a bulk move position to store pointers to the first and last buffer object.
>> The BOs in between will be bulk moved on the LRU list.
>>
>> Signed-off-by: Christian König <christian.koenig@amd.com>
>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
>
> If you ask me, it looks like it should work now, but I'm biased because I
> helped create this.
>
> Alex, David or Jerry, can somebody else take a look as well?
remember position, list ops...
Acked-by: Chunming Zhou <david1.zhou@amd.com>

* Re: [PATCH v3 1/5] drm/ttm: add helper structures for bulk moves on lru list
       [not found]         ` <d0ebdd92-73a7-7959-4df0-391f3dd27526-5C7GfCeVMHo@public.gmane.org>
  2018-08-14  2:02           ` zhoucm1
@ 2018-08-14  2:22           ` Zhang, Jerry (Junwei)
       [not found]             ` <5B723CEA.1070903-5C7GfCeVMHo@public.gmane.org>
  1 sibling, 1 reply; 19+ messages in thread
From: Zhang, Jerry (Junwei) @ 2018-08-14  2:22 UTC (permalink / raw)
  To: Christian König, Huang Rui,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 08/13/2018 06:16 PM, Christian König wrote:
> On 13.08.2018 at 11:58, Huang Rui wrote:
>> From: Christian König <christian.koenig@amd.com>
>>
>> Add a bulk move position to store pointers to the first and last buffer object.
>> The BOs in between will be bulk moved on the LRU list.
>>
>> Signed-off-by: Christian König <christian.koenig@amd.com>
>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
>
> If you ask me, it looks like it should work now, but I'm biased because I helped create this.
>
> Alex, David or Jerry, can somebody else take a look as well?

Patches 1, 2, 3 and 5 are

Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>

Patch 4: comments inline.

BTW, a per-VM BO LRU might be more efficient, but it is not a common approach.
Bulk move can improve things, though there are probably both worse and better
cases.

e.g. we remember the BO positions for the target BOs, like PD/PT BOs and
per-VM BOs, but the recorded range may include other BOs. If the target BO
range includes many other BOs, that may cause evictions or other
inefficiencies. We hope the target BOs are grouped together as much as
possible when they are moved.
(if not, please correct me)

Regards,
Jerry

* Re: [PATCH v3 1/5] drm/ttm: add helper structures for bulk moves on lru list
       [not found]             ` <b993176a-cf49-d3b7-9be1-feb7dc95456f-5C7GfCeVMHo@public.gmane.org>
@ 2018-08-14  2:25               ` Huang Rui
  0 siblings, 0 replies; 19+ messages in thread
From: Huang Rui @ 2018-08-14  2:25 UTC (permalink / raw)
  To: Zhou, David(ChunMing)
  Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW, Koenig, Christian,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On Tue, Aug 14, 2018 at 10:02:00AM +0800, Zhou, David(ChunMing) wrote:
> 
> 
> On 2018-08-13 18:16, Christian König wrote:
> > On 13.08.2018 at 11:58, Huang Rui wrote:
> >> From: Christian König <christian.koenig@amd.com>
> >>
> >> Add a bulk move position to store pointers to the first and last buffer object.
> >> The BOs in between will be bulk moved on the LRU list.
> >>
> >> Signed-off-by: Christian König <christian.koenig@amd.com>
> >> Signed-off-by: Huang Rui <ray.huang@amd.com>
> >> Tested-by: Mike Lothian <mike@fireburn.co.uk>
> >
> > If you ask me, it looks like it should work now, but I'm biased because I
> > helped create this.
> >
> > Alex, David or Jerry, can somebody else take a look as well?
> remember position, list ops...
> Acked-by: Chunming Zhou <david1.zhou@amd.com>
> 

Thanks David, any comments are welcome.

Best Regards,
Ray

* Re: [PATCH v3 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)
  2018-08-13  9:58   ` [PATCH v3 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3) Huang Rui
@ 2018-08-14  2:26     ` Zhang, Jerry (Junwei)
       [not found]       ` <5B723DE3.50005-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Zhang, Jerry (Junwei) @ 2018-08-14  2:26 UTC (permalink / raw)
  To: Huang Rui, dri-devel, amd-gfx; +Cc: Christian König

On 08/13/2018 05:58 PM, Huang Rui wrote:
> I continued the work on bulk moving based on the proposal by Christian.
>
> Background:
> The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and then
> moves each of them to the end of the LRU list one by one. That causes a large
> number of individual moves to the end of the LRU and seriously impacts
> performance.
>
> Christian then provided a workaround that avoids moving PD/PT BOs on the LRU
> with the patch below:
> "drm/amdgpu: band aid validating VM PTs"
> Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
>
> However, the proper solution is to bulk move all PD/PT and per-VM BOs on the
> LRU instead of moving them one by one.
>
> Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
> validated, we move all of them together to the end of the LRU without dropping
> the LRU lock.
>
> While doing so we note the beginning and end of this block in the LRU list.
>
> Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
> we don't move every BO one by one, but instead cut the LRU list into pieces so
> that we bulk move everything to the end in just one operation.
>
> Test data:
> +--------------+-----------------+-----------+---------------------------------------+
> |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> |              |Principle(Vulkan)|           |                                       |
> +--------------+-----------------+-----------+---------------------------------------+
> | Original     |  147.7 FPS      |  76.86 us |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
> |              |                 |           |0.307 ms(8K) 0.310 ms(16K)             |
> +--------------+-----------------+-----------+---------------------------------------+
> | Original + WA|  162.1 FPS      |  42.15 us |0.254 ms(1K) 0.241 ms(2K) 0.230 ms(4K) |
> |(don't move   |                 |           |0.223 ms(8K) 0.204 ms(16K)             |
> |PT BOs on LRU)|                 |           |                                       |
> +--------------+-----------------+-----------+---------------------------------------+
> | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> +--------------+-----------------+-----------+---------------------------------------+
>
> Testing with the three benchmarks above, covering Vulkan and OpenCL, shows a
> visible improvement over the original code, and even better results than the
> original with the workaround.
>
> v2: move all BOs, including those on the idle, relocated, and moved lists, to
> the end of the LRU and put them together.
> v3: remove an unused parameter and use list_for_each_entry instead of the safe
> variant.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Tested-by: Mike Lothian <mike@fireburn.co.uk>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 73 ++++++++++++++++++++++++++--------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
>   2 files changed, 61 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 9c84770..ee1af53 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -268,6 +268,53 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
>   }
>
>   /**
> + * amdgpu_vm_move_to_lru_tail - move one list of BOs to end of LRU
> + *
> + * @vm: vm providing the BOs
> + * @list: the list that stored BOs
> + *
> + * Move one list of BOs to the end of LRU and update the positions.
> + */
> +static void
> +amdgpu_vm_move_to_lru_tail_by_list(struct amdgpu_vm *vm, struct list_head *list)
> +{
> +	struct amdgpu_vm_bo_base *bo_base;
> +
> +	list_for_each_entry(bo_base, list, vm_status) {
> +		struct amdgpu_bo *bo = bo_base->bo;
> +
> +		if (!bo->parent)
> +			continue;
> +
> +		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
> +		if (bo->shadow)
> +			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
> +						&vm->lru_bulk_move);
> +	}
> +}
> +
> +/**
> + * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
> + *
> + * @adev: amdgpu device pointer
> + * @vm: vm providing the BOs
> + *
> + * Move all BOs to the end of LRU and remember their positions to put them
> + * together.
> + */
> +static void
> +amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> +{
> +	struct ttm_bo_global *glob = adev->mman.bdev.glob;
> +
> +	spin_lock(&glob->lru_lock);
> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->idle);
> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->relocated);
> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->moved);

The moved list is protected by vm->moved_lock, so we may need to hold that as
well; otherwise, use the same lock for both.
(I'm not sure about the history of moved_lock.)

> +	spin_unlock(&glob->lru_lock);
> +}
> +
> +/**
>    * amdgpu_vm_validate_pt_bos - validate the page table BOs
>    *
>    * @adev: amdgpu device pointer
> @@ -286,6 +333,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>   {
>   	struct ttm_bo_global *glob = adev->mman.bdev.glob;
>   	struct amdgpu_vm_bo_base *bo_base, *tmp;
> +	bool validated = false;
>   	int r = 0;
>
>   	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
> @@ -295,14 +343,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>   			r = validate(param, bo);
>   			if (r)
>   				break;
> -
> -			spin_lock(&glob->lru_lock);
> -			ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> -			if (bo->shadow)
> -				ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> -			spin_unlock(&glob->lru_lock);
>   		}
>
> +		validated = true;
>   		if (bo->tbo.type != ttm_bo_type_kernel) {
>   			spin_lock(&vm->moved_lock);
>   			list_move(&bo_base->vm_status, &vm->moved);
> @@ -312,18 +355,16 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>   		}
>   	}
>
> -	spin_lock(&glob->lru_lock);
> -	list_for_each_entry(bo_base, &vm->idle, vm_status) {
> -		struct amdgpu_bo *bo = bo_base->bo;
> +	if (!validated) {
> +		spin_lock(&glob->lru_lock);
> +		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);

To confirm: we only do the actual bulk move when nothing was evicted and there
was no validation failure?

Regards,
Jerry

* Re: [PATCH v3 1/5] drm/ttm: add helper structures for bulk moves on lru list
       [not found]             ` <5B723CEA.1070903-5C7GfCeVMHo@public.gmane.org>
@ 2018-08-14  2:49               ` Huang Rui
  0 siblings, 0 replies; 19+ messages in thread
From: Huang Rui @ 2018-08-14  2:49 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei)
  Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW, Christian König,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On Tue, Aug 14, 2018 at 10:22:34AM +0800, Zhang, Jerry (Junwei) wrote:
> On 08/13/2018 06:16 PM, Christian König wrote:
> >On 13.08.2018 at 11:58, Huang Rui wrote:
> >>From: Christian König <christian.koenig@amd.com>
> >>
> >>Add a bulk move position to store pointers to the first and last buffer object.
> >>The BOs in between will be bulk moved on the LRU list.
> >>
> >>Signed-off-by: Christian König <christian.koenig@amd.com>
> >>Signed-off-by: Huang Rui <ray.huang@amd.com>
> >>Tested-by: Mike Lothian <mike@fireburn.co.uk>
> >
> >If you ask me, it looks like it should work now, but I'm biased because I helped create this.
> >
> >Alex, David or Jerry, can somebody else take a look as well?
> 
> Patch 1, 2, 3, 5 are
> 
> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
> 
> Patch 4: comments inline.
> 
> BTW, a per-vm bo lru may be more efficient, but not a common way.
> bulk move could improve that, while possibly there are some worse cases and better cases.
> 
> e.g. remark the bo position for target BO, like PD/PT bo and per-vm bo, in which range may include other BOs
> If the target BO range include more other BOs, may cause evict or anything else low efficiency.
> We hope target BO grouped together as much as possible, and be moved.
> (if not, please correct me)
> 

Thanks, Jerry. Actually, here we remember the positions of both the per-VM and
PD/PT BOs of the same VM, and move them to the end of the LRU in a single
operation instead of one by one. The time complexity goes from O(n) to O(1).
The performance issue is not caused by eviction but by the many individual
moves on the LRU list, so we now do the bulk move instead.
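
Roughly, the intended driver-side flow with the new interfaces from patches 2
and 3 looks like this (a simplified sketch; the real code also walks the
relocated/moved lists and handles the shadow BOs):

	/* Pass with work to do: move each BO once while TTM records the
	 * first/last positions per domain and priority.
	 */
	spin_lock(&glob->lru_lock);
	list_for_each_entry(bo_base, &vm->idle, vm_status)
		ttm_bo_move_to_lru_tail(&bo_base->bo->tbo, &vm->lru_bulk_move);
	spin_unlock(&glob->lru_lock);

	/* Later passes with nothing to do: replay the remembered block with
	 * a single cut-and-splice per LRU list.
	 */
	spin_lock(&glob->lru_lock);
	ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
	spin_unlock(&glob->lru_lock);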

Thanks,
Ray

* Re: [PATCH v3 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)
       [not found]       ` <5B723DE3.50005-5C7GfCeVMHo@public.gmane.org>
@ 2018-08-14  3:05         ` Huang Rui
  2018-08-14  6:45           ` Christian König
  0 siblings, 1 reply; 19+ messages in thread
From: Huang Rui @ 2018-08-14  3:05 UTC (permalink / raw)
  To: Zhang, Jerry
  Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW, Koenig, Christian

On Tue, Aug 14, 2018 at 10:26:43AM +0800, Zhang, Jerry wrote:
> On 08/13/2018 05:58 PM, Huang Rui wrote:
> > I continued the work on bulk moving based on the proposal by Christian.
> >
> > Background:
> > The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and then
> > moves each of them to the end of the LRU list one by one. That causes a large
> > number of individual moves to the end of the LRU and seriously impacts
> > performance.
> >
> > Christian then provided a workaround that avoids moving PD/PT BOs on the LRU
> > with the patch below:
> > "drm/amdgpu: band aid validating VM PTs"
> > Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
> >
> > However, the proper solution is to bulk move all PD/PT and per-VM BOs on the
> > LRU instead of moving them one by one.
> >
> > Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
> > validated, we move all of them together to the end of the LRU without dropping
> > the LRU lock.
> >
> > While doing so we note the beginning and end of this block in the LRU list.
> >
> > Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
> > we don't move every BO one by one, but instead cut the LRU list into pieces so
> > that we bulk move everything to the end in just one operation.
> >
> > Test data:
> > +--------------+-----------------+-----------+---------------------------------------+
> > |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> > |              |Principle(Vulkan)|           |                                       |
> > +--------------+-----------------+-----------+---------------------------------------+
> > | Original     |  147.7 FPS      |  76.86 us |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
> > |              |                 |           |0.307 ms(8K) 0.310 ms(16K)             |
> > +--------------+-----------------+-----------+---------------------------------------+
> > | Original + WA|  162.1 FPS      |  42.15 us |0.254 ms(1K) 0.241 ms(2K) 0.230 ms(4K) |
> > |(don't move   |                 |           |0.223 ms(8K) 0.204 ms(16K)             |
> > |PT BOs on LRU)|                 |           |                                       |
> > +--------------+-----------------+-----------+---------------------------------------+
> > | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> > |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> > +--------------+-----------------+-----------+---------------------------------------+
> >
> > Testing with the three benchmarks above, covering Vulkan and OpenCL, shows a
> > visible improvement over the original code, and even better results than the
> > original with the workaround.
> >
> > v2: move all BOs, including those on the idle, relocated, and moved lists, to
> > the end of the LRU and put them together.
> > v3: remove an unused parameter and use list_for_each_entry instead of the safe
> > variant.
> >
> > Signed-off-by: Christian König <christian.koenig@amd.com>
> > Signed-off-by: Huang Rui <ray.huang@amd.com>
> > Tested-by: Mike Lothian <mike@fireburn.co.uk>
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 73 ++++++++++++++++++++++++++--------
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
> >   2 files changed, 61 insertions(+), 16 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > index 9c84770..ee1af53 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > @@ -268,6 +268,53 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
> >   }
> >
> >   /**
> > + * amdgpu_vm_move_to_lru_tail - move one list of BOs to end of LRU
> > + *
> > + * @vm: vm providing the BOs
> > + * @list: the list that stored BOs
> > + *
> > + * Move one list of BOs to the end of LRU and update the positions.
> > + */
> > +static void
> > +amdgpu_vm_move_to_lru_tail_by_list(struct amdgpu_vm *vm, struct list_head *list)
> > +{
> > +	struct amdgpu_vm_bo_base *bo_base;
> > +
> > +	list_for_each_entry(bo_base, list, vm_status) {
> > +		struct amdgpu_bo *bo = bo_base->bo;
> > +
> > +		if (!bo->parent)
> > +			continue;
> > +
> > +		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
> > +		if (bo->shadow)
> > +			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
> > +						&vm->lru_bulk_move);
> > +	}
> > +}
> > +
> > +/**
> > + * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
> > + *
> > + * @adev: amdgpu device pointer
> > + * @vm: vm providing the BOs
> > + *
> > + * Move all BOs to the end of LRU and remember their positions to put them
> > + * together.
> > + */
> > +static void
> > +amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> > +{
> > +	struct ttm_bo_global *glob = adev->mman.bdev.glob;
> > +
> > +	spin_lock(&glob->lru_lock);
> > +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->idle);
> > +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->relocated);
> > +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->moved);
> 
> The moved list is protected by vm->moved_lock, so we may need to hold that as
> well; otherwise, use the same lock for both.
> (I'm not sure about the history of moved_lock.)

We don't actually remove them from the moved list; we just move bo->lru to the
end of the LRU. So we don't need moved_lock here.

> 
> > +	spin_unlock(&glob->lru_lock);
> > +}
> > +
> > +/**
> >    * amdgpu_vm_validate_pt_bos - validate the page table BOs
> >    *
> >    * @adev: amdgpu device pointer
> > @@ -286,6 +333,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> >   {
> >   	struct ttm_bo_global *glob = adev->mman.bdev.glob;
> >   	struct amdgpu_vm_bo_base *bo_base, *tmp;
> > +	bool validated = false;
> >   	int r = 0;
> >
> >   	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
> > @@ -295,14 +343,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> >   			r = validate(param, bo);
> >   			if (r)
> >   				break;
> > -
> > -			spin_lock(&glob->lru_lock);
> > -			ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> > -			if (bo->shadow)
> > -				ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> > -			spin_unlock(&glob->lru_lock);
> >   		}
> >
> > +		validated = true;
> >   		if (bo->tbo.type != ttm_bo_type_kernel) {
> >   			spin_lock(&vm->moved_lock);
> >   			list_move(&bo_base->vm_status, &vm->moved);
> > @@ -312,18 +355,16 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> >   		}
> >   	}
> >
> > -	spin_lock(&glob->lru_lock);
> > -	list_for_each_entry(bo_base, &vm->idle, vm_status) {
> > -		struct amdgpu_bo *bo = bo_base->bo;
> > +	if (!validated) {
> > +		spin_lock(&glob->lru_lock);
> > +		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
> 
> To confirm: we only do the actual bulk move when nothing was evicted and
> there was no validation failure?
> 

Yes. If some BOs were evicted, they will be moved back to the moved/relocated
lists, and then we need to update the recorded bulk move positions.
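
So the tail of amdgpu_vm_validate_pt_bos() effectively becomes (paraphrasing
the hunk quoted above):

	if (!validated) {
		/* Nothing was evicted: the remembered block is still intact,
		 * so replay it in O(1).
		 */
		spin_lock(&glob->lru_lock);
		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
		spin_unlock(&glob->lru_lock);
		return 0;
	}

	/* Something was validated again: the old positions are stale, so
	 * forget them and record a fresh block while moving the BOs.
	 */
	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
	amdgpu_vm_move_to_lru_tail(adev, vm);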

Thanks,
Ray

> Regards,
> Jerry
> 
> > +		spin_unlock(&glob->lru_lock);
> > +		return 0;
> > +	}
> >
> > -		if (!bo->parent)
> > -			continue;
> > +	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
> >
> > -		ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> > -		if (bo->shadow)
> > -			ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> > -	}
> > -	spin_unlock(&glob->lru_lock);
> > +	amdgpu_vm_move_to_lru_tail(adev, vm);
> >
> >   	return r;
> >   }
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> > index 67a15d4..92725ac 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> > @@ -29,6 +29,7 @@
> >   #include <linux/rbtree.h>
> >   #include <drm/gpu_scheduler.h>
> >   #include <drm/drm_file.h>
> > +#include <drm/ttm/ttm_bo_driver.h>
> >
> >   #include "amdgpu_sync.h"
> >   #include "amdgpu_ring.h"
> > @@ -226,6 +227,9 @@ struct amdgpu_vm {
> >
> >   	/* Some basic info about the task */
> >   	struct amdgpu_task_info task_info;
> > +
> > +	/* Store positions of group of BOs */
> > +	struct ttm_lru_bulk_move lru_bulk_move;
> >   };
> >
> >   struct amdgpu_vm_manager {
> >

* Re: [PATCH v3 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)
  2018-08-14  3:05         ` Huang Rui
@ 2018-08-14  6:45           ` Christian König
       [not found]             ` <4f7e6d61-0b4a-5c12-38a9-ea905b9f6234-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Christian König @ 2018-08-14  6:45 UTC (permalink / raw)
  To: Huang Rui, Zhang, Jerry
  Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 14.08.2018 at 05:05, Huang Rui wrote:
> On Tue, Aug 14, 2018 at 10:26:43AM +0800, Zhang, Jerry wrote:
>> On 08/13/2018 05:58 PM, Huang Rui wrote:
>>> I continue to work on the bulk move based on the proposal by Christian.
>>>
>>> Background:
>>> The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and
>>> then moves each of them to the end of the LRU list one by one. That causes
>>> many BOs to be moved to the end of the LRU, which seriously impacts
>>> performance.
>>>
>>> Then Christian provided a workaround to not move PD/PT BOs on the LRU with
>>> the patch below:
>>> "drm/amdgpu: band aid validating VM PTs"
>>> Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
>>>
>>> However, the final solution should bulk move all PD/PT and PerVM BOs on the LRU
>>> instead of one by one.
>>>
>>> Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
>>> validated we move all BOs together to the end of the LRU without dropping the
>>> lock for the LRU.
>>>
>>> While doing so we note the beginning and end of this block in the LRU list.
>>>
>>> Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
>>> we don't move every BO one by one, but instead cut the LRU list into pieces so
>>> that we bulk move everything to the end in just one operation.
>>>
>>> Test data:
>>> +--------------+-----------------+-----------+---------------------------------------+
>>> |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
>>> |              |Principle(Vulkan)|           |                                       |
>>> +------------------------------------------------------------------------------------+
>>> |              |                 |           |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
>>> | Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
>>> +------------------------------------------------------------------------------------+
>>> | Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
>>> |(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
>>> |PT BOs on LRU)|                 |           |                                       |
>>> +------------------------------------------------------------------------------------+
>>> | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
>>> |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
>>> +--------------+-----------------+-----------+---------------------------------------+
>>>
>>> After testing with the above three benchmarks, which include Vulkan and
>>> OpenCL, we can see a visible improvement over the original, and even better
>>> results than the original with the workaround.
>>>
>>> v2: move all BOs, including those on the idle, relocated, and moved lists,
>>> to the end of the LRU and keep them together.
>>> v3: remove the unused parameter and use list_for_each_entry instead of the
>>> safe variant.
>>>
>>> Signed-off-by: Christian König <christian.koenig@amd.com>
>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
>>> ---
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 73 ++++++++++++++++++++++++++--------
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
>>>    2 files changed, 61 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index 9c84770..ee1af53 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -268,6 +268,53 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
>>>    }
>>>
>>>    /**
>>> + * amdgpu_vm_move_to_lru_tail_by_list - move one list of BOs to end of LRU
>>> + *
>>> + * @vm: vm providing the BOs
>>> + * @list: the list that stored BOs
>>> + *
>>> + * Move one list of BOs to the end of LRU and update the positions.
>>> + */
>>> +static void
>>> +amdgpu_vm_move_to_lru_tail_by_list(struct amdgpu_vm *vm, struct list_head *list)
>>> +{
>>> +	struct amdgpu_vm_bo_base *bo_base;
>>> +
>>> +	list_for_each_entry(bo_base, list, vm_status) {
>>> +		struct amdgpu_bo *bo = bo_base->bo;
>>> +
>>> +		if (!bo->parent)
>>> +			continue;
>>> +
>>> +		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
>>> +		if (bo->shadow)
>>> +			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
>>> +						&vm->lru_bulk_move);
>>> +	}
>>> +}
>>> +
>>> +/**
>>> + * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
>>> + *
>>> + * @adev: amdgpu device pointer
>>> + * @vm: vm providing the BOs
>>> + *
>>> + * Move all BOs to the end of LRU and remember their positions to put them
>>> + * together.
>>> + */
>>> +static void
>>> +amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>> +{
>>> +	struct ttm_bo_global *glob = adev->mman.bdev.glob;
>>> +
>>> +	spin_lock(&glob->lru_lock);
>>> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->idle);
>>> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->relocated);
>>> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->moved);
>> The moved list is working under vm->moved_lock, so we may need to hold that
>> as well; otherwise, use the same lock for both.
>> (not sure about the history behind moved_lock)
> We don't actually remove them from the moved list, we just move bo->lru to
> the end of the LRU. So the moved_lock isn't needed here.
>
>>> +	spin_unlock(&glob->lru_lock);
>>> +}
>>> +
>>> +/**
>>>     * amdgpu_vm_validate_pt_bos - validate the page table BOs
>>>     *
>>>     * @adev: amdgpu device pointer
>>> @@ -286,6 +333,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>    {
>>>    	struct ttm_bo_global *glob = adev->mman.bdev.glob;
>>>    	struct amdgpu_vm_bo_base *bo_base, *tmp;
>>> +	bool validated = false;
>>>    	int r = 0;
>>>
>>>    	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
>>> @@ -295,14 +343,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>    			r = validate(param, bo);
>>>    			if (r)
>>>    				break;
>>> -
>>> -			spin_lock(&glob->lru_lock);
>>> -			ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
>>> -			if (bo->shadow)
>>> -				ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
>>> -			spin_unlock(&glob->lru_lock);
>>>    		}
>>>
>>> +		validated = true;
>>>    		if (bo->tbo.type != ttm_bo_type_kernel) {
>>>    			spin_lock(&vm->moved_lock);
>>>    			list_move(&bo_base->vm_status, &vm->moved);
>>> @@ -312,18 +355,16 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>    		}
>>>    	}
>>>
>>> -	spin_lock(&glob->lru_lock);
>>> -	list_for_each_entry(bo_base, &vm->idle, vm_status) {
>>> -		struct amdgpu_bo *bo = bo_base->bo;
>>> +	if (!validated) {
>>> +		spin_lock(&glob->lru_lock);
>>> +		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
>> To confirm:
>> we only do the actual bulk move when there are no evicted BOs and no
>> validation failures?
>>
> Yes. If some BOs were evicted, they will be moved to the moved/relocated
> lists after validation, and then we need to update the positions for the
> bulk move.

Ah, crap that won't work. Jerry pointed out a quite important bug here.

The moved list contains both per-VM and independent BOs, so walking it and
moving everything on the LRU won't work as expected.

Probably better to just walk the idle list after we are done with the 
state machine.
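
E.g. something like this as a filter, just a rough sketch (assuming we
can tell per-VM BOs apart because they share the reservation object of
the root PD):

/* Rough sketch, not tested: independent BOs have their own resv. */
static bool sketch_bo_is_per_vm(struct amdgpu_vm *vm,
				struct amdgpu_bo *bo)
{
	return bo->tbo.resv == vm->root.base.bo->tbo.resv;
}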

Christian.

>
> Thanks,
> Ray
>
>> Regards,
>> Jerry
>>
>>> +		spin_unlock(&glob->lru_lock);
>>> +		return 0;
>>> +	}
>>>
>>> -		if (!bo->parent)
>>> -			continue;
>>> +	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
>>>
>>> -		ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
>>> -		if (bo->shadow)
>>> -			ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
>>> -	}
>>> -	spin_unlock(&glob->lru_lock);
>>> +	amdgpu_vm_move_to_lru_tail(adev, vm);
>>>
>>>    	return r;
>>>    }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> index 67a15d4..92725ac 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> @@ -29,6 +29,7 @@
>>>    #include <linux/rbtree.h>
>>>    #include <drm/gpu_scheduler.h>
>>>    #include <drm/drm_file.h>
>>> +#include <drm/ttm/ttm_bo_driver.h>
>>>
>>>    #include "amdgpu_sync.h"
>>>    #include "amdgpu_ring.h"
>>> @@ -226,6 +227,9 @@ struct amdgpu_vm {
>>>
>>>    	/* Some basic info about the task */
>>>    	struct amdgpu_task_info task_info;
>>> +
>>> +	/* Store positions of group of BOs */
>>> +	struct ttm_lru_bulk_move lru_bulk_move;
>>>    };
>>>
>>>    struct amdgpu_vm_manager {
>>>


* Re: [PATCH v3 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)
       [not found]             ` <4f7e6d61-0b4a-5c12-38a9-ea905b9f6234-5C7GfCeVMHo@public.gmane.org>
@ 2018-08-14  7:24               ` Huang Rui
  2018-08-14  7:35                 ` Christian König
  0 siblings, 1 reply; 19+ messages in thread
From: Huang Rui @ 2018-08-14  7:24 UTC (permalink / raw)
  To: Koenig, Christian
  Cc: Zhang, Jerry, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On Tue, Aug 14, 2018 at 02:45:22PM +0800, Koenig, Christian wrote:
> On 14.08.2018 at 05:05, Huang Rui wrote:
> > On Tue, Aug 14, 2018 at 10:26:43AM +0800, Zhang, Jerry wrote:
> >> On 08/13/2018 05:58 PM, Huang Rui wrote:
> >>> I continue to work on the bulk move based on the proposal by Christian.
> >>>
> >>> Background:
> >>> The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and
> >>> then moves each of them to the end of the LRU list one by one. That causes
> >>> many BOs to be moved to the end of the LRU, which seriously impacts
> >>> performance.
> >>>
> >>> Then Christian provided a workaround to not move PD/PT BOs on the LRU with
> >>> the patch below:
> >>> "drm/amdgpu: band aid validating VM PTs"
> >>> Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
> >>>
> >>> However, the final solution should bulk move all PD/PT and PerVM BOs on the LRU
> >>> instead of one by one.
> >>>
> >>> Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
> >>> validated we move all BOs together to the end of the LRU without dropping the
> >>> lock for the LRU.
> >>>
> >>> While doing so we note the beginning and end of this block in the LRU list.
> >>>
> >>> Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
> >>> we don't move every BO one by one, but instead cut the LRU list into pieces so
> >>> that we bulk move everything to the end in just one operation.
> >>>
> >>> Test data:
> >>> +--------------+-----------------+-----------+---------------------------------------+
> >>> |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> >>> |              |Principle(Vulkan)|           |                                       |
> >>> +------------------------------------------------------------------------------------+
> >>> |              |                 |           |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
> >>> | Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
> >>> +------------------------------------------------------------------------------------+
> >>> | Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
> >>> |(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
> >>> |PT BOs on LRU)|                 |           |                                       |
> >>> +------------------------------------------------------------------------------------+
> >>> | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> >>> |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> >>> +--------------+-----------------+-----------+---------------------------------------+
> >>>
> >>> After testing with the above three benchmarks, which include Vulkan and
> >>> OpenCL, we can see a visible improvement over the original, and even better
> >>> results than the original with the workaround.
> >>>
> >>> v2: move all BOs, including those on the idle, relocated, and moved lists,
> >>> to the end of the LRU and keep them together.
> >>> v3: remove the unused parameter and use list_for_each_entry instead of the
> >>> safe variant.
> >>>
> >>> Signed-off-by: Christian König <christian.koenig@amd.com>
> >>> Signed-off-by: Huang Rui <ray.huang@amd.com>
> >>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
> >>> ---
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 73 ++++++++++++++++++++++++++--------
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
> >>>    2 files changed, 61 insertions(+), 16 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> index 9c84770..ee1af53 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>> @@ -268,6 +268,53 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
> >>>    }
> >>>
> >>>    /**
> >>> + * amdgpu_vm_move_to_lru_tail_by_list - move one list of BOs to end of LRU
> >>> + *
> >>> + * @vm: vm providing the BOs
> >>> + * @list: the list that stored BOs
> >>> + *
> >>> + * Move one list of BOs to the end of LRU and update the positions.
> >>> + */
> >>> +static void
> >>> +amdgpu_vm_move_to_lru_tail_by_list(struct amdgpu_vm *vm, struct list_head *list)
> >>> +{
> >>> +	struct amdgpu_vm_bo_base *bo_base;
> >>> +
> >>> +	list_for_each_entry(bo_base, list, vm_status) {
> >>> +		struct amdgpu_bo *bo = bo_base->bo;
> >>> +
> >>> +		if (!bo->parent)
> >>> +			continue;
> >>> +
> >>> +		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
> >>> +		if (bo->shadow)
> >>> +			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
> >>> +						&vm->lru_bulk_move);
> >>> +	}
> >>> +}
> >>> +
> >>> +/**
> >>> + * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
> >>> + *
> >>> + * @adev: amdgpu device pointer
> >>> + * @vm: vm providing the BOs
> >>> + *
> >>> + * Move all BOs to the end of LRU and remember their positions to put them
> >>> + * together.
> >>> + */
> >>> +static void
> >>> +amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> >>> +{
> >>> +	struct ttm_bo_global *glob = adev->mman.bdev.glob;
> >>> +
> >>> +	spin_lock(&glob->lru_lock);
> >>> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->idle);
> >>> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->relocated);
> >>> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->moved);
> >> The moved list is working under vm->moved_lock, so we may need to hold that
> >> as well; otherwise, use the same lock for both.
> >> (not sure about the history behind moved_lock)
> > We don't actually remove them from the moved list, we just move bo->lru to
> > the end of the LRU. So the moved_lock isn't needed here.
> >
> >>> +	spin_unlock(&glob->lru_lock);
> >>> +}
> >>> +
> >>> +/**
> >>>     * amdgpu_vm_validate_pt_bos - validate the page table BOs
> >>>     *
> >>>     * @adev: amdgpu device pointer
> >>> @@ -286,6 +333,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> >>>    {
> >>>    	struct ttm_bo_global *glob = adev->mman.bdev.glob;
> >>>    	struct amdgpu_vm_bo_base *bo_base, *tmp;
> >>> +	bool validated = false;
> >>>    	int r = 0;
> >>>
> >>>    	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
> >>> @@ -295,14 +343,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> >>>    			r = validate(param, bo);
> >>>    			if (r)
> >>>    				break;
> >>> -
> >>> -			spin_lock(&glob->lru_lock);
> >>> -			ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> >>> -			if (bo->shadow)
> >>> -				ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> >>> -			spin_unlock(&glob->lru_lock);
> >>>    		}
> >>>
> >>> +		validated = true;
> >>>    		if (bo->tbo.type != ttm_bo_type_kernel) {
> >>>    			spin_lock(&vm->moved_lock);
> >>>    			list_move(&bo_base->vm_status, &vm->moved);
> >>> @@ -312,18 +355,16 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> >>>    		}
> >>>    	}
> >>>
> >>> -	spin_lock(&glob->lru_lock);
> >>> -	list_for_each_entry(bo_base, &vm->idle, vm_status) {
> >>> -		struct amdgpu_bo *bo = bo_base->bo;
> >>> +	if (!validated) {
> >>> +		spin_lock(&glob->lru_lock);
> >>> +		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
> >> To confirm:
> >> we only do the actual bulk move when there are no evicted BOs and no
> >> validation failures?
> >>
> > Yes. If some BOs were evicted, they will be moved to the moved/relocated
> > lists after validation, and then we need to update the positions for the
> > bulk move.
> 
> Ah, crap that won't work. Jerry pointed out a quite important bug here.
> 
> The moved list contains both per-VM and independent BOs, so walking it and
> moving everything on the LRU won't work as expected.
> 

Our purpose is not to move the independent BOs (those shared with other
VMs), right?

> Probably better to just walk the idle list after we are done with the 
> state machine.
> 

If we only walk the idle list here, we probably won't cover all the per-VM
BOs, right? Or should we walk the idle list after command submission?

Thanks,
Ray

> Christian.
> 
> >
> > Thanks,
> > Ray
> >
> >> Regards,
> >> Jerry
> >>
> >>> +		spin_unlock(&glob->lru_lock);
> >>> +		return 0;
> >>> +	}
> >>>
> >>> -		if (!bo->parent)
> >>> -			continue;
> >>> +	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
> >>>
> >>> -		ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> >>> -		if (bo->shadow)
> >>> -			ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> >>> -	}
> >>> -	spin_unlock(&glob->lru_lock);
> >>> +	amdgpu_vm_move_to_lru_tail(adev, vm);
> >>>
> >>>    	return r;
> >>>    }
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> >>> index 67a15d4..92725ac 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> >>> @@ -29,6 +29,7 @@
> >>>    #include <linux/rbtree.h>
> >>>    #include <drm/gpu_scheduler.h>
> >>>    #include <drm/drm_file.h>
> >>> +#include <drm/ttm/ttm_bo_driver.h>
> >>>
> >>>    #include "amdgpu_sync.h"
> >>>    #include "amdgpu_ring.h"
> >>> @@ -226,6 +227,9 @@ struct amdgpu_vm {
> >>>
> >>>    	/* Some basic info about the task */
> >>>    	struct amdgpu_task_info task_info;
> >>> +
> >>> +	/* Store positions of group of BOs */
> >>> +	struct ttm_lru_bulk_move lru_bulk_move;
> >>>    };
> >>>
> >>>    struct amdgpu_vm_manager {
> >>>
> 

* Re: [PATCH v3 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)
  2018-08-14  7:24               ` Huang Rui
@ 2018-08-14  7:35                 ` Christian König
       [not found]                   ` <e1635b5e-e5a1-c4cf-005c-1920c6fc86e0-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Christian König @ 2018-08-14  7:35 UTC (permalink / raw)
  To: Huang Rui, Koenig, Christian
  Cc: Zhang, Jerry, dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 14.08.2018 at 09:24, Huang Rui wrote:
> On Tue, Aug 14, 2018 at 02:45:22PM +0800, Koenig, Christian wrote:
>> On 14.08.2018 at 05:05, Huang Rui wrote:
>>> On Tue, Aug 14, 2018 at 10:26:43AM +0800, Zhang, Jerry wrote:
>>>> On 08/13/2018 05:58 PM, Huang Rui wrote:
>>>>> I continue to work on the bulk move based on the proposal by Christian.
>>>>>
>>>>> Background:
>>>>> The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and
>>>>> then moves each of them to the end of the LRU list one by one. That causes
>>>>> many BOs to be moved to the end of the LRU, which seriously impacts
>>>>> performance.
>>>>>
>>>>> Then Christian provided a workaround to not move PD/PT BOs on the LRU with
>>>>> the patch below:
>>>>> "drm/amdgpu: band aid validating VM PTs"
>>>>> Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
>>>>>
>>>>> However, the final solution should bulk move all PD/PT and PerVM BOs on the LRU
>>>>> instead of one by one.
>>>>>
>>>>> Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
>>>>> validated we move all BOs together to the end of the LRU without dropping the
>>>>> lock for the LRU.
>>>>>
>>>>> While doing so we note the beginning and end of this block in the LRU list.
>>>>>
>>>>> Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
>>>>> we don't move every BO one by one, but instead cut the LRU list into pieces so
>>>>> that we bulk move everything to the end in just one operation.
>>>>>
>>>>> Test data:
>>>>> +--------------+-----------------+-----------+---------------------------------------+
>>>>> |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
>>>>> |              |Principle(Vulkan)|           |                                       |
>>>>> +------------------------------------------------------------------------------------+
>>>>> |              |                 |           |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
>>>>> | Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
>>>>> +------------------------------------------------------------------------------------+
>>>>> | Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
>>>>> |(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
>>>>> |PT BOs on LRU)|                 |           |                                       |
>>>>> +------------------------------------------------------------------------------------+
>>>>> | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
>>>>> |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
>>>>> +--------------+-----------------+-----------+---------------------------------------+
>>>>>
>>>>> After testing with the above three benchmarks, which include Vulkan and
>>>>> OpenCL, we can see a visible improvement over the original, and even better
>>>>> results than the original with the workaround.
>>>>>
>>>>> v2: move all BOs, including those on the idle, relocated, and moved lists,
>>>>> to the end of the LRU and keep them together.
>>>>> v3: remove the unused parameter and use list_for_each_entry instead of the
>>>>> safe variant.
>>>>>
>>>>> Signed-off-by: Christian König <christian.koenig@amd.com>
>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
>>>>> ---
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 73 ++++++++++++++++++++++++++--------
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
>>>>>     2 files changed, 61 insertions(+), 16 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> index 9c84770..ee1af53 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> @@ -268,6 +268,53 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
>>>>>     }
>>>>>
>>>>>     /**
>>>>> + * amdgpu_vm_move_to_lru_tail_by_list - move one list of BOs to end of LRU
>>>>> + *
>>>>> + * @vm: vm providing the BOs
>>>>> + * @list: the list that stored BOs
>>>>> + *
>>>>> + * Move one list of BOs to the end of LRU and update the positions.
>>>>> + */
>>>>> +static void
>>>>> +amdgpu_vm_move_to_lru_tail_by_list(struct amdgpu_vm *vm, struct list_head *list)
>>>>> +{
>>>>> +	struct amdgpu_vm_bo_base *bo_base;
>>>>> +
>>>>> +	list_for_each_entry(bo_base, list, vm_status) {
>>>>> +		struct amdgpu_bo *bo = bo_base->bo;
>>>>> +
>>>>> +		if (!bo->parent)
>>>>> +			continue;
>>>>> +
>>>>> +		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
>>>>> +		if (bo->shadow)
>>>>> +			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
>>>>> +						&vm->lru_bulk_move);
>>>>> +	}
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> + * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
>>>>> + *
>>>>> + * @adev: amdgpu device pointer
>>>>> + * @vm: vm providing the BOs
>>>>> + *
>>>>> + * Move all BOs to the end of LRU and remember their positions to put them
>>>>> + * together.
>>>>> + */
>>>>> +static void
>>>>> +amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>>> +{
>>>>> +	struct ttm_bo_global *glob = adev->mman.bdev.glob;
>>>>> +
>>>>> +	spin_lock(&glob->lru_lock);
>>>>> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->idle);
>>>>> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->relocated);
>>>>> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->moved);
>>>> The moved list is working under vm->moved_lock, so we may need to hold that
>>>> as well; otherwise, use the same lock for both.
>>>> (not sure about the history behind moved_lock)
>>> We don't actually remove them from the moved list, we just move bo->lru to
>>> the end of the LRU. So the moved_lock isn't needed here.
>>>
>>>>> +	spin_unlock(&glob->lru_lock);
>>>>> +}
>>>>> +
>>>>> +/**
>>>>>      * amdgpu_vm_validate_pt_bos - validate the page table BOs
>>>>>      *
>>>>>      * @adev: amdgpu device pointer
>>>>> @@ -286,6 +333,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>>>     {
>>>>>     	struct ttm_bo_global *glob = adev->mman.bdev.glob;
>>>>>     	struct amdgpu_vm_bo_base *bo_base, *tmp;
>>>>> +	bool validated = false;
>>>>>     	int r = 0;
>>>>>
>>>>>     	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
>>>>> @@ -295,14 +343,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>>>     			r = validate(param, bo);
>>>>>     			if (r)
>>>>>     				break;
>>>>> -
>>>>> -			spin_lock(&glob->lru_lock);
>>>>> -			ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
>>>>> -			if (bo->shadow)
>>>>> -				ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
>>>>> -			spin_unlock(&glob->lru_lock);
>>>>>     		}
>>>>>
>>>>> +		validated = true;
>>>>>     		if (bo->tbo.type != ttm_bo_type_kernel) {
>>>>>     			spin_lock(&vm->moved_lock);
>>>>>     			list_move(&bo_base->vm_status, &vm->moved);
>>>>> @@ -312,18 +355,16 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>>>     		}
>>>>>     	}
>>>>>
>>>>> -	spin_lock(&glob->lru_lock);
>>>>> -	list_for_each_entry(bo_base, &vm->idle, vm_status) {
>>>>> -		struct amdgpu_bo *bo = bo_base->bo;
>>>>> +	if (!validated) {
>>>>> +		spin_lock(&glob->lru_lock);
>>>>> +		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
>>>> To confirm:
>>>> we only do the actual bulk move when there are no evicted BOs and no
>>>> validation failures?
>>>>
>>> Yes. If some BOs were evicted, they will be moved to the moved/relocated
>>> lists after validation, and then we need to update the positions for the
>>> bulk move.
>> Ah, crap that won't work. Jerry pointed out a quite important bug here.
>>
>> The moved list contains both per-VM and independent BOs, so walking it and
>> moving everything on the LRU won't work as expected.
>>
> Our purpose is not to move the independent BOs (those shared with other
> VMs), right?
>
>> Probably better to just walk the idle list after we are done with the
>> state machine.
>>
> If we only walk the idle list here, we probably won't cover all the per-VM
> BOs, right? Or should we walk the idle list after command submission?

Walking the idle list after command submission. At this point all BOs
should be on there, except for the evicted ones, and we can handle those
separately.
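
Roughly like this, as a sketch (hypothetical helper, called from the CS
path once the submission is done):

/* Sketch only: rebuild the bulk move from the idle list alone. */
static void sketch_vm_move_idle_to_lru_tail(struct amdgpu_device *adev,
					    struct amdgpu_vm *vm)
{
	struct ttm_bo_global *glob = adev->mman.bdev.glob;
	struct amdgpu_vm_bo_base *bo_base;

	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));

	spin_lock(&glob->lru_lock);
	list_for_each_entry(bo_base, &vm->idle, vm_status) {
		struct amdgpu_bo *bo = bo_base->bo;

		if (!bo->parent)
			continue;

		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
		if (bo->shadow)
			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
						&vm->lru_bulk_move);
	}
	spin_unlock(&glob->lru_lock);
}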

Regards,
Christian.

>
> Thanks,
> Ray
>
>> Christian.
>>
>>> Thanks,
>>> Ray
>>>
>>>> Regards,
>>>> Jerry
>>>>
>>>>> +		spin_unlock(&glob->lru_lock);
>>>>> +		return 0;
>>>>> +	}
>>>>>
>>>>> -		if (!bo->parent)
>>>>> -			continue;
>>>>> +	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
>>>>>
>>>>> -		ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
>>>>> -		if (bo->shadow)
>>>>> -			ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
>>>>> -	}
>>>>> -	spin_unlock(&glob->lru_lock);
>>>>> +	amdgpu_vm_move_to_lru_tail(adev, vm);
>>>>>
>>>>>     	return r;
>>>>>     }
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>> index 67a15d4..92725ac 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>> @@ -29,6 +29,7 @@
>>>>>     #include <linux/rbtree.h>
>>>>>     #include <drm/gpu_scheduler.h>
>>>>>     #include <drm/drm_file.h>
>>>>> +#include <drm/ttm/ttm_bo_driver.h>
>>>>>
>>>>>     #include "amdgpu_sync.h"
>>>>>     #include "amdgpu_ring.h"
>>>>> @@ -226,6 +227,9 @@ struct amdgpu_vm {
>>>>>
>>>>>     	/* Some basic info about the task */
>>>>>     	struct amdgpu_task_info task_info;
>>>>> +
>>>>> +	/* Store positions of group of BOs */
>>>>> +	struct ttm_lru_bulk_move lru_bulk_move;
>>>>>     };
>>>>>
>>>>>     struct amdgpu_vm_manager {
>>>>>

* Re: [PATCH v3 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)
       [not found]                   ` <e1635b5e-e5a1-c4cf-005c-1920c6fc86e0-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2018-08-14  8:17                     ` Huang Rui
  0 siblings, 0 replies; 19+ messages in thread
From: Huang Rui @ 2018-08-14  8:17 UTC (permalink / raw)
  To: christian.koenig-5C7GfCeVMHo
  Cc: Zhang, Jerry, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On Tue, Aug 14, 2018 at 09:35:50AM +0200, Christian König wrote:
> On 14.08.2018 at 09:24, Huang Rui wrote:
> >On Tue, Aug 14, 2018 at 02:45:22PM +0800, Koenig, Christian wrote:
> >>On 14.08.2018 at 05:05, Huang Rui wrote:
> >>>On Tue, Aug 14, 2018 at 10:26:43AM +0800, Zhang, Jerry wrote:
> >>>>On 08/13/2018 05:58 PM, Huang Rui wrote:
> >>>>>I continue to work on the bulk move based on the proposal by Christian.
> >>>>>
> >>>>>Background:
> >>>>>The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and
> >>>>>then moves each of them to the end of the LRU list one by one. That causes
> >>>>>many BOs to be moved to the end of the LRU, which seriously impacts
> >>>>>performance.
> >>>>>
> >>>>>Then Christian provided a workaround to not move PD/PT BOs on the LRU with
> >>>>>the patch below:
> >>>>>"drm/amdgpu: band aid validating VM PTs"
> >>>>>Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
> >>>>>
> >>>>>However, the final solution should bulk move all PD/PT and PerVM BOs on the LRU
> >>>>>instead of one by one.
> >>>>>
> >>>>>Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
> >>>>>validated we move all BOs together to the end of the LRU without dropping the
> >>>>>lock for the LRU.
> >>>>>
> >>>>>While doing so we note the beginning and end of this block in the LRU list.
> >>>>>
> >>>>>Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
> >>>>>we don't move every BO one by one, but instead cut the LRU list into pieces so
> >>>>>that we bulk move everything to the end in just one operation.
> >>>>>
> >>>>>Test data:
> >>>>>+--------------+-----------------+-----------+---------------------------------------+
> >>>>>|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> >>>>>|              |Principle(Vulkan)|           |                                       |
> >>>>>+------------------------------------------------------------------------------------+
> >>>>>|              |                 |           |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
> >>>>>| Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
> >>>>>+------------------------------------------------------------------------------------+
> >>>>>| Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
> >>>>>|(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
> >>>>>|PT BOs on LRU)|                 |           |                                       |
> >>>>>+------------------------------------------------------------------------------------+
> >>>>>| Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> >>>>>|              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> >>>>>+--------------+-----------------+-----------+---------------------------------------+
> >>>>>
> >>>>>After testing with the above three benchmarks, which include Vulkan and
> >>>>>OpenCL, we can see a visible improvement over the original, and even better
> >>>>>results than the original with the workaround.
> >>>>>
> >>>>>v2: move all BOs, including those on the idle, relocated, and moved lists,
> >>>>>to the end of the LRU and keep them together.
> >>>>>v3: remove the unused parameter and use list_for_each_entry instead of the
> >>>>>safe variant.
> >>>>>
> >>>>>Signed-off-by: Christian König <christian.koenig@amd.com>
> >>>>>Signed-off-by: Huang Rui <ray.huang@amd.com>
> >>>>>Tested-by: Mike Lothian <mike@fireburn.co.uk>
> >>>>>---
> >>>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 73 ++++++++++++++++++++++++++--------
> >>>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
> >>>>>    2 files changed, 61 insertions(+), 16 deletions(-)
> >>>>>
> >>>>>diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>>>>index 9c84770..ee1af53 100644
> >>>>>--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>>>>+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> >>>>>@@ -268,6 +268,53 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
> >>>>>    }
> >>>>>
> >>>>>    /**
> >>>>>+ * amdgpu_vm_move_to_lru_tail_by_list - move one list of BOs to end of LRU
> >>>>>+ *
> >>>>>+ * @vm: vm providing the BOs
> >>>>>+ * @list: the list that stored BOs
> >>>>>+ *
> >>>>>+ * Move one list of BOs to the end of LRU and update the positions.
> >>>>>+ */
> >>>>>+static void
> >>>>>+amdgpu_vm_move_to_lru_tail_by_list(struct amdgpu_vm *vm, struct list_head *list)
> >>>>>+{
> >>>>>+	struct amdgpu_vm_bo_base *bo_base;
> >>>>>+
> >>>>>+	list_for_each_entry(bo_base, list, vm_status) {
> >>>>>+		struct amdgpu_bo *bo = bo_base->bo;
> >>>>>+
> >>>>>+		if (!bo->parent)
> >>>>>+			continue;
> >>>>>+
> >>>>>+		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
> >>>>>+		if (bo->shadow)
> >>>>>+			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
> >>>>>+						&vm->lru_bulk_move);
> >>>>>+	}
> >>>>>+}
> >>>>>+
> >>>>>+/**
> >>>>>+ * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
> >>>>>+ *
> >>>>>+ * @adev: amdgpu device pointer
> >>>>>+ * @vm: vm providing the BOs
> >>>>>+ *
> >>>>>+ * Move all BOs to the end of LRU and remember their positions to put them
> >>>>>+ * together.
> >>>>>+ */
> >>>>>+static void
> >>>>>+amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> >>>>>+{
> >>>>>+	struct ttm_bo_global *glob = adev->mman.bdev.glob;
> >>>>>+
> >>>>>+	spin_lock(&glob->lru_lock);
> >>>>>+	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->idle);
> >>>>>+	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->relocated);
> >>>>>+	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->moved);
> >>>>The moved list is working under vm->moved_lock, so we may need to hold that
> >>>>as well; otherwise, use the same lock for both.
> >>>>(not sure about the history behind moved_lock)
> >>>We don't actually remove them from the moved list, we just move bo->lru to
> >>>the end of the LRU. So the moved_lock isn't needed here.
> >>>
> >>>>>+	spin_unlock(&glob->lru_lock);
> >>>>>+}
> >>>>>+
> >>>>>+/**
> >>>>>     * amdgpu_vm_validate_pt_bos - validate the page table BOs
> >>>>>     *
> >>>>>     * @adev: amdgpu device pointer
> >>>>>@@ -286,6 +333,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> >>>>>    {
> >>>>>    	struct ttm_bo_global *glob = adev->mman.bdev.glob;
> >>>>>    	struct amdgpu_vm_bo_base *bo_base, *tmp;
> >>>>>+	bool validated = false;
> >>>>>    	int r = 0;
> >>>>>
> >>>>>    	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
> >>>>>@@ -295,14 +343,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> >>>>>    			r = validate(param, bo);
> >>>>>    			if (r)
> >>>>>    				break;
> >>>>>-
> >>>>>-			spin_lock(&glob->lru_lock);
> >>>>>-			ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> >>>>>-			if (bo->shadow)
> >>>>>-				ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> >>>>>-			spin_unlock(&glob->lru_lock);
> >>>>>    		}
> >>>>>
> >>>>>+		validated = true;
> >>>>>    		if (bo->tbo.type != ttm_bo_type_kernel) {
> >>>>>    			spin_lock(&vm->moved_lock);
> >>>>>    			list_move(&bo_base->vm_status, &vm->moved);
> >>>>>@@ -312,18 +355,16 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> >>>>>    		}
> >>>>>    	}
> >>>>>
> >>>>>-	spin_lock(&glob->lru_lock);
> >>>>>-	list_for_each_entry(bo_base, &vm->idle, vm_status) {
> >>>>>-		struct amdgpu_bo *bo = bo_base->bo;
> >>>>>+	if (!validated) {
> >>>>>+		spin_lock(&glob->lru_lock);
> >>>>>+		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
> >>>>To confirm:
> >>>>we only do the actual bulk move when there are no evicted BOs and no
> >>>>validation failures?
> >>>>
> >>>Yes. If some BOs were evicted, they will be moved to the moved/relocated
> >>>lists after validation, and then we need to update the positions for the
> >>>bulk move.
> >>Ah, crap that won't work. Jerry pointed out a quite important bug here.
> >>
> >>The moved list contains both per-VM and independent BOs, so walking it and
> >>moving everything on the LRU won't work as expected.
> >>
> >Our purpose is not to move the independent BOs (those shared with other
> >VMs), right?
> >
> >>Probably better to just walk the idle list after we are done with the
> >>state machine.
> >>
> >If we only walk the idle list here, we probably won't cover all the per-VM
> >BOs, right? Or should we walk the idle list after command submission?
> 
> Walking the idle list after command submission. At this point all
> BOs should be on there, except for the evicted ones, and we can
> handle those separately.
> 

Thanks. BTW, could you please elaborate a bit more on the state machine?
I also see it mentioned in the comment for the idle list.

/* All BOs of this VM not currently in the state machine */
struct list_head        idle;
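
My current reading of the transitions, just as a sketch (please correct
me if I got it wrong):

	evicted  --validate()-->  relocated (PDs/PTs) or moved (per-VM BOs)
	relocated/moved  --page table updates done-->  idle
	idle  --eviction-->  evicted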

Thanks,
Ray
> Regards,
> Christian.
> 
> >
> >Thanks,
> >Ray
> >
> >>Christian.
> >>
> >>>Thanks,
> >>>Ray
> >>>
> >>>>Regards,
> >>>>Jerry
> >>>>
> >>>>>+		spin_unlock(&glob->lru_lock);
> >>>>>+		return 0;
> >>>>>+	}
> >>>>>
> >>>>>-		if (!bo->parent)
> >>>>>-			continue;
> >>>>>+	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
> >>>>>
> >>>>>-		ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> >>>>>-		if (bo->shadow)
> >>>>>-			ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> >>>>>-	}
> >>>>>-	spin_unlock(&glob->lru_lock);
> >>>>>+	amdgpu_vm_move_to_lru_tail(adev, vm);
> >>>>>
> >>>>>    	return r;
> >>>>>    }
> >>>>>diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> >>>>>index 67a15d4..92725ac 100644
> >>>>>--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> >>>>>+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> >>>>>@@ -29,6 +29,7 @@
> >>>>>    #include <linux/rbtree.h>
> >>>>>    #include <drm/gpu_scheduler.h>
> >>>>>    #include <drm/drm_file.h>
> >>>>>+#include <drm/ttm/ttm_bo_driver.h>
> >>>>>
> >>>>>    #include "amdgpu_sync.h"
> >>>>>    #include "amdgpu_ring.h"
> >>>>>@@ -226,6 +227,9 @@ struct amdgpu_vm {
> >>>>>
> >>>>>    	/* Some basic info about the task */
> >>>>>    	struct amdgpu_task_info task_info;
> >>>>>+
> >>>>>+	/* Store positions of group of BOs */
> >>>>>+	struct ttm_lru_bulk_move lru_bulk_move;
> >>>>>    };
> >>>>>
> >>>>>    struct amdgpu_vm_manager {
> >>>>>

* Re: [PATCH v3 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality
  2018-08-13  9:58 [PATCH v3 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality Huang Rui
  2018-08-13  9:58 ` [PATCH v3 3/5] drm/ttm: add bulk move function on LRU Huang Rui
       [not found] ` <1534154331-11810-1-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
@ 2018-08-16  0:41 ` Dieter Nützel
       [not found]   ` <70f3ba4a773f5ee3d1c46bc63991702a-0hun7QTegEsDD4udEopG9Q@public.gmane.org>
  2 siblings, 1 reply; 19+ messages in thread
From: Dieter Nützel @ 2018-08-16  0:41 UTC (permalink / raw)
  To: Huang Rui; +Cc: amd-gfx, dri-devel

For the series

Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>

on RX580,
amd-staging-drm-next
#5024f8dfe478

Dieter

On 13.08.2018 at 11:58, Huang Rui wrote:
> The idea and proposal are originally from Christian, and I continue the
> work to deliver it.
> 
> Background:
> The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and
> then moves each of them to the end of the LRU list one by one. That causes
> many BOs to be moved to the end of the LRU, which seriously impacts
> performance.
> 
> Then Christian provided a workaround to not move PD/PT BOs on the LRU with
> the patch below:
> "drm/amdgpu: band aid validating VM PTs"
> Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
> 
> However, the final solution should bulk move all PD/PT and PerVM BOs on 
> the LRU
> instead of one by one.
> 
> Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which 
> need to be
> validated we move all BOs together to the end of the LRU without 
> dropping the
> lock for the LRU.
> 
> While doing so we note the beginning and end of this block in the LRU 
> list.
> 
> Now when amdgpu_vm_validate_pt_bos() is called and we don't have 
> anything to do,
> we don't move every BO one by one, but instead cut the LRU list into 
> pieces so
> that we bulk move everything to the end in just one operation.
> 
> Test data:
> +--------------+-----------------+-----------+---------------------------------------+
> |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> |              |Principle(Vulkan)|           |                                       |
> +------------------------------------------------------------------------------------+
> |              |                 |           |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
> | Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
> +------------------------------------------------------------------------------------+
> | Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
> |(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
> |PT BOs on LRU)|                 |           |                                       |
> +------------------------------------------------------------------------------------+
> | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> +--------------+-----------------+-----------+---------------------------------------+
> 
> After testing with the above three benchmarks, which include Vulkan and
> OpenCL, we can see a visible improvement over the original, and even better
> results than the original with the workaround.
> 
> Changes from V1 -> V2:
> - Fix missed BOs on the relocated/moved lists that should also be moved to
>   the end of the LRU.
> 
> Changes from V2 -> V3:
> - Remove the unused parameter and use list_for_each_entry instead of the
>   safe variant.
> 
> Thanks,
> Rui
> 
> Christian König (2):
>   drm/ttm: add helper structures for bulk moves on lru list
>   drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves
> 
> Huang Rui (3):
>   drm/ttm: add bulk move function on LRU
>   drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)
>   drm/amdgpu: move PD/PT bos on LRU again
> 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 75 ++++++++++++++++++++++++--------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
>  drivers/gpu/drm/ttm/ttm_bo.c           | 78 +++++++++++++++++++++++++++++++++-
>  include/drm/ttm/ttm_bo_api.h           | 16 ++++++-
>  include/drm/ttm/ttm_bo_driver.h        | 28 ++++++++++++
>  5 files changed, 182 insertions(+), 19 deletions(-)

* Re: [PATCH v3 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality
       [not found]   ` <70f3ba4a773f5ee3d1c46bc63991702a-0hun7QTegEsDD4udEopG9Q@public.gmane.org>
@ 2018-08-17 10:06     ` Huang Rui
  0 siblings, 0 replies; 19+ messages in thread
From: Huang Rui @ 2018-08-17 10:06 UTC (permalink / raw)
  To: Dieter Nützel
  Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On Thu, Aug 16, 2018 at 08:41:44AM +0800, Dieter Nützel wrote:
> For the series
> 
> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
> 
> on RX580,
> amd-staging-drm-next
> #5024f8dfe478
> 

Thank you so much, I will add your Tested-by in the next version.

Thanks,
Ray

> Dieter
> 
> On 13.08.2018 at 11:58, Huang Rui wrote:
> > The idea and proposal are originally from Christian, and I continue the
> > work to deliver it.
> > 
> > Background:
> > The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and
> > then moves each of them to the end of the LRU list one by one. That causes
> > many BOs to be moved to the end of the LRU, which seriously impacts
> > performance.
> > 
> > Then Christian provided a workaround to not move PD/PT BOs on the LRU with
> > the patch below:
> > "drm/amdgpu: band aid validating VM PTs"
> > Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
> > 
> > However, the final solution should bulk move all PD/PT and PerVM BOs on 
> > the LRU
> > instead of one by one.
> > 
> > Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which 
> > need to be
> > validated we move all BOs together to the end of the LRU without 
> > dropping the
> > lock for the LRU.
> > 
> > While doing so we note the beginning and end of this block in the LRU 
> > list.
> > 
> > Now when amdgpu_vm_validate_pt_bos() is called and we don't have 
> > anything to do,
> > we don't move every BO one by one, but instead cut the LRU list into 
> > pieces so
> > that we bulk move everything to the end in just one operation.
> > 
> > Test data:
> > +--------------+-----------------+-----------+---------------------------------------+
> > |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> > |              |Principle(Vulkan)|           |                                       |
> > +------------------------------------------------------------------------------------+
> > |              |                 |           |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
> > | Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
> > +------------------------------------------------------------------------------------+
> > | Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
> > |(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
> > |PT BOs on LRU)|                 |           |                                       |
> > +------------------------------------------------------------------------------------+
> > | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> > |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> > +--------------+-----------------+-----------+---------------------------------------+
> > 
> > After testing with the above three benchmarks, which include Vulkan and
> > OpenCL, we can see a visible improvement over the original, and even better
> > results than the original with the workaround.
> > 
> > Changes from V1 -> V2:
> > - Fix missed BOs on the relocated/moved lists that should also be moved to
> >   the end of the LRU.
> > 
> > Changes from V2 -> V3:
> > - Remove the unused parameter and use list_for_each_entry instead of the
> >   safe variant.
> > 
> > Thanks,
> > Rui
> > 
> > Christian König (2):
> >   drm/ttm: add helper structures for bulk moves on lru list
> >   drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves
> > 
> > Huang Rui (3):
> >   drm/ttm: add bulk move function on LRU
> >   drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)
> >   drm/amdgpu: move PD/PT bos on LRU again
> > 
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 75 ++++++++++++++++++++++++--------
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
> >  drivers/gpu/drm/ttm/ttm_bo.c           | 78 +++++++++++++++++++++++++++++++++-
> >  include/drm/ttm/ttm_bo_api.h           | 16 ++++++-
> >  include/drm/ttm/ttm_bo_driver.h        | 28 ++++++++++++
> >  5 files changed, 182 insertions(+), 19 deletions(-)