* Gang submit v2
@ 2022-07-14 10:38 Christian König
  2022-07-14 10:38 ` [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq Christian König
                   ` (9 more replies)
  0 siblings, 10 replies; 22+ messages in thread
From: Christian König @ 2022-07-14 10:38 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu

Hi guys,

second round of this patch set. I've addressed the minor comments, fixed
the bugs Andrey and Yogesh found, and rebased everything.

The base is not the current amd-staging-drm-next branch, but rather Alex's
rebased version, since this set depends on the dma-resv usage rework. So
don't bother trying to apply it anywhere else.

The patches are also available at https://gitlab.freedesktop.org/ckoenig/linux-drm.git
on the amd-gang-submit branch.

Please review and comment,
Christian.




* [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq
  2022-07-14 10:38 Gang submit v2 Christian König
@ 2022-07-14 10:38 ` Christian König
  2022-07-14 15:43   ` Andrey Grodzovsky
  2022-07-14 10:38 ` [PATCH 02/10] drm/amdgpu: Protect the amdgpu_bo_list list with a mutex v2 Christian König
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 22+ messages in thread
From: Christian König @ 2022-07-14 10:38 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu
  Cc: Christian König, dri-devel

We already discussed that the call to drm_sched_entity_select_rq() needs
to move into drm_sched_job_arm() so that a new scheduler list can be set
between _init() and _arm(). For some reason that change was never applied.
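
As a hypothetical driver-side sketch of the intended flow (illustration
only, not part of this patch, error handling trimmed):

	r = drm_sched_job_init(&job->base, entity, owner);
	if (r)
		return r;

	/* The entity's scheduler list may still be changed here, e.g.
	 * with drm_sched_entity_modify_sched(), before a run queue is
	 * actually picked. */
	drm_sched_entity_modify_sched(entity, sched_list, num_scheds);

	/* drm_sched_job_arm() now performs drm_sched_entity_select_rq(). */
	drm_sched_job_arm(&job->base);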

Signed-off-by: Christian König <christian.koenig@amd.com>
CC: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
CC: dri-devel@lists.freedesktop.org
---
 drivers/gpu/drm/scheduler/sched_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 68317d3a7a27..e0ab14e0fb6b 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -592,7 +592,6 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		       struct drm_sched_entity *entity,
 		       void *owner)
 {
-	drm_sched_entity_select_rq(entity);
 	if (!entity->rq)
 		return -ENOENT;
 
@@ -628,7 +627,7 @@ void drm_sched_job_arm(struct drm_sched_job *job)
 	struct drm_sched_entity *entity = job->entity;
 
 	BUG_ON(!entity);
-
+	drm_sched_entity_select_rq(entity);
 	sched = entity->rq->sched;
 
 	job->sched = sched;
-- 
2.25.1



* [PATCH 02/10] drm/amdgpu: Protect the amdgpu_bo_list list with a mutex v2
  2022-07-14 10:38 Gang submit v2 Christian König
  2022-07-14 10:38 ` [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq Christian König
@ 2022-07-14 10:38 ` Christian König
  2022-07-14 15:10   ` Alex Deucher
  2022-07-14 19:10   ` Luben Tuikov
  2022-07-14 10:38 ` [PATCH 03/10] drm/amdgpu: revert "partial revert "remove ctx->lock" v2" Christian König
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 22+ messages in thread
From: Christian König @ 2022-07-14 10:38 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu
  Cc: Alex Deucher, Andrey Grodzovsky, Luben Tuikov,
	Christian König, Vitaly Prosyak

From: Luben Tuikov <luben.tuikov@amd.com>

Protect the struct amdgpu_bo_list with a mutex. The mutex is used during
command submission to avoid the buffer object corruption recorded in the
link below.

v2 (chk): Keep the mutex locked for the whole CS to avoid using the
	  list from multiple CS threads at the same time.
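
The resulting lock lifetime across the CS ioctl is roughly the following
(simplified outline of the call flow after this patch, not compilable
code):

	amdgpu_cs_ioctl()
	  amdgpu_cs_parser_bos()
	    mutex_lock(&p->bo_list->bo_list_mutex);   /* taken once the bo_list exists */
	    ...                                       /* dropped again on error paths  */
	  amdgpu_cs_vm_handling()                     /* dropped on its error paths    */
	  amdgpu_cs_submit()
	    ttm_eu_fence_buffer_objects(...);
	    mutex_unlock(&p->bo_list->bo_list_mutex); /* released after fencing the BOs */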

Suggested-by: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <Alexander.Deucher@amd.com>
Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
Cc: Vitaly Prosyak <Vitaly.Prosyak@amd.com>
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2048
Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h |  4 ++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c      | 16 +++++++++++++---
 3 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
index 714178f1b6c6..2168163aad2d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
@@ -40,7 +40,7 @@ static void amdgpu_bo_list_free_rcu(struct rcu_head *rcu)
 {
 	struct amdgpu_bo_list *list = container_of(rcu, struct amdgpu_bo_list,
 						   rhead);
-
+	mutex_destroy(&list->bo_list_mutex);
 	kvfree(list);
 }
 
@@ -136,6 +136,7 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
 
 	trace_amdgpu_cs_bo_status(list->num_entries, total_size);
 
+	mutex_init(&list->bo_list_mutex);
 	*result = list;
 	return 0;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
index 529d52a204cf..9caea1688fc3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
@@ -47,6 +47,10 @@ struct amdgpu_bo_list {
 	struct amdgpu_bo *oa_obj;
 	unsigned first_userptr;
 	unsigned num_entries;
+
+	/* Protect access during command submission.
+	 */
+	struct mutex bo_list_mutex;
 };
 
 int amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index b28af04b0c3e..d8f1335bc68f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -519,6 +519,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 			return r;
 	}
 
+	mutex_lock(&p->bo_list->bo_list_mutex);
+
 	/* One for TTM and one for the CS job */
 	amdgpu_bo_list_for_each_entry(e, p->bo_list)
 		e->tv.num_shared = 2;
@@ -651,6 +653,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 			kvfree(e->user_pages);
 			e->user_pages = NULL;
 		}
+		mutex_unlock(&p->bo_list->bo_list_mutex);
 	}
 	return r;
 }
@@ -690,9 +693,11 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
 {
 	unsigned i;
 
-	if (error && backoff)
+	if (error && backoff) {
 		ttm_eu_backoff_reservation(&parser->ticket,
 					   &parser->validated);
+		mutex_unlock(&parser->bo_list->bo_list_mutex);
+	}
 
 	for (i = 0; i < parser->num_post_deps; i++) {
 		drm_syncobj_put(parser->post_deps[i].syncobj);
@@ -832,12 +837,16 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 			continue;
 
 		r = amdgpu_vm_bo_update(adev, bo_va, false);
-		if (r)
+		if (r) {
+			mutex_unlock(&p->bo_list->bo_list_mutex);
 			return r;
+		}
 
 		r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
-		if (r)
+		if (r) {
+			mutex_unlock(&p->bo_list->bo_list_mutex);
 			return r;
+		}
 	}
 
 	r = amdgpu_vm_handle_moved(adev, vm);
@@ -1278,6 +1287,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 
 	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
 	mutex_unlock(&p->adev->notifier_lock);
+	mutex_unlock(&p->bo_list->bo_list_mutex);
 
 	return 0;
 
-- 
2.25.1



* [PATCH 03/10] drm/amdgpu: revert "partial revert "remove ctx->lock" v2"
  2022-07-14 10:38 Gang submit v2 Christian König
  2022-07-14 10:38 ` [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq Christian König
  2022-07-14 10:38 ` [PATCH 02/10] drm/amdgpu: Protect the amdgpu_bo_list list with a mutex v2 Christian König
@ 2022-07-14 10:38 ` Christian König
  2022-07-14 10:38 ` [PATCH 04/10] drm/amdgpu: use DMA_RESV_USAGE_BOOKKEEP Christian König
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 22+ messages in thread
From: Christian König @ 2022-07-14 10:38 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu
  Cc: Christian König

This reverts commit 94f4c4965e5513ba624488f4b601d6b385635aec.

We found that the bo_list was missing protection for its list entries.
Now that this is fixed, the workaround can be removed again.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 21 ++++++---------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c |  2 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h |  1 -
 3 files changed, 6 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index d8f1335bc68f..a3b8400c914e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -128,8 +128,6 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 		goto free_chunk;
 	}
 
-	mutex_lock(&p->ctx->lock);
-
 	/* skip guilty context job */
 	if (atomic_read(&p->ctx->guilty) == 1) {
 		ret = -ECANCELED;
@@ -708,7 +706,6 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
 	dma_fence_put(parser->fence);
 
 	if (parser->ctx) {
-		mutex_unlock(&parser->ctx->lock);
 		amdgpu_ctx_put(parser->ctx);
 	}
 	if (parser->bo_list)
@@ -1161,9 +1158,6 @@ static int amdgpu_cs_dependencies(struct amdgpu_device *adev,
 {
 	int i, r;
 
-	/* TODO: Investigate why we still need the context lock */
-	mutex_unlock(&p->ctx->lock);
-
 	for (i = 0; i < p->nchunks; ++i) {
 		struct amdgpu_cs_chunk *chunk;
 
@@ -1174,34 +1168,32 @@ static int amdgpu_cs_dependencies(struct amdgpu_device *adev,
 		case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
 			r = amdgpu_cs_process_fence_dep(p, chunk);
 			if (r)
-				goto out;
+				return r;
 			break;
 		case AMDGPU_CHUNK_ID_SYNCOBJ_IN:
 			r = amdgpu_cs_process_syncobj_in_dep(p, chunk);
 			if (r)
-				goto out;
+				return r;
 			break;
 		case AMDGPU_CHUNK_ID_SYNCOBJ_OUT:
 			r = amdgpu_cs_process_syncobj_out_dep(p, chunk);
 			if (r)
-				goto out;
+				return r;
 			break;
 		case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_WAIT:
 			r = amdgpu_cs_process_syncobj_timeline_in_dep(p, chunk);
 			if (r)
-				goto out;
+				return r;
 			break;
 		case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_SIGNAL:
 			r = amdgpu_cs_process_syncobj_timeline_out_dep(p, chunk);
 			if (r)
-				goto out;
+				return r;
 			break;
 		}
 	}
 
-out:
-	mutex_lock(&p->ctx->lock);
-	return r;
+	return 0;
 }
 
 static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p)
@@ -1363,7 +1355,6 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 		goto out;
 
 	r = amdgpu_cs_submit(&parser, cs);
-
 out:
 	amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 2ef5296216d6..53f9268366f2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -286,7 +286,6 @@ static int amdgpu_ctx_init(struct amdgpu_ctx_mgr *mgr, int32_t priority,
 	kref_init(&ctx->refcount);
 	ctx->mgr = mgr;
 	spin_lock_init(&ctx->ring_lock);
-	mutex_init(&ctx->lock);
 
 	ctx->reset_counter = atomic_read(&mgr->adev->gpu_reset_counter);
 	ctx->reset_counter_query = ctx->reset_counter;
@@ -401,7 +400,6 @@ static void amdgpu_ctx_fini(struct kref *ref)
 		drm_dev_exit(idx);
 	}
 
-	mutex_destroy(&ctx->lock);
 	kfree(ctx);
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
index cc7c8afff414..0fa0e56daf67 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
@@ -53,7 +53,6 @@ struct amdgpu_ctx {
 	bool				preamble_presented;
 	int32_t				init_priority;
 	int32_t				override_priority;
-	struct mutex			lock;
 	atomic_t			guilty;
 	unsigned long			ras_counter_ce;
 	unsigned long			ras_counter_ue;
-- 
2.25.1



* [PATCH 04/10] drm/amdgpu: use DMA_RESV_USAGE_BOOKKEEP
  2022-07-14 10:38 Gang submit v2 Christian König
                   ` (2 preceding siblings ...)
  2022-07-14 10:38 ` [PATCH 03/10] drm/amdgpu: revert "partial revert "remove ctx->lock" v2" Christian König
@ 2022-07-14 10:38 ` Christian König
  2022-07-14 10:38 ` [PATCH 05/10] drm/amdgpu: cleanup and reorder amdgpu_cs.c Christian König
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 22+ messages in thread
From: Christian König @ 2022-07-14 10:38 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu
  Cc: Christian König

Use DMA_RESV_USAGE_BOOKKEEP for VM page table updates and the KFD
preemption fence.
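
For context, a minimal sketch of the dma-resv usage semantics this relies
on (illustration only; resv and pt_update_fence are placeholder names):
fences added with DMA_RESV_USAGE_BOOKKEEP are only tracked for memory
management and are skipped by implicit synchronization, so page table
updates and the KFD preemption fence no longer stall unrelated
submissions.

	/* A fence added with BOOKKEEP usage is only tracked for memory
	 * management; it is not part of implicit synchronization. */
	dma_resv_add_fence(resv, pt_update_fence, DMA_RESV_USAGE_BOOKKEEP);

	/* Waiting for implicit sync (everything up to READ) therefore
	 * does not wait for the page table update above. */
	dma_resv_wait_timeout(resv, DMA_RESV_USAGE_READ, true,
			      MAX_SCHEDULE_TIMEOUT);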

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c      | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 581c7ae41102..c6de30cffa2e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -314,7 +314,7 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
 	 */
 	replacement = dma_fence_get_stub();
 	dma_resv_replace_fences(bo->tbo.base.resv, ef->base.context,
-				replacement, DMA_RESV_USAGE_READ);
+				replacement, DMA_RESV_USAGE_BOOKKEEP);
 	dma_fence_put(replacement);
 	return 0;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
index 1fd3cbca20a2..03ec099d64e0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
@@ -112,7 +112,8 @@ static int amdgpu_vm_sdma_commit(struct amdgpu_vm_update_params *p,
 		swap(p->vm->last_unlocked, tmp);
 		dma_fence_put(tmp);
 	} else {
-		amdgpu_bo_fence(p->vm->root.bo, f, true);
+		dma_resv_add_fence(p->vm->root.bo->tbo.base.resv, f,
+				   DMA_RESV_USAGE_BOOKKEEP);
 	}
 
 	if (fence && !p->immediate)
-- 
2.25.1



* [PATCH 05/10] drm/amdgpu: cleanup and reorder amdgpu_cs.c
  2022-07-14 10:38 Gang submit v2 Christian König
                   ` (3 preceding siblings ...)
  2022-07-14 10:38 ` [PATCH 04/10] drm/amdgpu: use DMA_RESV_USAGE_BOOKKEEP Christian König
@ 2022-07-14 10:38 ` Christian König
  2022-07-14 10:38 ` [PATCH 06/10] drm/amdgpu: remove SRIOV and MCBP dependencies from the CS Christian König
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 22+ messages in thread
From: Christian König @ 2022-07-14 10:38 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu
  Cc: Christian König

Sort the functions into the order in which they are called, clean up the
coding style, and rename functions to reflect the data they process.

Also check the size of the IB chunk and initialize the resulting entity
and scheduler job much earlier.
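
Roughly, the resulting structure is (simplified outline using the names
introduced below, not compilable code):

	amdgpu_cs_ioctl()
	  amdgpu_cs_parser_init()   /* get the ctx, reject guilty contexts     */
	  amdgpu_cs_pass1()         /* copy chunks, check sizes, p1_* helpers, */
	                            /* early drm_sched_job_init()              */
	  amdgpu_cs_pass2()         /* p2_* helpers: IBs, deps, syncobjs       */
	  amdgpu_cs_parser_bos()    /* reserve and validate the buffer objects */
	  ...                       /* VM handling, ring sync, submit          */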

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 1641 ++++++++++++------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h |    2 +-
 2 files changed, 817 insertions(+), 826 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index a3b8400c914e..b9de631a66a3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -39,9 +39,61 @@
 #include "amdgpu_gem.h"
 #include "amdgpu_ras.h"
 
-static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
-				      struct drm_amdgpu_cs_chunk_fence *data,
-				      uint32_t *offset)
+static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p,
+				 struct amdgpu_device *adev,
+				 struct drm_file *filp,
+				 union drm_amdgpu_cs *cs)
+{
+	struct amdgpu_fpriv *fpriv = filp->driver_priv;
+
+	if (cs->in.num_chunks == 0)
+		return -EINVAL;
+
+	memset(p, 0, sizeof(*p));
+	p->adev = adev;
+	p->filp = filp;
+
+	p->ctx = amdgpu_ctx_get(fpriv, cs->in.ctx_id);
+	if (!p->ctx)
+		return -EINVAL;
+
+	if (atomic_read(&p->ctx->guilty)) {
+		amdgpu_ctx_put(p->ctx);
+		return -ECANCELED;
+	}
+	return 0;
+}
+
+static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
+			   struct drm_amdgpu_cs_chunk_ib *chunk_ib,
+			   unsigned int *num_ibs)
+{
+	struct drm_sched_entity *entity;
+	int r;
+
+	r = amdgpu_ctx_get_entity(p->ctx, chunk_ib->ip_type,
+				  chunk_ib->ip_instance,
+				  chunk_ib->ring, &entity);
+	if (r)
+		return r;
+
+	/* Abort if there is no run queue associated with this entity.
+	 * Possibly because of disabled HW IP*/
+	if (entity->rq == NULL)
+		return -EINVAL;
+
+	/* Currently we don't support submitting to multiple entities */
+	if (p->entity && p->entity != entity)
+		return -EINVAL;
+
+	p->entity = entity;
+	++(*num_ibs);
+	return 0;
+}
+
+static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p,
+				   struct drm_amdgpu_cs_chunk_fence *data,
+				   uint32_t *offset)
 {
 	struct drm_gem_object *gobj;
 	struct amdgpu_bo *bo;
@@ -80,11 +132,11 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
 	return r;
 }
 
-static int amdgpu_cs_bo_handles_chunk(struct amdgpu_cs_parser *p,
-				      struct drm_amdgpu_bo_list_in *data)
+static int amdgpu_cs_p1_bo_handles(struct amdgpu_cs_parser *p,
+				   struct drm_amdgpu_bo_list_in *data)
 {
+	struct drm_amdgpu_bo_list_entry *info;
 	int r;
-	struct drm_amdgpu_bo_list_entry *info = NULL;
 
 	r = amdgpu_bo_create_list_entry_array(data, &info);
 	if (r)
@@ -104,7 +156,9 @@ static int amdgpu_cs_bo_handles_chunk(struct amdgpu_cs_parser *p,
 	return r;
 }
 
-static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs *cs)
+/* Copy the data from userspace and go over it the first time */
+static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
+			   union drm_amdgpu_cs *cs)
 {
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
 	struct amdgpu_vm *vm = &fpriv->vm;
@@ -112,28 +166,17 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 	uint64_t *chunk_array;
 	unsigned size, num_ibs = 0;
 	uint32_t uf_offset = 0;
-	int i;
 	int ret;
+	int i;
 
 	if (cs->in.num_chunks == 0)
 		return -EINVAL;
 
-	chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL);
+	chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t),
+				     GFP_KERNEL);
 	if (!chunk_array)
 		return -ENOMEM;
 
-	p->ctx = amdgpu_ctx_get(fpriv, cs->in.ctx_id);
-	if (!p->ctx) {
-		ret = -EINVAL;
-		goto free_chunk;
-	}
-
-	/* skip guilty context job */
-	if (atomic_read(&p->ctx->guilty) == 1) {
-		ret = -ECANCELED;
-		goto free_chunk;
-	}
-
 	/* get chunks */
 	chunk_array_user = u64_to_user_ptr(cs->in.chunks);
 	if (copy_from_user(chunk_array, chunk_array_user,
@@ -168,7 +211,8 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 		size = p->chunks[i].length_dw;
 		cdata = u64_to_user_ptr(user_chunk.chunk_data);
 
-		p->chunks[i].kdata = kvmalloc_array(size, sizeof(uint32_t), GFP_KERNEL);
+		p->chunks[i].kdata = kvmalloc_array(size, sizeof(uint32_t),
+						    GFP_KERNEL);
 		if (p->chunks[i].kdata == NULL) {
 			ret = -ENOMEM;
 			i--;
@@ -180,36 +224,35 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 			goto free_partial_kdata;
 		}
 
+		/* Assume the worst on the following checks */
+		ret = -EINVAL;
 		switch (p->chunks[i].chunk_id) {
 		case AMDGPU_CHUNK_ID_IB:
-			++num_ibs;
+			if (size < sizeof(struct drm_amdgpu_cs_chunk_ib))
+				goto free_partial_kdata;
+
+			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, &num_ibs);
+			if (ret)
+				goto free_partial_kdata;
 			break;
 
 		case AMDGPU_CHUNK_ID_FENCE:
-			size = sizeof(struct drm_amdgpu_cs_chunk_fence);
-			if (p->chunks[i].length_dw * sizeof(uint32_t) < size) {
-				ret = -EINVAL;
+			if (size < sizeof(struct drm_amdgpu_cs_chunk_fence))
 				goto free_partial_kdata;
-			}
 
-			ret = amdgpu_cs_user_fence_chunk(p, p->chunks[i].kdata,
-							 &uf_offset);
+			ret = amdgpu_cs_p1_user_fence(p, p->chunks[i].kdata,
+						      &uf_offset);
 			if (ret)
 				goto free_partial_kdata;
-
 			break;
 
 		case AMDGPU_CHUNK_ID_BO_HANDLES:
-			size = sizeof(struct drm_amdgpu_bo_list_in);
-			if (p->chunks[i].length_dw * sizeof(uint32_t) < size) {
-				ret = -EINVAL;
+			if (size < sizeof(struct drm_amdgpu_bo_list_in))
 				goto free_partial_kdata;
-			}
 
-			ret = amdgpu_cs_bo_handles_chunk(p, p->chunks[i].kdata);
+			ret = amdgpu_cs_p1_bo_handles(p, p->chunks[i].kdata);
 			if (ret)
 				goto free_partial_kdata;
-
 			break;
 
 		case AMDGPU_CHUNK_ID_DEPENDENCIES:
@@ -221,7 +264,6 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 			break;
 
 		default:
-			ret = -EINVAL;
 			goto free_partial_kdata;
 		}
 	}
@@ -230,6 +272,10 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 	if (ret)
 		goto free_all_kdata;
 
+	ret = drm_sched_job_init(&p->job->base, p->entity, &fpriv->vm);
+	if (ret)
+		goto free_all_kdata;
+
 	if (p->ctx->vram_lost_counter != p->job->vram_lost_counter) {
 		ret = -ECANCELED;
 		goto free_all_kdata;
@@ -258,941 +304,864 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 	return ret;
 }
 
-/* Convert microseconds to bytes. */
-static u64 us_to_bytes(struct amdgpu_device *adev, s64 us)
+static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
+			   struct amdgpu_cs_chunk *chunk,
+			   unsigned int *num_ibs,
+			   unsigned int *ce_preempt,
+			   unsigned int *de_preempt)
 {
-	if (us <= 0 || !adev->mm_stats.log2_max_MBps)
-		return 0;
+	struct amdgpu_ring *ring = to_amdgpu_ring(p->job->base.sched);
+	struct drm_amdgpu_cs_chunk_ib *chunk_ib = chunk->kdata;
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_ib *ib = &p->job->ibs[*num_ibs];
+	struct amdgpu_vm *vm = &fpriv->vm;
+	int r;
 
-	/* Since accum_us is incremented by a million per second, just
-	 * multiply it by the number of MB/s to get the number of bytes.
-	 */
-	return us << adev->mm_stats.log2_max_MBps;
-}
 
-static s64 bytes_to_us(struct amdgpu_device *adev, u64 bytes)
-{
-	if (!adev->mm_stats.log2_max_MBps)
-		return 0;
+	/* MM engine doesn't support user fences */
+	if (p->job->uf_addr && ring->funcs->no_user_fence)
+		return -EINVAL;
 
-	return bytes >> adev->mm_stats.log2_max_MBps;
-}
+	if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
+	    chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT &&
+	    (amdgpu_mcbp || amdgpu_sriov_vf(p->adev))) {
+		if (chunk_ib->flags & AMDGPU_IB_FLAG_CE)
+			(*ce_preempt)++;
+		else
+			(*de_preempt)++;
 
-/* Returns how many bytes TTM can move right now. If no bytes can be moved,
- * it returns 0. If it returns non-zero, it's OK to move at least one buffer,
- * which means it can go over the threshold once. If that happens, the driver
- * will be in debt and no other buffer migrations can be done until that debt
- * is repaid.
- *
- * This approach allows moving a buffer of any size (it's important to allow
- * that).
- *
- * The currency is simply time in microseconds and it increases as the clock
- * ticks. The accumulated microseconds (us) are converted to bytes and
- * returned.
- */
-static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
-					      u64 *max_bytes,
-					      u64 *max_vis_bytes)
-{
-	s64 time_us, increment_us;
-	u64 free_vram, total_vram, used_vram;
-	/* Allow a maximum of 200 accumulated ms. This is basically per-IB
-	 * throttling.
-	 *
-	 * It means that in order to get full max MBps, at least 5 IBs per
-	 * second must be submitted and not more than 200ms apart from each
-	 * other.
-	 */
-	const s64 us_upper_bound = 200000;
+		/* Each GFX command submit allows only 1 IB max
+		 * preemptible for CE & DE */
+		if (*ce_preempt > 1 || *de_preempt > 1)
+			return -EINVAL;
+	}
 
-	if (!adev->mm_stats.log2_max_MBps) {
-		*max_bytes = 0;
-		*max_vis_bytes = 0;
-		return;
+	if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
+		p->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
+
+	r =  amdgpu_ib_get(p->adev, vm, ring->funcs->parse_cs ?
+			   chunk_ib->ib_bytes : 0,
+			   AMDGPU_IB_POOL_DELAYED, ib);
+	if (r) {
+		DRM_ERROR("Failed to get ib !\n");
+		return r;
 	}
 
-	total_vram = adev->gmc.real_vram_size - atomic64_read(&adev->vram_pin_size);
-	used_vram = ttm_resource_manager_usage(&adev->mman.vram_mgr.manager);
-	free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram;
+	ib->gpu_addr = chunk_ib->va_start;
+	ib->length_dw = chunk_ib->ib_bytes / 4;
+	ib->flags = chunk_ib->flags;
 
-	spin_lock(&adev->mm_stats.lock);
+	(*num_ibs)++;
+	return 0;
+}
 
-	/* Increase the amount of accumulated us. */
-	time_us = ktime_to_us(ktime_get());
-	increment_us = time_us - adev->mm_stats.last_update_us;
-	adev->mm_stats.last_update_us = time_us;
-	adev->mm_stats.accum_us = min(adev->mm_stats.accum_us + increment_us,
-				      us_upper_bound);
+static int amdgpu_cs_p2_dependencies(struct amdgpu_cs_parser *p,
+				     struct amdgpu_cs_chunk *chunk)
+{
+	struct drm_amdgpu_cs_chunk_dep *deps = chunk->kdata;
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	unsigned num_deps;
+	int i, r;
 
-	/* This prevents the short period of low performance when the VRAM
-	 * usage is low and the driver is in debt or doesn't have enough
-	 * accumulated us to fill VRAM quickly.
-	 *
-	 * The situation can occur in these cases:
-	 * - a lot of VRAM is freed by userspace
-	 * - the presence of a big buffer causes a lot of evictions
-	 *   (solution: split buffers into smaller ones)
-	 *
-	 * If 128 MB or 1/8th of VRAM is free, start filling it now by setting
-	 * accum_us to a positive number.
-	 */
-	if (free_vram >= 128 * 1024 * 1024 || free_vram >= total_vram / 8) {
-		s64 min_us;
+	num_deps = chunk->length_dw * 4 /
+		sizeof(struct drm_amdgpu_cs_chunk_dep);
 
-		/* Be more aggressive on dGPUs. Try to fill a portion of free
-		 * VRAM now.
-		 */
-		if (!(adev->flags & AMD_IS_APU))
-			min_us = bytes_to_us(adev, free_vram / 4);
-		else
-			min_us = 0; /* Reset accum_us on APUs. */
+	for (i = 0; i < num_deps; ++i) {
+		struct amdgpu_ctx *ctx;
+		struct drm_sched_entity *entity;
+		struct dma_fence *fence;
 
-		adev->mm_stats.accum_us = max(min_us, adev->mm_stats.accum_us);
-	}
+		ctx = amdgpu_ctx_get(fpriv, deps[i].ctx_id);
+		if (ctx == NULL)
+			return -EINVAL;
 
-	/* This is set to 0 if the driver is in debt to disallow (optional)
-	 * buffer moves.
-	 */
-	*max_bytes = us_to_bytes(adev, adev->mm_stats.accum_us);
+		r = amdgpu_ctx_get_entity(ctx, deps[i].ip_type,
+					  deps[i].ip_instance,
+					  deps[i].ring, &entity);
+		if (r) {
+			amdgpu_ctx_put(ctx);
+			return r;
+		}
 
-	/* Do the same for visible VRAM if half of it is free */
-	if (!amdgpu_gmc_vram_full_visible(&adev->gmc)) {
-		u64 total_vis_vram = adev->gmc.visible_vram_size;
-		u64 used_vis_vram =
-		  amdgpu_vram_mgr_vis_usage(&adev->mman.vram_mgr);
+		fence = amdgpu_ctx_get_fence(ctx, entity, deps[i].handle);
+		amdgpu_ctx_put(ctx);
 
-		if (used_vis_vram < total_vis_vram) {
-			u64 free_vis_vram = total_vis_vram - used_vis_vram;
-			adev->mm_stats.accum_us_vis = min(adev->mm_stats.accum_us_vis +
-							  increment_us, us_upper_bound);
+		if (IS_ERR(fence))
+			return PTR_ERR(fence);
+		else if (!fence)
+			continue;
 
-			if (free_vis_vram >= total_vis_vram / 2)
-				adev->mm_stats.accum_us_vis =
-					max(bytes_to_us(adev, free_vis_vram / 2),
-					    adev->mm_stats.accum_us_vis);
+		if (chunk->chunk_id == AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES) {
+			struct drm_sched_fence *s_fence;
+			struct dma_fence *old = fence;
+
+			s_fence = to_drm_sched_fence(fence);
+			fence = dma_fence_get(&s_fence->scheduled);
+			dma_fence_put(old);
 		}
 
-		*max_vis_bytes = us_to_bytes(adev, adev->mm_stats.accum_us_vis);
-	} else {
-		*max_vis_bytes = 0;
+		r = amdgpu_sync_fence(&p->job->sync, fence);
+		dma_fence_put(fence);
+		if (r)
+			return r;
 	}
+	return 0;
+}
 
-	spin_unlock(&adev->mm_stats.lock);
+static int amdgpu_syncobj_lookup_and_add(struct amdgpu_cs_parser *p,
+					 uint32_t handle, u64 point,
+					 u64 flags)
+{
+	struct dma_fence *fence;
+	int r;
+
+	r = drm_syncobj_find_fence(p->filp, handle, point, flags, &fence);
+	if (r) {
+		DRM_ERROR("syncobj %u failed to find fence @ %llu (%d)!\n",
+			  handle, point, r);
+		return r;
+	}
+
+	r = amdgpu_sync_fence(&p->job->sync, fence);
+	dma_fence_put(fence);
+
+	return r;
 }
 
-/* Report how many bytes have really been moved for the last command
- * submission. This can result in a debt that can stop buffer migrations
- * temporarily.
- */
-void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes,
-				  u64 num_vis_bytes)
+static int amdgpu_cs_p2_syncobj_in(struct amdgpu_cs_parser *p,
+				   struct amdgpu_cs_chunk *chunk)
 {
-	spin_lock(&adev->mm_stats.lock);
-	adev->mm_stats.accum_us -= bytes_to_us(adev, num_bytes);
-	adev->mm_stats.accum_us_vis -= bytes_to_us(adev, num_vis_bytes);
-	spin_unlock(&adev->mm_stats.lock);
+	struct drm_amdgpu_cs_chunk_sem *deps = chunk->kdata;
+	unsigned num_deps;
+	int i, r;
+
+	num_deps = chunk->length_dw * 4 /
+		sizeof(struct drm_amdgpu_cs_chunk_sem);
+	for (i = 0; i < num_deps; ++i) {
+		r = amdgpu_syncobj_lookup_and_add(p, deps[i].handle, 0, 0);
+		if (r)
+			return r;
+	}
+
+	return 0;
 }
 
-static int amdgpu_cs_bo_validate(void *param, struct amdgpu_bo *bo)
+static int amdgpu_cs_p2_syncobj_timeline_wait(struct amdgpu_cs_parser *p,
+					      struct amdgpu_cs_chunk *chunk)
 {
-	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
-	struct amdgpu_cs_parser *p = param;
-	struct ttm_operation_ctx ctx = {
-		.interruptible = true,
-		.no_wait_gpu = false,
-		.resv = bo->tbo.base.resv
-	};
-	uint32_t domain;
-	int r;
+	struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps = chunk->kdata;
+	unsigned num_deps;
+	int i, r;
 
-	if (bo->tbo.pin_count)
-		return 0;
+	num_deps = chunk->length_dw * 4 /
+		sizeof(struct drm_amdgpu_cs_chunk_syncobj);
+	for (i = 0; i < num_deps; ++i) {
+		r = amdgpu_syncobj_lookup_and_add(p, syncobj_deps[i].handle,
+						  syncobj_deps[i].point,
+						  syncobj_deps[i].flags);
+		if (r)
+			return r;
+	}
 
-	/* Don't move this buffer if we have depleted our allowance
-	 * to move it. Don't move anything if the threshold is zero.
-	 */
-	if (p->bytes_moved < p->bytes_moved_threshold &&
-	    (!bo->tbo.base.dma_buf ||
-	    list_empty(&bo->tbo.base.dma_buf->attachments))) {
-		if (!amdgpu_gmc_vram_full_visible(&adev->gmc) &&
-		    (bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)) {
-			/* And don't move a CPU_ACCESS_REQUIRED BO to limited
-			 * visible VRAM if we've depleted our allowance to do
-			 * that.
-			 */
-			if (p->bytes_moved_vis < p->bytes_moved_vis_threshold)
-				domain = bo->preferred_domains;
-			else
-				domain = bo->allowed_domains;
-		} else {
-			domain = bo->preferred_domains;
-		}
-	} else {
-		domain = bo->allowed_domains;
-	}
-
-retry:
-	amdgpu_bo_placement_from_domain(bo, domain);
-	r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
-
-	p->bytes_moved += ctx.bytes_moved;
-	if (!amdgpu_gmc_vram_full_visible(&adev->gmc) &&
-	    amdgpu_bo_in_cpu_visible_vram(bo))
-		p->bytes_moved_vis += ctx.bytes_moved;
-
-	if (unlikely(r == -ENOMEM) && domain != bo->allowed_domains) {
-		domain = bo->allowed_domains;
-		goto retry;
-	}
-
-	return r;
+	return 0;
 }
 
-static int amdgpu_cs_list_validate(struct amdgpu_cs_parser *p,
-			    struct list_head *validated)
+static int amdgpu_cs_p2_syncobj_out(struct amdgpu_cs_parser *p,
+				    struct amdgpu_cs_chunk *chunk)
 {
-	struct ttm_operation_ctx ctx = { true, false };
-	struct amdgpu_bo_list_entry *lobj;
-	int r;
+	struct drm_amdgpu_cs_chunk_sem *deps = chunk->kdata;
+	unsigned num_deps;
+	int i;
 
-	list_for_each_entry(lobj, validated, tv.head) {
-		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(lobj->tv.bo);
-		struct mm_struct *usermm;
+	num_deps = chunk->length_dw * 4 /
+		sizeof(struct drm_amdgpu_cs_chunk_sem);
 
-		usermm = amdgpu_ttm_tt_get_usermm(bo->tbo.ttm);
-		if (usermm && usermm != current->mm)
-			return -EPERM;
+	if (p->post_deps)
+		return -EINVAL;
 
-		if (amdgpu_ttm_tt_is_userptr(bo->tbo.ttm) &&
-		    lobj->user_invalidated && lobj->user_pages) {
-			amdgpu_bo_placement_from_domain(bo,
-							AMDGPU_GEM_DOMAIN_CPU);
-			r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
-			if (r)
-				return r;
+	p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps),
+				     GFP_KERNEL);
+	p->num_post_deps = 0;
 
-			amdgpu_ttm_tt_set_user_pages(bo->tbo.ttm,
-						     lobj->user_pages);
-		}
+	if (!p->post_deps)
+		return -ENOMEM;
 
-		r = amdgpu_cs_bo_validate(p, bo);
-		if (r)
-			return r;
 
-		kvfree(lobj->user_pages);
-		lobj->user_pages = NULL;
+	for (i = 0; i < num_deps; ++i) {
+		p->post_deps[i].syncobj =
+			drm_syncobj_find(p->filp, deps[i].handle);
+		if (!p->post_deps[i].syncobj)
+			return -EINVAL;
+		p->post_deps[i].chain = NULL;
+		p->post_deps[i].point = 0;
+		p->num_post_deps++;
 	}
+
 	return 0;
 }
 
-static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
-				union drm_amdgpu_cs *cs)
+static int amdgpu_cs_p2_syncobj_timeline_signal(struct amdgpu_cs_parser *p,
+						struct amdgpu_cs_chunk *chunk)
 {
-	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct amdgpu_vm *vm = &fpriv->vm;
-	struct amdgpu_bo_list_entry *e;
-	struct list_head duplicates;
-	struct amdgpu_bo *gds;
-	struct amdgpu_bo *gws;
-	struct amdgpu_bo *oa;
-	int r;
-
-	INIT_LIST_HEAD(&p->validated);
-
-	/* p->bo_list could already be assigned if AMDGPU_CHUNK_ID_BO_HANDLES is present */
-	if (cs->in.bo_list_handle) {
-		if (p->bo_list)
-			return -EINVAL;
-
-		r = amdgpu_bo_list_get(fpriv, cs->in.bo_list_handle,
-				       &p->bo_list);
-		if (r)
-			return r;
-	} else if (!p->bo_list) {
-		/* Create a empty bo_list when no handle is provided */
-		r = amdgpu_bo_list_create(p->adev, p->filp, NULL, 0,
-					  &p->bo_list);
-		if (r)
-			return r;
-	}
-
-	mutex_lock(&p->bo_list->bo_list_mutex);
-
-	/* One for TTM and one for the CS job */
-	amdgpu_bo_list_for_each_entry(e, p->bo_list)
-		e->tv.num_shared = 2;
+	struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps = chunk->kdata;
+	unsigned num_deps;
+	int i;
 
-	amdgpu_bo_list_get_list(p->bo_list, &p->validated);
+	num_deps = chunk->length_dw * 4 /
+		sizeof(struct drm_amdgpu_cs_chunk_syncobj);
 
-	INIT_LIST_HEAD(&duplicates);
-	amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);
+	if (p->post_deps)
+		return -EINVAL;
 
-	if (p->uf_entry.tv.bo && !ttm_to_amdgpu_bo(p->uf_entry.tv.bo)->parent)
-		list_add(&p->uf_entry.tv.head, &p->validated);
+	p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps),
+				     GFP_KERNEL);
+	p->num_post_deps = 0;
 
-	/* Get userptr backing pages. If pages are updated after registered
-	 * in amdgpu_gem_userptr_ioctl(), amdgpu_cs_list_validate() will do
-	 * amdgpu_ttm_backend_bind() to flush and invalidate new pages
-	 */
-	amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
-		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
-		bool userpage_invalidated = false;
-		int i;
+	if (!p->post_deps)
+		return -ENOMEM;
 
-		e->user_pages = kvmalloc_array(bo->tbo.ttm->num_pages,
-					sizeof(struct page *),
-					GFP_KERNEL | __GFP_ZERO);
-		if (!e->user_pages) {
-			DRM_ERROR("kvmalloc_array failure\n");
-			r = -ENOMEM;
-			goto out_free_user_pages;
-		}
+	for (i = 0; i < num_deps; ++i) {
+		struct amdgpu_cs_post_dep *dep = &p->post_deps[i];
 
-		r = amdgpu_ttm_tt_get_user_pages(bo, e->user_pages);
-		if (r) {
-			kvfree(e->user_pages);
-			e->user_pages = NULL;
-			goto out_free_user_pages;
+		dep->chain = NULL;
+		if (syncobj_deps[i].point) {
+			dep->chain = dma_fence_chain_alloc();
+			if (!dep->chain)
+				return -ENOMEM;
 		}
 
-		for (i = 0; i < bo->tbo.ttm->num_pages; i++) {
-			if (bo->tbo.ttm->pages[i] != e->user_pages[i]) {
-				userpage_invalidated = true;
-				break;
-			}
+		dep->syncobj = drm_syncobj_find(p->filp,
+						syncobj_deps[i].handle);
+		if (!dep->syncobj) {
+			dma_fence_chain_free(dep->chain);
+			return -EINVAL;
 		}
-		e->user_invalidated = userpage_invalidated;
-	}
-
-	r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true,
-				   &duplicates);
-	if (unlikely(r != 0)) {
-		if (r != -ERESTARTSYS)
-			DRM_ERROR("ttm_eu_reserve_buffers failed.\n");
-		goto out_free_user_pages;
-	}
-
-	amdgpu_bo_list_for_each_entry(e, p->bo_list) {
-		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
-
-		e->bo_va = amdgpu_vm_bo_find(vm, bo);
-	}
-
-	/* Move fence waiting after getting reservation lock of
-	 * PD root. Then there is no need on a ctx mutex lock.
-	 */
-	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity);
-	if (unlikely(r != 0)) {
-		if (r != -ERESTARTSYS)
-			DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
-		goto error_validate;
-	}
-
-	amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
-					  &p->bytes_moved_vis_threshold);
-	p->bytes_moved = 0;
-	p->bytes_moved_vis = 0;
-
-	r = amdgpu_vm_validate_pt_bos(p->adev, &fpriv->vm,
-				      amdgpu_cs_bo_validate, p);
-	if (r) {
-		DRM_ERROR("amdgpu_vm_validate_pt_bos() failed.\n");
-		goto error_validate;
+		dep->point = syncobj_deps[i].point;
+		p->num_post_deps++;
 	}
 
-	r = amdgpu_cs_list_validate(p, &duplicates);
-	if (r)
-		goto error_validate;
-
-	r = amdgpu_cs_list_validate(p, &p->validated);
-	if (r)
-		goto error_validate;
-
-	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
-				     p->bytes_moved_vis);
+	return 0;
+}
 
-	gds = p->bo_list->gds_obj;
-	gws = p->bo_list->gws_obj;
-	oa = p->bo_list->oa_obj;
+static int amdgpu_cs_pass2(struct amdgpu_cs_parser *p)
+{
+	unsigned int num_ibs = 0, ce_preempt = 0, de_preempt = 0;
+	int i, r;
 
-	if (gds) {
-		p->job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
-		p->job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
-	}
-	if (gws) {
-		p->job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT;
-		p->job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT;
-	}
-	if (oa) {
-		p->job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT;
-		p->job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT;
-	}
+	for (i = 0; i < p->nchunks; ++i) {
+		struct amdgpu_cs_chunk *chunk;
 
-	if (!r && p->uf_entry.tv.bo) {
-		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
+		chunk = &p->chunks[i];
 
-		r = amdgpu_ttm_alloc_gart(&uf->tbo);
-		p->job->uf_addr += amdgpu_bo_gpu_offset(uf);
+		switch (chunk->chunk_id) {
+		case AMDGPU_CHUNK_ID_IB:
+			r = amdgpu_cs_p2_ib(p, chunk, &num_ibs,
+					    &ce_preempt, &de_preempt);
+			if (r)
+				return r;
+			break;
+		case AMDGPU_CHUNK_ID_DEPENDENCIES:
+		case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
+			r = amdgpu_cs_p2_dependencies(p, chunk);
+			if (r)
+				return r;
+			break;
+		case AMDGPU_CHUNK_ID_SYNCOBJ_IN:
+			r = amdgpu_cs_p2_syncobj_in(p, chunk);
+			if (r)
+				return r;
+			break;
+		case AMDGPU_CHUNK_ID_SYNCOBJ_OUT:
+			r = amdgpu_cs_p2_syncobj_out(p, chunk);
+			if (r)
+				return r;
+			break;
+		case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_WAIT:
+			r = amdgpu_cs_p2_syncobj_timeline_wait(p, chunk);
+			if (r)
+				return r;
+			break;
+		case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_SIGNAL:
+			r = amdgpu_cs_p2_syncobj_timeline_signal(p, chunk);
+			if (r)
+				return r;
+			break;
+		}
 	}
 
-error_validate:
-	if (r)
-		ttm_eu_backoff_reservation(&p->ticket, &p->validated);
+	return 0;
+}
 
-out_free_user_pages:
-	if (r) {
-		amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
-			struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
+/* Convert microseconds to bytes. */
+static u64 us_to_bytes(struct amdgpu_device *adev, s64 us)
+{
+	if (us <= 0 || !adev->mm_stats.log2_max_MBps)
+		return 0;
 
-			if (!e->user_pages)
-				continue;
-			amdgpu_ttm_tt_get_user_pages_done(bo->tbo.ttm);
-			kvfree(e->user_pages);
-			e->user_pages = NULL;
-		}
-		mutex_unlock(&p->bo_list->bo_list_mutex);
-	}
-	return r;
+	/* Since accum_us is incremented by a million per second, just
+	 * multiply it by the number of MB/s to get the number of bytes.
+	 */
+	return us << adev->mm_stats.log2_max_MBps;
 }
 
-static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
+static s64 bytes_to_us(struct amdgpu_device *adev, u64 bytes)
 {
-	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct amdgpu_bo_list_entry *e;
-	int r;
-
-	list_for_each_entry(e, &p->validated, tv.head) {
-		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
-		struct dma_resv *resv = bo->tbo.base.resv;
-		enum amdgpu_sync_mode sync_mode;
+	if (!adev->mm_stats.log2_max_MBps)
+		return 0;
 
-		sync_mode = amdgpu_bo_explicit_sync(bo) ?
-			AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
-		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode,
-				     &fpriv->vm);
-		if (r)
-			return r;
-	}
-	return 0;
+	return bytes >> adev->mm_stats.log2_max_MBps;
 }
 
-/**
- * amdgpu_cs_parser_fini() - clean parser states
- * @parser:	parser structure holding parsing context.
- * @error:	error number
- * @backoff:	indicator to backoff the reservation
+/* Returns how many bytes TTM can move right now. If no bytes can be moved,
+ * it returns 0. If it returns non-zero, it's OK to move at least one buffer,
+ * which means it can go over the threshold once. If that happens, the driver
+ * will be in debt and no other buffer migrations can be done until that debt
+ * is repaid.
  *
- * If error is set then unvalidate buffer, otherwise just free memory
- * used by parsing context.
- **/
-static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
-				  bool backoff)
+ * This approach allows moving a buffer of any size (it's important to allow
+ * that).
+ *
+ * The currency is simply time in microseconds and it increases as the clock
+ * ticks. The accumulated microseconds (us) are converted to bytes and
+ * returned.
+ */
+static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
+					      u64 *max_bytes,
+					      u64 *max_vis_bytes)
 {
-	unsigned i;
-
-	if (error && backoff) {
-		ttm_eu_backoff_reservation(&parser->ticket,
-					   &parser->validated);
-		mutex_unlock(&parser->bo_list->bo_list_mutex);
-	}
+	s64 time_us, increment_us;
+	u64 free_vram, total_vram, used_vram;
+	/* Allow a maximum of 200 accumulated ms. This is basically per-IB
+	 * throttling.
+	 *
+	 * It means that in order to get full max MBps, at least 5 IBs per
+	 * second must be submitted and not more than 200ms apart from each
+	 * other.
+	 */
+	const s64 us_upper_bound = 200000;
 
-	for (i = 0; i < parser->num_post_deps; i++) {
-		drm_syncobj_put(parser->post_deps[i].syncobj);
-		kfree(parser->post_deps[i].chain);
+	if (!adev->mm_stats.log2_max_MBps) {
+		*max_bytes = 0;
+		*max_vis_bytes = 0;
+		return;
 	}
-	kfree(parser->post_deps);
-
-	dma_fence_put(parser->fence);
 
-	if (parser->ctx) {
-		amdgpu_ctx_put(parser->ctx);
-	}
-	if (parser->bo_list)
-		amdgpu_bo_list_put(parser->bo_list);
+	total_vram = adev->gmc.real_vram_size -
+		atomic64_read(&adev->vram_pin_size);
+	used_vram = ttm_resource_manager_usage(&adev->mman.vram_mgr.manager);
+	free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram;
 
-	for (i = 0; i < parser->nchunks; i++)
-		kvfree(parser->chunks[i].kdata);
-	kvfree(parser->chunks);
-	if (parser->job)
-		amdgpu_job_free(parser->job);
-	if (parser->uf_entry.tv.bo) {
-		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo);
+	spin_lock(&adev->mm_stats.lock);
 
-		amdgpu_bo_unref(&uf);
-	}
-}
+	/* Increase the amount of accumulated us. */
+	time_us = ktime_to_us(ktime_get());
+	increment_us = time_us - adev->mm_stats.last_update_us;
+	adev->mm_stats.last_update_us = time_us;
+	adev->mm_stats.accum_us = min(adev->mm_stats.accum_us + increment_us,
+				      us_upper_bound);
 
-static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
-{
-	struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
-	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct amdgpu_device *adev = p->adev;
-	struct amdgpu_vm *vm = &fpriv->vm;
-	struct amdgpu_bo_list_entry *e;
-	struct amdgpu_bo_va *bo_va;
-	struct amdgpu_bo *bo;
-	int r;
+	/* This prevents the short period of low performance when the VRAM
+	 * usage is low and the driver is in debt or doesn't have enough
+	 * accumulated us to fill VRAM quickly.
+	 *
+	 * The situation can occur in these cases:
+	 * - a lot of VRAM is freed by userspace
+	 * - the presence of a big buffer causes a lot of evictions
+	 *   (solution: split buffers into smaller ones)
+	 *
+	 * If 128 MB or 1/8th of VRAM is free, start filling it now by setting
+	 * accum_us to a positive number.
+	 */
+	if (free_vram >= 128 * 1024 * 1024 || free_vram >= total_vram / 8) {
+		s64 min_us;
 
-	/* Only for UVD/VCE VM emulation */
-	if (ring->funcs->parse_cs || ring->funcs->patch_cs_in_place) {
-		unsigned i, j;
-
-		for (i = 0, j = 0; i < p->nchunks && j < p->job->num_ibs; i++) {
-			struct drm_amdgpu_cs_chunk_ib *chunk_ib;
-			struct amdgpu_bo_va_mapping *m;
-			struct amdgpu_bo *aobj = NULL;
-			struct amdgpu_cs_chunk *chunk;
-			uint64_t offset, va_start;
-			struct amdgpu_ib *ib;
-			uint8_t *kptr;
-
-			chunk = &p->chunks[i];
-			ib = &p->job->ibs[j];
-			chunk_ib = chunk->kdata;
-
-			if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
-				continue;
+		/* Be more aggresive on dGPUs. Try to fill a portion of free
+		 * VRAM now.
+		 */
+		if (!(adev->flags & AMD_IS_APU))
+			min_us = bytes_to_us(adev, free_vram / 4);
+		else
+			min_us = 0; /* Reset accum_us on APUs. */
 
-			va_start = chunk_ib->va_start & AMDGPU_GMC_HOLE_MASK;
-			r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
-			if (r) {
-				DRM_ERROR("IB va_start is invalid\n");
-				return r;
-			}
+		adev->mm_stats.accum_us = max(min_us, adev->mm_stats.accum_us);
+	}
 
-			if ((va_start + chunk_ib->ib_bytes) >
-			    (m->last + 1) * AMDGPU_GPU_PAGE_SIZE) {
-				DRM_ERROR("IB va_start+ib_bytes is invalid\n");
-				return -EINVAL;
-			}
+	/* This is set to 0 if the driver is in debt to disallow (optional)
+	 * buffer moves.
+	 */
+	*max_bytes = us_to_bytes(adev, adev->mm_stats.accum_us);
 
-			/* the IB should be reserved at this point */
-			r = amdgpu_bo_kmap(aobj, (void **)&kptr);
-			if (r) {
-				return r;
-			}
+	/* Do the same for visible VRAM if half of it is free */
+	if (!amdgpu_gmc_vram_full_visible(&adev->gmc)) {
+		u64 total_vis_vram = adev->gmc.visible_vram_size;
+		u64 used_vis_vram =
+		  amdgpu_vram_mgr_vis_usage(&adev->mman.vram_mgr);
 
-			offset = m->start * AMDGPU_GPU_PAGE_SIZE;
-			kptr += va_start - offset;
-
-			if (ring->funcs->parse_cs) {
-				memcpy(ib->ptr, kptr, chunk_ib->ib_bytes);
-				amdgpu_bo_kunmap(aobj);
-
-				r = amdgpu_ring_parse_cs(ring, p, p->job, ib);
-				if (r)
-					return r;
-			} else {
-				ib->ptr = (uint32_t *)kptr;
-				r = amdgpu_ring_patch_cs_in_place(ring, p, p->job, ib);
-				amdgpu_bo_kunmap(aobj);
-				if (r)
-					return r;
-			}
+		if (used_vis_vram < total_vis_vram) {
+			u64 free_vis_vram = total_vis_vram - used_vis_vram;
+			adev->mm_stats.accum_us_vis =
+				min(adev->mm_stats.accum_us_vis +
+				    increment_us, us_upper_bound);
 
-			j++;
+			if (free_vis_vram >= total_vis_vram / 2)
+				adev->mm_stats.accum_us_vis =
+					max(bytes_to_us(adev, free_vis_vram / 2),
+					    adev->mm_stats.accum_us_vis);
 		}
-	}
-
-	if (!p->job->vm)
-		return amdgpu_cs_sync_rings(p);
 
+		*max_vis_bytes = us_to_bytes(adev, adev->mm_stats.accum_us_vis);
+	} else {
+		*max_vis_bytes = 0;
+	}
 
-	r = amdgpu_vm_clear_freed(adev, vm, NULL);
-	if (r)
-		return r;
-
-	r = amdgpu_vm_bo_update(adev, fpriv->prt_va, false);
-	if (r)
-		return r;
+	spin_unlock(&adev->mm_stats.lock);
+}
 
-	r = amdgpu_sync_fence(&p->job->sync, fpriv->prt_va->last_pt_update);
-	if (r)
-		return r;
+/* Report how many bytes have really been moved for the last command
+ * submission. This can result in a debt that can stop buffer migrations
+ * temporarily.
+ */
+void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes,
+				  u64 num_vis_bytes)
+{
+	spin_lock(&adev->mm_stats.lock);
+	adev->mm_stats.accum_us -= bytes_to_us(adev, num_bytes);
+	adev->mm_stats.accum_us_vis -= bytes_to_us(adev, num_vis_bytes);
+	spin_unlock(&adev->mm_stats.lock);
+}
 
-	if (amdgpu_mcbp || amdgpu_sriov_vf(adev)) {
-		bo_va = fpriv->csa_va;
-		BUG_ON(!bo_va);
-		r = amdgpu_vm_bo_update(adev, bo_va, false);
-		if (r)
-			return r;
+static int amdgpu_cs_bo_validate(void *param, struct amdgpu_bo *bo)
+{
+	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+	struct amdgpu_cs_parser *p = param;
+	struct ttm_operation_ctx ctx = {
+		.interruptible = true,
+		.no_wait_gpu = false,
+		.resv = bo->tbo.base.resv
+	};
+	uint32_t domain;
+	int r;
 
-		r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
-		if (r)
-			return r;
-	}
+	if (bo->tbo.pin_count)
+		return 0;
 
-	amdgpu_bo_list_for_each_entry(e, p->bo_list) {
-		/* ignore duplicates */
-		bo = ttm_to_amdgpu_bo(e->tv.bo);
-		if (!bo)
-			continue;
+	/* Don't move this buffer if we have depleted our allowance
+	 * to move it. Don't move anything if the threshold is zero.
+	 */
+	if (p->bytes_moved < p->bytes_moved_threshold &&
+	    (!bo->tbo.base.dma_buf ||
+	    list_empty(&bo->tbo.base.dma_buf->attachments))) {
+		if (!amdgpu_gmc_vram_full_visible(&adev->gmc) &&
+		    (bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)) {
+			/* And don't move a CPU_ACCESS_REQUIRED BO to limited
+			 * visible VRAM if we've depleted our allowance to do
+			 * that.
+			 */
+			if (p->bytes_moved_vis < p->bytes_moved_vis_threshold)
+				domain = bo->preferred_domains;
+			else
+				domain = bo->allowed_domains;
+		} else {
+			domain = bo->preferred_domains;
+		}
+	} else {
+		domain = bo->allowed_domains;
+	}
 
-		bo_va = e->bo_va;
-		if (bo_va == NULL)
-			continue;
+retry:
+	amdgpu_bo_placement_from_domain(bo, domain);
+	r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
 
-		r = amdgpu_vm_bo_update(adev, bo_va, false);
-		if (r) {
-			mutex_unlock(&p->bo_list->bo_list_mutex);
-			return r;
-		}
+	p->bytes_moved += ctx.bytes_moved;
+	if (!amdgpu_gmc_vram_full_visible(&adev->gmc) &&
+	    amdgpu_bo_in_cpu_visible_vram(bo))
+		p->bytes_moved_vis += ctx.bytes_moved;
 
-		r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
-		if (r) {
-			mutex_unlock(&p->bo_list->bo_list_mutex);
-			return r;
-		}
+	if (unlikely(r == -ENOMEM) && domain != bo->allowed_domains) {
+		domain = bo->allowed_domains;
+		goto retry;
 	}
 
-	r = amdgpu_vm_handle_moved(adev, vm);
-	if (r)
-		return r;
-
-	r = amdgpu_vm_update_pdes(adev, vm, false);
-	if (r)
-		return r;
+	return r;
+}
 
-	r = amdgpu_sync_fence(&p->job->sync, vm->last_update);
-	if (r)
-		return r;
+static int amdgpu_cs_list_validate(struct amdgpu_cs_parser *p,
+			    struct list_head *validated)
+{
+	struct ttm_operation_ctx ctx = { true, false };
+	struct amdgpu_bo_list_entry *lobj;
+	int r;
 
-	p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
+	list_for_each_entry(lobj, validated, tv.head) {
+		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(lobj->tv.bo);
+		struct mm_struct *usermm;
 
-	if (amdgpu_vm_debug) {
-		/* Invalidate all BOs to test for userspace bugs */
-		amdgpu_bo_list_for_each_entry(e, p->bo_list) {
-			struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
+		usermm = amdgpu_ttm_tt_get_usermm(bo->tbo.ttm);
+		if (usermm && usermm != current->mm)
+			return -EPERM;
 
-			/* ignore duplicates */
-			if (!bo)
-				continue;
+		if (amdgpu_ttm_tt_is_userptr(bo->tbo.ttm) &&
+		    lobj->user_invalidated && lobj->user_pages) {
+			amdgpu_bo_placement_from_domain(bo,
+							AMDGPU_GEM_DOMAIN_CPU);
+			r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
+			if (r)
+				return r;
 
-			amdgpu_vm_bo_invalidate(adev, bo, false);
+			amdgpu_ttm_tt_set_user_pages(bo->tbo.ttm,
+						     lobj->user_pages);
 		}
-	}
 
-	return amdgpu_cs_sync_rings(p);
+		r = amdgpu_cs_bo_validate(p, bo);
+		if (r)
+			return r;
+
+		kvfree(lobj->user_pages);
+		lobj->user_pages = NULL;
+	}
+	return 0;
 }
 
-static int amdgpu_cs_ib_fill(struct amdgpu_device *adev,
-			     struct amdgpu_cs_parser *parser)
+static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
+				union drm_amdgpu_cs *cs)
 {
-	struct amdgpu_fpriv *fpriv = parser->filp->driver_priv;
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
 	struct amdgpu_vm *vm = &fpriv->vm;
-	int r, ce_preempt = 0, de_preempt = 0;
-	struct amdgpu_ring *ring;
-	int i, j;
+	struct amdgpu_bo_list_entry *e;
+	struct list_head duplicates;
+	struct amdgpu_bo *gds;
+	struct amdgpu_bo *gws;
+	struct amdgpu_bo *oa;
+	int r;
 
-	for (i = 0, j = 0; i < parser->nchunks && j < parser->job->num_ibs; i++) {
-		struct amdgpu_cs_chunk *chunk;
-		struct amdgpu_ib *ib;
-		struct drm_amdgpu_cs_chunk_ib *chunk_ib;
-		struct drm_sched_entity *entity;
+	INIT_LIST_HEAD(&p->validated);
 
-		chunk = &parser->chunks[i];
-		ib = &parser->job->ibs[j];
-		chunk_ib = (struct drm_amdgpu_cs_chunk_ib *)chunk->kdata;
+	/* p->bo_list could already be assigned if AMDGPU_CHUNK_ID_BO_HANDLES is present */
+	if (cs->in.bo_list_handle) {
+		if (p->bo_list)
+			return -EINVAL;
 
-		if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
-			continue;
+		r = amdgpu_bo_list_get(fpriv, cs->in.bo_list_handle,
+				       &p->bo_list);
+		if (r)
+			return r;
+	} else if (!p->bo_list) {
+		/* Create a empty bo_list when no handle is provided */
+		r = amdgpu_bo_list_create(p->adev, p->filp, NULL, 0,
+					  &p->bo_list);
+		if (r)
+			return r;
+	}
 
-		if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
-		    (amdgpu_mcbp || amdgpu_sriov_vf(adev))) {
-			if (chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT) {
-				if (chunk_ib->flags & AMDGPU_IB_FLAG_CE)
-					ce_preempt++;
-				else
-					de_preempt++;
-			}
+	mutex_lock(&p->bo_list->bo_list_mutex);
 
-			/* each GFX command submit allows 0 or 1 IB preemptible for CE & DE */
-			if (ce_preempt > 1 || de_preempt > 1)
-				return -EINVAL;
-		}
+	/* One for TTM and one for the CS job */
+	amdgpu_bo_list_for_each_entry(e, p->bo_list)
+		e->tv.num_shared = 2;
 
-		r = amdgpu_ctx_get_entity(parser->ctx, chunk_ib->ip_type,
-					  chunk_ib->ip_instance, chunk_ib->ring,
-					  &entity);
-		if (r)
-			return r;
+	amdgpu_bo_list_get_list(p->bo_list, &p->validated);
 
-		if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
-			parser->job->preamble_status |=
-				AMDGPU_PREAMBLE_IB_PRESENT;
+	INIT_LIST_HEAD(&duplicates);
+	amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);
 
-		if (parser->entity && parser->entity != entity)
-			return -EINVAL;
+	if (p->uf_entry.tv.bo && !ttm_to_amdgpu_bo(p->uf_entry.tv.bo)->parent)
+		list_add(&p->uf_entry.tv.head, &p->validated);
 
-		/* Return if there is no run queue associated with this entity.
-		 * Possibly because of disabled HW IP*/
-		if (entity->rq == NULL)
-			return -EINVAL;
+	/* Get userptr backing pages. If pages are updated after registered
+	 * in amdgpu_gem_userptr_ioctl(), amdgpu_cs_list_validate() will do
+	 * amdgpu_ttm_backend_bind() to flush and invalidate new pages
+	 */
+	amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
+		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
+		bool userpage_invalidated = false;
+		int i;
 
-		parser->entity = entity;
+		e->user_pages = kvmalloc_array(bo->tbo.ttm->num_pages,
+					sizeof(struct page *),
+					GFP_KERNEL | __GFP_ZERO);
+		if (!e->user_pages) {
+			DRM_ERROR("kvmalloc_array failure\n");
+			r = -ENOMEM;
+			goto out_free_user_pages;
+		}
 
-		ring = to_amdgpu_ring(entity->rq->sched);
-		r =  amdgpu_ib_get(adev, vm, ring->funcs->parse_cs ?
-				   chunk_ib->ib_bytes : 0,
-				   AMDGPU_IB_POOL_DELAYED, ib);
+		r = amdgpu_ttm_tt_get_user_pages(bo, e->user_pages);
 		if (r) {
-			DRM_ERROR("Failed to get ib !\n");
-			return r;
+			kvfree(e->user_pages);
+			e->user_pages = NULL;
+			goto out_free_user_pages;
 		}
 
-		ib->gpu_addr = chunk_ib->va_start;
-		ib->length_dw = chunk_ib->ib_bytes / 4;
-		ib->flags = chunk_ib->flags;
+		for (i = 0; i < bo->tbo.ttm->num_pages; i++) {
+			if (bo->tbo.ttm->pages[i] != e->user_pages[i]) {
+				userpage_invalidated = true;
+				break;
+			}
+		}
+		e->user_invalidated = userpage_invalidated;
+	}
 
-		j++;
+	r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true,
+				   &duplicates);
+	if (unlikely(r != 0)) {
+		if (r != -ERESTARTSYS)
+			DRM_ERROR("ttm_eu_reserve_buffers failed.\n");
+		goto out_free_user_pages;
 	}
 
-	/* MM engine doesn't support user fences */
-	ring = to_amdgpu_ring(parser->entity->rq->sched);
-	if (parser->job->uf_addr && ring->funcs->no_user_fence)
-		return -EINVAL;
+	amdgpu_bo_list_for_each_entry(e, p->bo_list) {
+		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
 
-	return 0;
-}
+		e->bo_va = amdgpu_vm_bo_find(vm, bo);
+	}
 
-static int amdgpu_cs_process_fence_dep(struct amdgpu_cs_parser *p,
-				       struct amdgpu_cs_chunk *chunk)
-{
-	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	unsigned num_deps;
-	int i, r;
-	struct drm_amdgpu_cs_chunk_dep *deps;
+	/* Move fence waiting after getting reservation lock of
+	 * PD root. Then there is no need on a ctx mutex lock.
+	 */
+	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity);
+	if (unlikely(r != 0)) {
+		if (r != -ERESTARTSYS)
+			DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
+		goto error_validate;
+	}
 
-	deps = (struct drm_amdgpu_cs_chunk_dep *)chunk->kdata;
-	num_deps = chunk->length_dw * 4 /
-		sizeof(struct drm_amdgpu_cs_chunk_dep);
+	amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
+					  &p->bytes_moved_vis_threshold);
+	p->bytes_moved = 0;
+	p->bytes_moved_vis = 0;
 
-	for (i = 0; i < num_deps; ++i) {
-		struct amdgpu_ctx *ctx;
-		struct drm_sched_entity *entity;
-		struct dma_fence *fence;
+	r = amdgpu_vm_validate_pt_bos(p->adev, &fpriv->vm,
+				      amdgpu_cs_bo_validate, p);
+	if (r) {
+		DRM_ERROR("amdgpu_vm_validate_pt_bos() failed.\n");
+		goto error_validate;
+	}
 
-		ctx = amdgpu_ctx_get(fpriv, deps[i].ctx_id);
-		if (ctx == NULL)
-			return -EINVAL;
+	r = amdgpu_cs_list_validate(p, &duplicates);
+	if (r)
+		goto error_validate;
 
-		r = amdgpu_ctx_get_entity(ctx, deps[i].ip_type,
-					  deps[i].ip_instance,
-					  deps[i].ring, &entity);
-		if (r) {
-			amdgpu_ctx_put(ctx);
-			return r;
-		}
+	r = amdgpu_cs_list_validate(p, &p->validated);
+	if (r)
+		goto error_validate;
 
-		fence = amdgpu_ctx_get_fence(ctx, entity, deps[i].handle);
-		amdgpu_ctx_put(ctx);
+	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
+				     p->bytes_moved_vis);
 
-		if (IS_ERR(fence))
-			return PTR_ERR(fence);
-		else if (!fence)
-			continue;
+	gds = p->bo_list->gds_obj;
+	gws = p->bo_list->gws_obj;
+	oa = p->bo_list->oa_obj;
 
-		if (chunk->chunk_id == AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES) {
-			struct drm_sched_fence *s_fence;
-			struct dma_fence *old = fence;
+	if (gds) {
+		p->job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
+		p->job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
+	}
+	if (gws) {
+		p->job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT;
+		p->job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT;
+	}
+	if (oa) {
+		p->job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT;
+		p->job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT;
+	}
 
-			s_fence = to_drm_sched_fence(fence);
-			fence = dma_fence_get(&s_fence->scheduled);
-			dma_fence_put(old);
-		}
+	if (p->uf_entry.tv.bo) {
+		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
 
-		r = amdgpu_sync_fence(&p->job->sync, fence);
-		dma_fence_put(fence);
+		r = amdgpu_ttm_alloc_gart(&uf->tbo);
 		if (r)
-			return r;
-	}
-	return 0;
-}
-
-static int amdgpu_syncobj_lookup_and_add_to_sync(struct amdgpu_cs_parser *p,
-						 uint32_t handle, u64 point,
-						 u64 flags)
-{
-	struct dma_fence *fence;
-	int r;
+			goto error_validate;
 
-	r = drm_syncobj_find_fence(p->filp, handle, point, flags, &fence);
-	if (r) {
-		DRM_ERROR("syncobj %u failed to find fence @ %llu (%d)!\n",
-			  handle, point, r);
-		return r;
+		p->job->uf_addr += amdgpu_bo_gpu_offset(uf);
 	}
+	return 0;
 
-	r = amdgpu_sync_fence(&p->job->sync, fence);
-	dma_fence_put(fence);
+error_validate:
+	ttm_eu_backoff_reservation(&p->ticket, &p->validated);
+
+out_free_user_pages:
+	amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
+		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
 
+		if (!e->user_pages)
+			continue;
+		amdgpu_ttm_tt_get_user_pages_done(bo->tbo.ttm);
+		kvfree(e->user_pages);
+		e->user_pages = NULL;
+	}
+	mutex_unlock(&p->bo_list->bo_list_mutex);
 	return r;
 }
 
-static int amdgpu_cs_process_syncobj_in_dep(struct amdgpu_cs_parser *p,
-					    struct amdgpu_cs_chunk *chunk)
+static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser)
 {
-	struct drm_amdgpu_cs_chunk_sem *deps;
-	unsigned num_deps;
-	int i, r;
+	int i;
 
-	deps = (struct drm_amdgpu_cs_chunk_sem *)chunk->kdata;
-	num_deps = chunk->length_dw * 4 /
-		sizeof(struct drm_amdgpu_cs_chunk_sem);
-	for (i = 0; i < num_deps; ++i) {
-		r = amdgpu_syncobj_lookup_and_add_to_sync(p, deps[i].handle,
-							  0, 0);
-		if (r)
-			return r;
-	}
+	if (!trace_amdgpu_cs_enabled())
+		return;
 
-	return 0;
+	for (i = 0; i < parser->job->num_ibs; i++)
+		trace_amdgpu_cs(parser, i);
 }
 
-
-static int amdgpu_cs_process_syncobj_timeline_in_dep(struct amdgpu_cs_parser *p,
-						     struct amdgpu_cs_chunk *chunk)
+static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
 {
-	struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps;
-	unsigned num_deps;
-	int i, r;
+	struct amdgpu_job *job = p->job;
+	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
+	unsigned int i;
+	int r;
 
-	syncobj_deps = (struct drm_amdgpu_cs_chunk_syncobj *)chunk->kdata;
-	num_deps = chunk->length_dw * 4 /
-		sizeof(struct drm_amdgpu_cs_chunk_syncobj);
-	for (i = 0; i < num_deps; ++i) {
-		r = amdgpu_syncobj_lookup_and_add_to_sync(p,
-							  syncobj_deps[i].handle,
-							  syncobj_deps[i].point,
-							  syncobj_deps[i].flags);
-		if (r)
+	/* Only for UVD/VCE VM emulation */
+	if (!ring->funcs->parse_cs && !ring->funcs->patch_cs_in_place)
+		return 0;
+
+	for (i = 0; i < job->num_ibs; ++i) {
+		struct amdgpu_ib *ib = &job->ibs[i];
+		struct amdgpu_bo_va_mapping *m;
+		struct amdgpu_bo *aobj;
+		uint64_t va_start;
+		uint8_t *kptr;
+
+		va_start = ib->gpu_addr;
+		r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
+		if (r) {
+			DRM_ERROR("IB va_start is invalid\n");
 			return r;
+		}
+
+		if ((va_start + ib->length_dw * 4) >
+		    (m->last + 1) * AMDGPU_GPU_PAGE_SIZE) {
+			DRM_ERROR("IB va_start+ib_bytes is invalid\n");
+			return -EINVAL;
+		}
+
+		/* the IB should be reserved at this point */
+		r = amdgpu_bo_kmap(aobj, (void **)&kptr);
+		if (r) {
+			return r;
+		}
+
+		kptr += va_start - (m->start * AMDGPU_GPU_PAGE_SIZE);
+
+		if (ring->funcs->parse_cs) {
+			memcpy(ib->ptr, kptr, ib->length_dw * 4);
+			amdgpu_bo_kunmap(aobj);
+
+			r = amdgpu_ring_parse_cs(ring, p, p->job, ib);
+			if (r)
+				return r;
+		} else {
+			ib->ptr = (uint32_t *)kptr;
+			r = amdgpu_ring_patch_cs_in_place(ring, p, p->job, ib);
+			amdgpu_bo_kunmap(aobj);
+			if (r)
+				return r;
+		}
 	}
 
 	return 0;
 }
 
-static int amdgpu_cs_process_syncobj_out_dep(struct amdgpu_cs_parser *p,
-					     struct amdgpu_cs_chunk *chunk)
+static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 {
-	struct drm_amdgpu_cs_chunk_sem *deps;
-	unsigned num_deps;
-	int i;
-
-	deps = (struct drm_amdgpu_cs_chunk_sem *)chunk->kdata;
-	num_deps = chunk->length_dw * 4 /
-		sizeof(struct drm_amdgpu_cs_chunk_sem);
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_device *adev = p->adev;
+	struct amdgpu_vm *vm = &fpriv->vm;
+	struct amdgpu_bo_list_entry *e;
+	struct amdgpu_bo_va *bo_va;
+	struct amdgpu_bo *bo;
+	int r;
 
-	if (p->post_deps)
-		return -EINVAL;
+	r = amdgpu_vm_clear_freed(adev, vm, NULL);
+	if (r)
+		return r;
 
-	p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps),
-				     GFP_KERNEL);
-	p->num_post_deps = 0;
+	r = amdgpu_vm_bo_update(adev, fpriv->prt_va, false);
+	if (r)
+		return r;
 
-	if (!p->post_deps)
-		return -ENOMEM;
+	r = amdgpu_sync_fence(&p->job->sync, fpriv->prt_va->last_pt_update);
+	if (r)
+		return r;
 
+	if (amdgpu_mcbp || amdgpu_sriov_vf(adev)) {
+		bo_va = fpriv->csa_va;
+		r = amdgpu_vm_bo_update(adev, bo_va, false);
+		if (r)
+			return r;
 
-	for (i = 0; i < num_deps; ++i) {
-		p->post_deps[i].syncobj =
-			drm_syncobj_find(p->filp, deps[i].handle);
-		if (!p->post_deps[i].syncobj)
-			return -EINVAL;
-		p->post_deps[i].chain = NULL;
-		p->post_deps[i].point = 0;
-		p->num_post_deps++;
+		r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
+		if (r)
+			return r;
 	}
 
-	return 0;
-}
+	amdgpu_bo_list_for_each_entry(e, p->bo_list) {
+		/* ignore duplicates */
+		bo = ttm_to_amdgpu_bo(e->tv.bo);
+		if (!bo)
+			continue;
 
+		bo_va = e->bo_va;
+		if (bo_va == NULL)
+			continue;
 
-static int amdgpu_cs_process_syncobj_timeline_out_dep(struct amdgpu_cs_parser *p,
-						      struct amdgpu_cs_chunk *chunk)
-{
-	struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps;
-	unsigned num_deps;
-	int i;
+		r = amdgpu_vm_bo_update(adev, bo_va, false);
+		if (r)
+			return r;
 
-	syncobj_deps = (struct drm_amdgpu_cs_chunk_syncobj *)chunk->kdata;
-	num_deps = chunk->length_dw * 4 /
-		sizeof(struct drm_amdgpu_cs_chunk_syncobj);
+		r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
+		if (r)
+			return r;
+	}
 
-	if (p->post_deps)
-		return -EINVAL;
+	r = amdgpu_vm_handle_moved(adev, vm);
+	if (r)
+		return r;
 
-	p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps),
-				     GFP_KERNEL);
-	p->num_post_deps = 0;
+	r = amdgpu_vm_update_pdes(adev, vm, false);
+	if (r)
+		return r;
 
-	if (!p->post_deps)
-		return -ENOMEM;
+	r = amdgpu_sync_fence(&p->job->sync, vm->last_update);
+	if (r)
+		return r;
 
-	for (i = 0; i < num_deps; ++i) {
-		struct amdgpu_cs_post_dep *dep = &p->post_deps[i];
+	p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
 
-		dep->chain = NULL;
-		if (syncobj_deps[i].point) {
-			dep->chain = dma_fence_chain_alloc();
-			if (!dep->chain)
-				return -ENOMEM;
-		}
+	if (amdgpu_vm_debug) {
+		/* Invalidate all BOs to test for userspace bugs */
+		amdgpu_bo_list_for_each_entry(e, p->bo_list) {
+			struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
 
-		dep->syncobj = drm_syncobj_find(p->filp,
-						syncobj_deps[i].handle);
-		if (!dep->syncobj) {
-			dma_fence_chain_free(dep->chain);
-			return -EINVAL;
+			/* ignore duplicates */
+			if (!bo)
+				continue;
+
+			amdgpu_vm_bo_invalidate(adev, bo, false);
 		}
-		dep->point = syncobj_deps[i].point;
-		p->num_post_deps++;
 	}
 
 	return 0;
 }
 
-static int amdgpu_cs_dependencies(struct amdgpu_device *adev,
-				  struct amdgpu_cs_parser *p)
+static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 {
-	int i, r;
-
-	for (i = 0; i < p->nchunks; ++i) {
-		struct amdgpu_cs_chunk *chunk;
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_bo_list_entry *e;
+	int r;
 
-		chunk = &p->chunks[i];
+	list_for_each_entry(e, &p->validated, tv.head) {
+		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
+		struct dma_resv *resv = bo->tbo.base.resv;
+		enum amdgpu_sync_mode sync_mode;
 
-		switch (chunk->chunk_id) {
-		case AMDGPU_CHUNK_ID_DEPENDENCIES:
-		case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
-			r = amdgpu_cs_process_fence_dep(p, chunk);
-			if (r)
-				return r;
-			break;
-		case AMDGPU_CHUNK_ID_SYNCOBJ_IN:
-			r = amdgpu_cs_process_syncobj_in_dep(p, chunk);
-			if (r)
-				return r;
-			break;
-		case AMDGPU_CHUNK_ID_SYNCOBJ_OUT:
-			r = amdgpu_cs_process_syncobj_out_dep(p, chunk);
-			if (r)
-				return r;
-			break;
-		case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_WAIT:
-			r = amdgpu_cs_process_syncobj_timeline_in_dep(p, chunk);
-			if (r)
-				return r;
-			break;
-		case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_SIGNAL:
-			r = amdgpu_cs_process_syncobj_timeline_out_dep(p, chunk);
-			if (r)
-				return r;
-			break;
-		}
+		sync_mode = amdgpu_bo_explicit_sync(bo) ?
+			AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
+		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode,
+				     &fpriv->vm);
+		if (r)
+			return r;
 	}
-
 	return 0;
 }
 
@@ -1226,10 +1195,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 	job = p->job;
 	p->job = NULL;
 
-	r = drm_sched_job_init(&job->base, entity, &fpriv->vm);
-	if (r)
-		goto error_unlock;
-
 	drm_sched_job_arm(&job->base);
 
 	/* No memory allocation is allowed while holding the notifier lock.
@@ -1286,29 +1251,45 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 error_abort:
 	drm_sched_job_cleanup(&job->base);
 	mutex_unlock(&p->adev->notifier_lock);
-
-error_unlock:
 	amdgpu_job_free(job);
 	return r;
 }
 
-static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser)
+/* Cleanup the parser structure */
+static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser)
 {
-	int i;
+	unsigned i;
 
-	if (!trace_amdgpu_cs_enabled())
-		return;
+	for (i = 0; i < parser->num_post_deps; i++) {
+		drm_syncobj_put(parser->post_deps[i].syncobj);
+		kfree(parser->post_deps[i].chain);
+	}
+	kfree(parser->post_deps);
 
-	for (i = 0; i < parser->job->num_ibs; i++)
-		trace_amdgpu_cs(parser, i);
+	dma_fence_put(parser->fence);
+
+	if (parser->ctx) {
+		amdgpu_ctx_put(parser->ctx);
+	}
+	if (parser->bo_list)
+		amdgpu_bo_list_put(parser->bo_list);
+
+	for (i = 0; i < parser->nchunks; i++)
+		kvfree(parser->chunks[i].kdata);
+	kvfree(parser->chunks);
+	if (parser->job)
+		amdgpu_job_free(parser->job);
+	if (parser->uf_entry.tv.bo) {
+		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo);
+
+		amdgpu_bo_unref(&uf);
+	}
 }
 
 int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 {
 	struct amdgpu_device *adev = drm_to_adev(dev);
-	union drm_amdgpu_cs *cs = data;
-	struct amdgpu_cs_parser parser = {};
-	bool reserved_buffers = false;
+	struct amdgpu_cs_parser parser;
 	int r;
 
 	if (amdgpu_ras_intr_triggered())
@@ -1317,25 +1298,20 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 	if (!adev->accel_working)
 		return -EBUSY;
 
-	parser.adev = adev;
-	parser.filp = filp;
-
-	r = amdgpu_cs_parser_init(&parser, data);
+	r = amdgpu_cs_parser_init(&parser, adev, filp, data);
 	if (r) {
 		if (printk_ratelimit())
 			DRM_ERROR("Failed to initialize parser %d!\n", r);
-		goto out;
+		return r;
 	}
 
-	r = amdgpu_cs_ib_fill(adev, &parser);
+	r = amdgpu_cs_pass1(&parser, data);
 	if (r)
-		goto out;
+		goto error_fini;
 
-	r = amdgpu_cs_dependencies(adev, &parser);
-	if (r) {
-		DRM_ERROR("Failed in the dependencies handling %d!\n", r);
-		goto out;
-	}
+	r = amdgpu_cs_pass2(&parser);
+	if (r)
+		goto error_fini;
 
 	r = amdgpu_cs_parser_bos(&parser, data);
 	if (r) {
@@ -1343,21 +1319,36 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 			DRM_ERROR("Not enough memory for command submission!\n");
 		else if (r != -ERESTARTSYS && r != -EAGAIN)
 			DRM_ERROR("Failed to process the buffer list %d!\n", r);
-		goto out;
+		goto error_fini;
 	}
 
-	reserved_buffers = true;
+	r = amdgpu_cs_patch_ibs(&parser);
+	if (r)
+		goto error_backoff;
+
+	r = amdgpu_cs_vm_handling(&parser);
+	if (r)
+		goto error_backoff;
+
+	r = amdgpu_cs_sync_rings(&parser);
+	if (r)
+		goto error_backoff;
 
 	trace_amdgpu_cs_ibs(&parser);
 
-	r = amdgpu_cs_vm_handling(&parser);
+	r = amdgpu_cs_submit(&parser, data);
 	if (r)
-		goto out;
+		goto error_backoff;
 
-	r = amdgpu_cs_submit(&parser, cs);
-out:
-	amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
+	amdgpu_cs_parser_fini(&parser);
+	return 0;
+
+error_backoff:
+	ttm_eu_backoff_reservation(&parser.ticket, &parser.validated);
+	mutex_unlock(&parser.bo_list->bo_list_mutex);
 
+error_fini:
+	amdgpu_cs_parser_fini(&parser);
 	return r;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
index 30ecc4917f81..652b5593499f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
@@ -51,8 +51,8 @@ struct amdgpu_cs_parser {
 	struct amdgpu_cs_chunk	*chunks;
 
 	/* scheduler job object */
-	struct amdgpu_job	*job;
 	struct drm_sched_entity	*entity;
+	struct amdgpu_job	*job;
 
 	/* buffer objects */
 	struct ww_acquire_ctx		ticket;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 06/10] drm/amdgpu: remove SRIOV and MCBP dependencies from the CS
  2022-07-14 10:38 Gang submit v2 Christian König
                   ` (4 preceding siblings ...)
  2022-07-14 10:38 ` [PATCH 05/10] drm/amdgpu: cleanup and reorder amdgpu_cs.c Christian König
@ 2022-07-14 10:38 ` Christian König
  2022-07-14 10:38 ` [PATCH 07/10] drm/amdgpu: move setting the job resources Christian König
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 22+ messages in thread
From: Christian König @ 2022-07-14 10:38 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu
  Cc: Christian König

We should not have any different CS constraints based
on the execution environment.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index b9de631a66a3..dfb7b4f46bc3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -323,8 +323,7 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
 		return -EINVAL;
 
 	if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
-	    chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT &&
-	    (amdgpu_mcbp || amdgpu_sriov_vf(p->adev))) {
+	    chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT) {
 		if (chunk_ib->flags & AMDGPU_IB_FLAG_CE)
 			(*ce_preempt)++;
 		else
@@ -1084,7 +1083,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 	if (r)
 		return r;
 
-	if (amdgpu_mcbp || amdgpu_sriov_vf(adev)) {
+	if (fpriv->csa_va) {
 		bo_va = fpriv->csa_va;
 		r = amdgpu_vm_bo_update(adev, bo_va, false);
 		if (r)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 07/10] drm/amdgpu: move setting the job resources
  2022-07-14 10:38 Gang submit v2 Christian König
                   ` (5 preceding siblings ...)
  2022-07-14 10:38 ` [PATCH 06/10] drm/amdgpu: remove SRIOV and MCBP dependencies from the CS Christian König
@ 2022-07-14 10:38 ` Christian König
  2022-07-14 15:40   ` Luben Tuikov
  2022-07-14 18:32   ` Andrey Grodzovsky
  2022-07-14 10:39 ` [PATCH 08/10] drm/amdgpu: revert "fix limiting AV1 to the first instance on VCN3" Christian König
                   ` (2 subsequent siblings)
  9 siblings, 2 replies; 22+ messages in thread
From: Christian König @ 2022-07-14 10:38 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu
  Cc: Christian König

Move setting the job resources into amdgpu_job.c

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 21 ++-------------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 17 +++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  2 ++
 3 files changed, 21 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index dfb7b4f46bc3..88f491dc7ca2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -828,9 +828,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	struct amdgpu_vm *vm = &fpriv->vm;
 	struct amdgpu_bo_list_entry *e;
 	struct list_head duplicates;
-	struct amdgpu_bo *gds;
-	struct amdgpu_bo *gws;
-	struct amdgpu_bo *oa;
 	int r;
 
 	INIT_LIST_HEAD(&p->validated);
@@ -947,22 +944,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
 				     p->bytes_moved_vis);
 
-	gds = p->bo_list->gds_obj;
-	gws = p->bo_list->gws_obj;
-	oa = p->bo_list->oa_obj;
-
-	if (gds) {
-		p->job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
-		p->job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
-	}
-	if (gws) {
-		p->job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT;
-		p->job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT;
-	}
-	if (oa) {
-		p->job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT;
-		p->job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT;
-	}
+	amdgpu_job_set_resources(p->job, p->bo_list->gds_obj,
+				 p->bo_list->gws_obj, p->bo_list->oa_obj);
 
 	if (p->uf_entry.tv.bo) {
 		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 36c1be77bf8f..3255b2fca611 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -129,6 +129,23 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
 	return r;
 }
 
+void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
+			      struct amdgpu_bo *gws, struct amdgpu_bo *oa)
+{
+	if (gds) {
+		job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
+		job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
+	}
+	if (gws) {
+		job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT;
+		job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT;
+	}
+	if (oa) {
+		job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT;
+		job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT;
+	}
+}
+
 void amdgpu_job_free_resources(struct amdgpu_job *job)
 {
 	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index d599c0540b46..0bab8fe0d419 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -77,6 +77,8 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
 		     struct amdgpu_job **job, struct amdgpu_vm *vm);
 int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
 		enum amdgpu_ib_pool_type pool, struct amdgpu_job **job);
+void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
+			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
 void amdgpu_job_free_resources(struct amdgpu_job *job);
 void amdgpu_job_free(struct amdgpu_job *job);
 int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 08/10] drm/amdgpu: revert "fix limiting AV1 to the first instance on VCN3"
  2022-07-14 10:38 Gang submit v2 Christian König
                   ` (6 preceding siblings ...)
  2022-07-14 10:38 ` [PATCH 07/10] drm/amdgpu: move setting the job resources Christian König
@ 2022-07-14 10:39 ` Christian König
  2022-07-14 10:39 ` [PATCH 09/10] drm/amdgpu: add gang submit backend Christian König
  2022-07-14 10:39 ` [PATCH 10/10] drm/amdgpu: add gang submit frontend v2 Christian König
  9 siblings, 0 replies; 22+ messages in thread
From: Christian König @ 2022-07-14 10:39 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu
  Cc: Christian König

This reverts commit 250195ff744f260c169f5427422b6f39c58cb883.

The job should now be initialized when we reach the parser functions.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
index 39405f0db824..3cabceee5f57 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
@@ -1761,21 +1761,23 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_sw_ring_vm_funcs = {
 	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
 };
 
-static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p)
+static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
+				struct amdgpu_job *job)
 {
 	struct drm_gpu_scheduler **scheds;
 
 	/* The create msg must be in the first IB submitted */
-	if (atomic_read(&p->entity->fence_seq))
+	if (atomic_read(&job->base.entity->fence_seq))
 		return -EINVAL;
 
 	scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_DEC]
 		[AMDGPU_RING_PRIO_DEFAULT].sched;
-	drm_sched_entity_modify_sched(p->entity, scheds, 1);
+	drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
 	return 0;
 }
 
-static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr)
+static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+			    uint64_t addr)
 {
 	struct ttm_operation_ctx ctx = { false, false };
 	struct amdgpu_bo_va_mapping *map;
@@ -1846,7 +1848,7 @@ static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr)
 		if (create[0] == 0x7 || create[0] == 0x10 || create[0] == 0x11)
 			continue;
 
-		r = vcn_v3_0_limit_sched(p);
+		r = vcn_v3_0_limit_sched(p, job);
 		if (r)
 			goto out;
 	}
@@ -1860,7 +1862,7 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
 					   struct amdgpu_job *job,
 					   struct amdgpu_ib *ib)
 {
-	struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
+	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
 	uint32_t msg_lo = 0, msg_hi = 0;
 	unsigned i;
 	int r;
@@ -1879,7 +1881,8 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
 			msg_hi = val;
 		} else if (reg == PACKET0(p->adev->vcn.internal.cmd, 0) &&
 			   val == 0) {
-			r = vcn_v3_0_dec_msg(p, ((u64)msg_hi) << 32 | msg_lo);
+			r = vcn_v3_0_dec_msg(p, job,
+					     ((u64)msg_hi) << 32 | msg_lo);
 			if (r)
 				return r;
 		}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 09/10] drm/amdgpu: add gang submit backend
  2022-07-14 10:38 Gang submit v2 Christian König
                   ` (7 preceding siblings ...)
  2022-07-14 10:39 ` [PATCH 08/10] drm/amdgpu: revert "fix limiting AV1 to the first instance on VCN3" Christian König
@ 2022-07-14 10:39 ` Christian König
  2022-07-14 14:36   ` Andrey Grodzovsky
  2022-07-14 19:32   ` Luben Tuikov
  2022-07-14 10:39 ` [PATCH 10/10] drm/amdgpu: add gang submit frontend v2 Christian König
  9 siblings, 2 replies; 22+ messages in thread
From: Christian König @ 2022-07-14 10:39 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu
  Cc: Christian König

Allows submitting jobs as a gang which needs to run on multiple
engines at the same time.

The basic idea is that we have a global gang submit fence representing when the
gang leader is finally pushed to run on the hardware last.

Jobs submitted as a gang are never re-submitted in case of a GPU reset since this
won't work and will just deadlock the hardware immediately again.
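
As a rough illustration of how the two helpers are meant to be used together
(a hedged sketch anticipating the CS frontend in the following patch; job
allocation, dependency handling and error paths are omitted, and the
compute/GFX pairing is only an example):

	/* Hypothetical driver-internal sketch, not part of this patch. Both
	 * jobs are assumed to have gone through drm_sched_job_init() already.
	 */
	drm_sched_job_arm(&compute_job->base);
	drm_sched_job_arm(&gfx_job->base);

	/* Every member, the leader included, points at the leader's scheduled
	 * fence; only the leader itself skips taking a reference to avoid a
	 * circular dependency.
	 */
	amdgpu_job_set_gang_leader(compute_job, gfx_job);
	amdgpu_job_set_gang_leader(gfx_job, gfx_job);

	/* When the scheduler asks for dependencies, each member consults
	 * amdgpu_device_switch_gang() and is held back until the previously
	 * installed gang fence has signaled and this gang owns the hardware.
	 */
	drm_sched_entity_push_job(&compute_job->base);
	drm_sched_entity_push_job(&gfx_job->base);

In the actual frontend the gang leader additionally syncs to every member's
scheduled fence, so the leader cannot start before every other member has at
least been scheduled to run.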

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h        |  3 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 34 ++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c    | 28 ++++++++++++++++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h    |  3 ++
 4 files changed, 66 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 2871a3e3801f..19308db52984 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -881,6 +881,7 @@ struct amdgpu_device {
 	u64				fence_context;
 	unsigned			num_rings;
 	struct amdgpu_ring		*rings[AMDGPU_MAX_RINGS];
+	struct dma_fence __rcu		*gang_submit;
 	bool				ib_pool_ready;
 	struct amdgpu_sa_manager	ib_pools[AMDGPU_IB_POOL_MAX];
 	struct amdgpu_sched		gpu_sched[AMDGPU_HW_IP_NUM][AMDGPU_RING_PRIO_MAX];
@@ -1288,6 +1289,8 @@ u32 amdgpu_device_pcie_port_rreg(struct amdgpu_device *adev,
 				u32 reg);
 void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev,
 				u32 reg, u32 v);
+struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
+					    struct dma_fence *gang);
 
 /* atpx handler */
 #if defined(CONFIG_VGA_SWITCHEROO)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index e1c9587f659b..f80beb7208c0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3499,6 +3499,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	adev->gmc.gart_size = 512 * 1024 * 1024;
 	adev->accel_working = false;
 	adev->num_rings = 0;
+	RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
 	adev->mman.buffer_funcs = NULL;
 	adev->mman.buffer_funcs_ring = NULL;
 	adev->vm_manager.vm_pte_funcs = NULL;
@@ -3979,6 +3980,7 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
 	release_firmware(adev->firmware.gpu_info_fw);
 	adev->firmware.gpu_info_fw = NULL;
 	adev->accel_working = false;
+	dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
 
 	amdgpu_reset_fini(adev);
 
@@ -5905,3 +5907,35 @@ void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev,
 	(void)RREG32(data);
 	spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
 }
+
+/**
+ * amdgpu_device_switch_gang - switch to a new gang
+ * @adev: amdgpu_device pointer
+ * @gang: the gang to switch to
+ *
+ * Try to switch to a new gang or return a reference to the current gang if that
+ * isn't possible.
+ * Returns: Either NULL if we switched correctly or a reference to the existing
+ * gang.
+ */
+struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
+					    struct dma_fence *gang)
+{
+	struct dma_fence *old = NULL;
+
+	do {
+		dma_fence_put(old);
+		old = dma_fence_get_rcu_safe(&adev->gang_submit);
+
+		if (old == gang)
+			break;
+
+		if (!dma_fence_is_signaled(old))
+			return old;
+
+	} while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
+			 old, gang) != old);
+
+	dma_fence_put(old);
+	return NULL;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 3255b2fca611..f3a1fdbd41a3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -180,11 +180,29 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
 		kfree(job);
 }
 
+void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
+				struct amdgpu_job *leader)
+{
+	struct dma_fence *fence = &leader->base.s_fence->scheduled;
+
+	WARN_ON(job->gang_submit);
+
+	/*
+	 * Don't add a reference when we are the gang leader to avoid a circular
+	 * dependency.
+	 */
+	if (job != leader)
+		dma_fence_get(fence);
+	job->gang_submit = fence;
+}
+
 void amdgpu_job_free(struct amdgpu_job *job)
 {
 	amdgpu_job_free_resources(job);
 	amdgpu_sync_free(&job->sync);
 	amdgpu_sync_free(&job->sched_sync);
+	if (job->gang_submit != &job->base.s_fence->scheduled)
+		dma_fence_put(job->gang_submit);
 
 	/* only put the hw fence if has embedded fence */
 	if (job->hw_fence.ops != NULL)
@@ -258,12 +276,16 @@ static struct dma_fence *amdgpu_job_dependency(struct drm_sched_job *sched_job,
 		fence = amdgpu_sync_get_fence(&job->sync);
 	}
 
+	if (!fence && !job->gang_submit)
+		fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);
+
 	return fence;
 }
 
 static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
 {
 	struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched);
+	struct amdgpu_device *adev = ring->adev;
 	struct dma_fence *fence = NULL, *finished;
 	struct amdgpu_job *job;
 	int r = 0;
@@ -275,8 +297,10 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
 
 	trace_amdgpu_sched_run_job(job);
 
-	if (job->vram_lost_counter != atomic_read(&ring->adev->vram_lost_counter))
-		dma_fence_set_error(finished, -ECANCELED);/* skip IB as well if VRAM lost */
+	/* Skip job if VRAM is lost and never resubmit gangs */
+	if (job->vram_lost_counter != atomic_read(&adev->vram_lost_counter) ||
+	    (job->job_run_counter && job->gang_submit))
+		dma_fence_set_error(finished, -ECANCELED);
 
 	if (finished->error < 0) {
 		DRM_INFO("Skip scheduling IBs!\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index 0bab8fe0d419..615328130615 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -51,6 +51,7 @@ struct amdgpu_job {
 	struct amdgpu_sync	sched_sync;
 	struct dma_fence	hw_fence;
 	struct dma_fence	*external_hw_fence;
+	struct dma_fence	*gang_submit;
 	uint32_t		preamble_status;
 	uint32_t                preemption_status;
 	bool                    vm_needs_flush;
@@ -80,6 +81,8 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
 void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
 			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
 void amdgpu_job_free_resources(struct amdgpu_job *job);
+void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
+				struct amdgpu_job *leader);
 void amdgpu_job_free(struct amdgpu_job *job);
 int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
 		      void *owner, struct dma_fence **f);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 10/10] drm/amdgpu: add gang submit frontend v2
  2022-07-14 10:38 Gang submit v2 Christian König
                   ` (8 preceding siblings ...)
  2022-07-14 10:39 ` [PATCH 09/10] drm/amdgpu: add gang submit backend Christian König
@ 2022-07-14 10:39 ` Christian König
  2022-07-14 19:10   ` Andrey Grodzovsky
  9 siblings, 1 reply; 22+ messages in thread
From: Christian König @ 2022-07-14 10:39 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak, timur.kristof, andrey.grodzovsky,
	Yogesh.Mohanmarimuthu
  Cc: Christian König

Allows submitting jobs as a gang which needs to run on multiple engines at the
same time.

All members of the gang get the same implicit, explicit and VM dependencies, so
no gang member will start running until everything else is ready.

The last job is considered the gang leader (usually a submission to the GFX
ring) and is used for signaling output dependencies.

Each job is remembered individually as a user of a buffer object, so there is no
joining of work at the end.
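
A short worked example may help (hypothetical numbers, not taken from the
patch) of what the parser ends up with:

	/*
	 * Suppose userspace sends three IB chunks, two targeting a compute
	 * ring followed by one targeting GFX. After amdgpu_cs_pass1():
	 *
	 *   num_ibs[]      = { 2, 1 }
	 *   p->entities[]  = { compute entity, gfx entity }
	 *   p->gang_size   = 2
	 *   p->gang_leader = p->jobs[1]   (the GFX job, since it comes last)
	 *
	 * amdgpu_cs_pass2() then fills jobs[0] with the two compute IBs and
	 * jobs[1] with the single GFX IB; only the gang leader carries the
	 * user fence and the sequence number returned to userspace.
	 */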

v2: rebase and fix review comments from Andrey and Yogesh

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c    | 256 ++++++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h    |  10 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h |  12 +-
 3 files changed, 183 insertions(+), 95 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 88f491dc7ca2..e1c41db20efb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -69,6 +69,7 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
 			   unsigned int *num_ibs)
 {
 	struct drm_sched_entity *entity;
+	unsigned int i;
 	int r;
 
 	r = amdgpu_ctx_get_entity(p->ctx, chunk_ib->ip_type,
@@ -77,17 +78,28 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
 	if (r)
 		return r;
 
-	/* Abort if there is no run queue associated with this entity.
-	 * Possibly because of disabled HW IP*/
+	/*
+	 * Abort if there is no run queue associated with this entity.
+	 * Possibly because of disabled HW IP.
+	 */
 	if (entity->rq == NULL)
 		return -EINVAL;
 
-	/* Currently we don't support submitting to multiple entities */
-	if (p->entity && p->entity != entity)
+	/* Check if we can add this IB to some existing job */
+	for (i = 0; i < p->gang_size; ++i) {
+		if (p->entities[i] == entity)
+			goto found;
+	}
+
+	/* If not, increase the gang size if possible */
+	if (i == AMDGPU_CS_GANG_SIZE)
 		return -EINVAL;
 
-	p->entity = entity;
-	++(*num_ibs);
+	p->entities[i] = entity;
+	p->gang_size = i + 1;
+
+found:
+	++(num_ibs[i]);
 	return 0;
 }
 
@@ -161,11 +173,12 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 			   union drm_amdgpu_cs *cs)
 {
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	unsigned int num_ibs[AMDGPU_CS_GANG_SIZE] = { };
 	struct amdgpu_vm *vm = &fpriv->vm;
 	uint64_t *chunk_array_user;
 	uint64_t *chunk_array;
-	unsigned size, num_ibs = 0;
 	uint32_t uf_offset = 0;
+	unsigned int size;
 	int ret;
 	int i;
 
@@ -231,7 +244,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 			if (size < sizeof(struct drm_amdgpu_cs_chunk_ib))
 				goto free_partial_kdata;
 
-			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, &num_ibs);
+			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, num_ibs);
 			if (ret)
 				goto free_partial_kdata;
 			break;
@@ -268,21 +281,28 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 		}
 	}
 
-	ret = amdgpu_job_alloc(p->adev, num_ibs, &p->job, vm);
-	if (ret)
-		goto free_all_kdata;
+	if (!p->gang_size)
+		return -EINVAL;
 
-	ret = drm_sched_job_init(&p->job->base, p->entity, &fpriv->vm);
-	if (ret)
-		goto free_all_kdata;
+	for (i = 0; i < p->gang_size; ++i) {
+		ret = amdgpu_job_alloc(p->adev, num_ibs[i], &p->jobs[i], vm);
+		if (ret)
+			goto free_all_kdata;
+
+		ret = drm_sched_job_init(&p->jobs[i]->base, p->entities[i],
+					 &fpriv->vm);
+		if (ret)
+			goto free_all_kdata;
+	}
+	p->gang_leader = p->jobs[p->gang_size - 1];
 
-	if (p->ctx->vram_lost_counter != p->job->vram_lost_counter) {
+	if (p->ctx->vram_lost_counter != p->gang_leader->vram_lost_counter) {
 		ret = -ECANCELED;
 		goto free_all_kdata;
 	}
 
 	if (p->uf_entry.tv.bo)
-		p->job->uf_addr = uf_offset;
+		p->gang_leader->uf_addr = uf_offset;
 	kvfree(chunk_array);
 
 	/* Use this opportunity to fill in task info for the vm */
@@ -304,22 +324,18 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 	return ret;
 }
 
-static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
-			   struct amdgpu_cs_chunk *chunk,
-			   unsigned int *num_ibs,
-			   unsigned int *ce_preempt,
-			   unsigned int *de_preempt)
+static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+			   struct amdgpu_ib *ib, struct amdgpu_cs_chunk *chunk,
+			   unsigned int *ce_preempt, unsigned int *de_preempt)
 {
-	struct amdgpu_ring *ring = to_amdgpu_ring(p->job->base.sched);
+	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
 	struct drm_amdgpu_cs_chunk_ib *chunk_ib = chunk->kdata;
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct amdgpu_ib *ib = &p->job->ibs[*num_ibs];
 	struct amdgpu_vm *vm = &fpriv->vm;
 	int r;
 
-
 	/* MM engine doesn't support user fences */
-	if (p->job->uf_addr && ring->funcs->no_user_fence)
+	if (job->uf_addr && ring->funcs->no_user_fence)
 		return -EINVAL;
 
 	if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
@@ -336,7 +352,7 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
 	}
 
 	if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
-		p->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
+		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
 
 	r =  amdgpu_ib_get(p->adev, vm, ring->funcs->parse_cs ?
 			   chunk_ib->ib_bytes : 0,
@@ -349,8 +365,6 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
 	ib->gpu_addr = chunk_ib->va_start;
 	ib->length_dw = chunk_ib->ib_bytes / 4;
 	ib->flags = chunk_ib->flags;
-
-	(*num_ibs)++;
 	return 0;
 }
 
@@ -399,7 +413,7 @@ static int amdgpu_cs_p2_dependencies(struct amdgpu_cs_parser *p,
 			dma_fence_put(old);
 		}
 
-		r = amdgpu_sync_fence(&p->job->sync, fence);
+		r = amdgpu_sync_fence(&p->gang_leader->sync, fence);
 		dma_fence_put(fence);
 		if (r)
 			return r;
@@ -421,7 +435,7 @@ static int amdgpu_syncobj_lookup_and_add(struct amdgpu_cs_parser *p,
 		return r;
 	}
 
-	r = amdgpu_sync_fence(&p->job->sync, fence);
+	r = amdgpu_sync_fence(&p->gang_leader->sync, fence);
 	dma_fence_put(fence);
 
 	return r;
@@ -544,20 +558,30 @@ static int amdgpu_cs_p2_syncobj_timeline_signal(struct amdgpu_cs_parser *p,
 
 static int amdgpu_cs_pass2(struct amdgpu_cs_parser *p)
 {
-	unsigned int num_ibs = 0, ce_preempt = 0, de_preempt = 0;
+	unsigned int ce_preempt = 0, de_preempt = 0;
+	unsigned int job_idx = 0, ib_idx = 0;
 	int i, r;
 
 	for (i = 0; i < p->nchunks; ++i) {
 		struct amdgpu_cs_chunk *chunk;
+		struct amdgpu_job *job;
 
 		chunk = &p->chunks[i];
 
 		switch (chunk->chunk_id) {
 		case AMDGPU_CHUNK_ID_IB:
-			r = amdgpu_cs_p2_ib(p, chunk, &num_ibs,
+			job = p->jobs[job_idx];
+			r = amdgpu_cs_p2_ib(p, job, &job->ibs[ib_idx], chunk,
 					    &ce_preempt, &de_preempt);
 			if (r)
 				return r;
+
+			if (++ib_idx == job->num_ibs) {
+				++job_idx;
+				ib_idx = 0;
+				ce_preempt = 0;
+				de_preempt = 0;
+			}
 			break;
 		case AMDGPU_CHUNK_ID_DEPENDENCIES:
 		case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
@@ -828,6 +852,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	struct amdgpu_vm *vm = &fpriv->vm;
 	struct amdgpu_bo_list_entry *e;
 	struct list_head duplicates;
+	unsigned int i;
 	int r;
 
 	INIT_LIST_HEAD(&p->validated);
@@ -911,16 +936,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 		e->bo_va = amdgpu_vm_bo_find(vm, bo);
 	}
 
-	/* Move fence waiting after getting reservation lock of
-	 * PD root. Then there is no need on a ctx mutex lock.
-	 */
-	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity);
-	if (unlikely(r != 0)) {
-		if (r != -ERESTARTSYS)
-			DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
-		goto error_validate;
-	}
-
 	amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
 					  &p->bytes_moved_vis_threshold);
 	p->bytes_moved = 0;
@@ -944,8 +959,10 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
 				     p->bytes_moved_vis);
 
-	amdgpu_job_set_resources(p->job, p->bo_list->gds_obj,
-				 p->bo_list->gws_obj, p->bo_list->oa_obj);
+	for (i = 0; i < p->gang_size; ++i)
+		amdgpu_job_set_resources(p->jobs[i], p->bo_list->gds_obj,
+					 p->bo_list->gws_obj,
+					 p->bo_list->oa_obj);
 
 	if (p->uf_entry.tv.bo) {
 		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
@@ -954,7 +971,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 		if (r)
 			goto error_validate;
 
-		p->job->uf_addr += amdgpu_bo_gpu_offset(uf);
+		p->gang_leader->uf_addr += amdgpu_bo_gpu_offset(uf);
 	}
 	return 0;
 
@@ -975,20 +992,24 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	return r;
 }
 
-static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser)
+static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *p)
 {
-	int i;
+	int i, j;
 
 	if (!trace_amdgpu_cs_enabled())
 		return;
 
-	for (i = 0; i < parser->job->num_ibs; i++)
-		trace_amdgpu_cs(parser, i);
+	for (i = 0; i < p->gang_size; ++i) {
+		struct amdgpu_job *job = p->jobs[i];
+
+		for (j = 0; j < job->num_ibs; ++j)
+			trace_amdgpu_cs(p, job, &job->ibs[j]);
+	}
 }
 
-static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
+static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p,
+			       struct amdgpu_job *job)
 {
-	struct amdgpu_job *job = p->job;
 	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
 	unsigned int i;
 	int r;
@@ -1029,12 +1050,12 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
 			memcpy(ib->ptr, kptr, ib->length_dw * 4);
 			amdgpu_bo_kunmap(aobj);
 
-			r = amdgpu_ring_parse_cs(ring, p, p->job, ib);
+			r = amdgpu_ring_parse_cs(ring, p, job, ib);
 			if (r)
 				return r;
 		} else {
 			ib->ptr = (uint32_t *)kptr;
-			r = amdgpu_ring_patch_cs_in_place(ring, p, p->job, ib);
+			r = amdgpu_ring_patch_cs_in_place(ring, p, job, ib);
 			amdgpu_bo_kunmap(aobj);
 			if (r)
 				return r;
@@ -1044,14 +1065,29 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
 	return 0;
 }
 
+static int amdgpu_cs_patch_jobs(struct amdgpu_cs_parser *p)
+{
+	unsigned int i;
+	int r;
+
+	for (i = 0; i < p->gang_size; ++i) {
+		r = amdgpu_cs_patch_ibs(p, p->jobs[i]);
+		if (r)
+			return r;
+	}
+	return 0;
+}
+
 static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 {
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_job *job = p->gang_leader;
 	struct amdgpu_device *adev = p->adev;
 	struct amdgpu_vm *vm = &fpriv->vm;
 	struct amdgpu_bo_list_entry *e;
 	struct amdgpu_bo_va *bo_va;
 	struct amdgpu_bo *bo;
+	unsigned int i;
 	int r;
 
 	r = amdgpu_vm_clear_freed(adev, vm, NULL);
@@ -1062,7 +1098,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 	if (r)
 		return r;
 
-	r = amdgpu_sync_fence(&p->job->sync, fpriv->prt_va->last_pt_update);
+	r = amdgpu_sync_fence(&job->sync, fpriv->prt_va->last_pt_update);
 	if (r)
 		return r;
 
@@ -1072,7 +1108,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 		if (r)
 			return r;
 
-		r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
+		r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update);
 		if (r)
 			return r;
 	}
@@ -1091,7 +1127,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 		if (r)
 			return r;
 
-		r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
+		r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update);
 		if (r)
 			return r;
 	}
@@ -1104,11 +1140,18 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 	if (r)
 		return r;
 
-	r = amdgpu_sync_fence(&p->job->sync, vm->last_update);
+	r = amdgpu_sync_fence(&job->sync, vm->last_update);
 	if (r)
 		return r;
 
-	p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
+	for (i = 0; i < p->gang_size; ++i) {
+		job = p->jobs[i];
+
+		if (!job->vm)
+			continue;
+
+		job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
+	}
 
 	if (amdgpu_vm_debug) {
 		/* Invalidate all BOs to test for userspace bugs */
@@ -1129,7 +1172,9 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 {
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_job *leader = p->gang_leader;
 	struct amdgpu_bo_list_entry *e;
+	unsigned int i;
 	int r;
 
 	list_for_each_entry(e, &p->validated, tv.head) {
@@ -1139,12 +1184,23 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 
 		sync_mode = amdgpu_bo_explicit_sync(bo) ?
 			AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
-		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode,
+		r = amdgpu_sync_resv(p->adev, &leader->sync, resv, sync_mode,
 				     &fpriv->vm);
 		if (r)
 			return r;
 	}
-	return 0;
+
+	for (i = 0; i < p->gang_size - 1; ++i) {
+		r = amdgpu_sync_clone(&leader->sync, &p->jobs[i]->sync);
+		if (r)
+			return r;
+	}
+
+	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entities[p->gang_size - 1]);
+	if (r && r != -ERESTARTSYS)
+		DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
+
+	return r;
 }
 
 static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p)
@@ -1168,16 +1224,26 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 			    union drm_amdgpu_cs *cs)
 {
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct drm_sched_entity *entity = p->entity;
+	struct amdgpu_job *leader = p->gang_leader;
 	struct amdgpu_bo_list_entry *e;
-	struct amdgpu_job *job;
+	unsigned int i;
 	uint64_t seq;
 	int r;
 
-	job = p->job;
-	p->job = NULL;
+	for (i = 0; i < p->gang_size; ++i)
+		drm_sched_job_arm(&p->jobs[i]->base);
 
-	drm_sched_job_arm(&job->base);
+	for (i = 0; i < (p->gang_size - 1); ++i) {
+		struct dma_fence *fence;
+
+		fence = &p->jobs[i]->base.s_fence->scheduled;
+		r = amdgpu_sync_fence(&leader->sync, fence);
+		if (r)
+			goto error_cleanup;
+	}
+
+	for (i = 0; i < p->gang_size; ++i)
+		amdgpu_job_set_gang_leader(p->jobs[i], leader);
 
 	/* No memory allocation is allowed while holding the notifier lock.
 	 * The lock is held until amdgpu_cs_submit is finished and fence is
@@ -1195,45 +1261,60 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 	}
 	if (r) {
 		r = -EAGAIN;
-		goto error_abort;
+		goto error_unlock;
 	}
 
-	p->fence = dma_fence_get(&job->base.s_fence->finished);
+	p->fence = dma_fence_get(&leader->base.s_fence->finished);
 
-	seq = amdgpu_ctx_add_fence(p->ctx, entity, p->fence);
+	seq = amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_size - 1],
+				   p->fence);
 	amdgpu_cs_post_dependencies(p);
 
-	if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
+	if ((leader->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
 	    !p->ctx->preamble_presented) {
-		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
+		leader->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
 		p->ctx->preamble_presented = true;
 	}
 
 	cs->out.handle = seq;
-	job->uf_sequence = seq;
-
-	amdgpu_job_free_resources(job);
+	leader->uf_sequence = seq;
 
-	trace_amdgpu_cs_ioctl(job);
 	amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket);
-	drm_sched_entity_push_job(&job->base);
+	for (i = 0; i < p->gang_size; ++i) {
+		amdgpu_job_free_resources(p->jobs[i]);
+		trace_amdgpu_cs_ioctl(p->jobs[i]);
+		drm_sched_entity_push_job(&p->jobs[i]->base);
+	}
 
 	amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
 
-	/* Make sure all BOs are remembered as writers */
-	amdgpu_bo_list_for_each_entry(e, p->bo_list)
-		e->tv.num_shared = 0;
+	list_for_each_entry(e, &p->validated, tv.head) {
+
+		/* Everybody except for the gang leader uses BOOKKEEP */
+		for (i = 0; i < (p->gang_size - 1); ++i) {
+			dma_resv_add_fence(e->tv.bo->base.resv,
+					   &p->jobs[i]->base.s_fence->finished,
+					   DMA_RESV_USAGE_BOOKKEEP);
+		}
 
+		/* The gang leader is remembered as writer */
+		e->tv.num_shared = 0;
+	}
 	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
+
+	for (i = 0; i < p->gang_size; ++i)
+		p->jobs[i] = NULL;
+
 	mutex_unlock(&p->adev->notifier_lock);
 	mutex_unlock(&p->bo_list->bo_list_mutex);
-
 	return 0;
 
-error_abort:
-	drm_sched_job_cleanup(&job->base);
+error_unlock:
 	mutex_unlock(&p->adev->notifier_lock);
-	amdgpu_job_free(job);
+
+error_cleanup:
+	for (i = 0; i < p->gang_size; ++i)
+		drm_sched_job_cleanup(&p->jobs[i]->base);
 	return r;
 }
 
@@ -1250,17 +1331,18 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser)
 
 	dma_fence_put(parser->fence);
 
-	if (parser->ctx) {
+	if (parser->ctx)
 		amdgpu_ctx_put(parser->ctx);
-	}
 	if (parser->bo_list)
 		amdgpu_bo_list_put(parser->bo_list);
 
 	for (i = 0; i < parser->nchunks; i++)
 		kvfree(parser->chunks[i].kdata);
 	kvfree(parser->chunks);
-	if (parser->job)
-		amdgpu_job_free(parser->job);
+	for (i = 0; i < parser->gang_size; ++i) {
+		if (parser->jobs[i])
+			amdgpu_job_free(parser->jobs[i]);
+	}
 	if (parser->uf_entry.tv.bo) {
 		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo);
 
@@ -1304,7 +1386,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 		goto error_fini;
 	}
 
-	r = amdgpu_cs_patch_ibs(&parser);
+	r = amdgpu_cs_patch_jobs(&parser);
 	if (r)
 		goto error_backoff;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
index 652b5593499f..cbaa19b2b8a3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
@@ -27,6 +27,8 @@
 #include "amdgpu_bo_list.h"
 #include "amdgpu_ring.h"
 
+#define AMDGPU_CS_GANG_SIZE	4
+
 struct amdgpu_bo_va_mapping;
 
 struct amdgpu_cs_chunk {
@@ -50,9 +52,11 @@ struct amdgpu_cs_parser {
 	unsigned		nchunks;
 	struct amdgpu_cs_chunk	*chunks;
 
-	/* scheduler job object */
-	struct drm_sched_entity	*entity;
-	struct amdgpu_job	*job;
+	/* scheduler job objects */
+	unsigned int		gang_size;
+	struct drm_sched_entity	*entities[AMDGPU_CS_GANG_SIZE];
+	struct amdgpu_job	*jobs[AMDGPU_CS_GANG_SIZE];
+	struct amdgpu_job	*gang_leader;
 
 	/* buffer objects */
 	struct ww_acquire_ctx		ticket;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
index 06dfcf297a8d..5e6ddc7e101c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
@@ -140,8 +140,10 @@ TRACE_EVENT(amdgpu_bo_create,
 );
 
 TRACE_EVENT(amdgpu_cs,
-	    TP_PROTO(struct amdgpu_cs_parser *p, int i),
-	    TP_ARGS(p, i),
+	    TP_PROTO(struct amdgpu_cs_parser *p,
+		     struct amdgpu_job *job,
+		     struct amdgpu_ib *ib),
+	    TP_ARGS(p, job, ib),
 	    TP_STRUCT__entry(
 			     __field(struct amdgpu_bo_list *, bo_list)
 			     __field(u32, ring)
@@ -151,10 +153,10 @@ TRACE_EVENT(amdgpu_cs,
 
 	    TP_fast_assign(
 			   __entry->bo_list = p->bo_list;
-			   __entry->ring = to_amdgpu_ring(p->entity->rq->sched)->idx;
-			   __entry->dw = p->job->ibs[i].length_dw;
+			   __entry->ring = to_amdgpu_ring(job->base.sched)->idx;
+			   __entry->dw = ib->length_dw;
 			   __entry->fences = amdgpu_fence_count_emitted(
-				to_amdgpu_ring(p->entity->rq->sched));
+				to_amdgpu_ring(job->base.sched));
 			   ),
 	    TP_printk("bo_list=%p, ring=%u, dw=%u, fences=%u",
 		      __entry->bo_list, __entry->ring, __entry->dw,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH 09/10] drm/amdgpu: add gang submit backend
  2022-07-14 10:39 ` [PATCH 09/10] drm/amdgpu: add gang submit backend Christian König
@ 2022-07-14 14:36   ` Andrey Grodzovsky
  2022-07-14 19:32   ` Luben Tuikov
  1 sibling, 0 replies; 22+ messages in thread
From: Andrey Grodzovsky @ 2022-07-14 14:36 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak, timur.kristof,
	Yogesh.Mohanmarimuthu
  Cc: Christian König


On 2022-07-14 06:39, Christian König wrote:
> Allows submitting jobs as a gang which needs to run on multiple
> engines at the same time.
>
> The basic idea is that we have a global gang submit fence representing when the
> gang leader is finally pushed to run on the hardware last.
>
> Jobs submitted as a gang are never re-submitted in case of a GPU reset since this
> won't work and will just deadlock the hardware immediately again.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h        |  3 ++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 34 ++++++++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c    | 28 ++++++++++++++++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h    |  3 ++
>   4 files changed, 66 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 2871a3e3801f..19308db52984 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -881,6 +881,7 @@ struct amdgpu_device {
>   	u64				fence_context;
>   	unsigned			num_rings;
>   	struct amdgpu_ring		*rings[AMDGPU_MAX_RINGS];
> +	struct dma_fence __rcu		*gang_submit;
>   	bool				ib_pool_ready;
>   	struct amdgpu_sa_manager	ib_pools[AMDGPU_IB_POOL_MAX];
>   	struct amdgpu_sched		gpu_sched[AMDGPU_HW_IP_NUM][AMDGPU_RING_PRIO_MAX];
> @@ -1288,6 +1289,8 @@ u32 amdgpu_device_pcie_port_rreg(struct amdgpu_device *adev,
>   				u32 reg);
>   void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev,
>   				u32 reg, u32 v);
> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
> +					    struct dma_fence *gang);
>   
>   /* atpx handler */
>   #if defined(CONFIG_VGA_SWITCHEROO)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index e1c9587f659b..f80beb7208c0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -3499,6 +3499,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>   	adev->gmc.gart_size = 512 * 1024 * 1024;
>   	adev->accel_working = false;
>   	adev->num_rings = 0;
> +	RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
>   	adev->mman.buffer_funcs = NULL;
>   	adev->mman.buffer_funcs_ring = NULL;
>   	adev->vm_manager.vm_pte_funcs = NULL;
> @@ -3979,6 +3980,7 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
>   	release_firmware(adev->firmware.gpu_info_fw);
>   	adev->firmware.gpu_info_fw = NULL;
>   	adev->accel_working = false;
> +	dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
>   
>   	amdgpu_reset_fini(adev);
>   
> @@ -5905,3 +5907,35 @@ void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev,
>   	(void)RREG32(data);
>   	spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
>   }
> +
> +/**
> + * amdgpu_device_switch_gang - switch to a new gang
> + * @adev: amdgpu_device pointer
> + * @gang: the gang to switch to
> + *
> + * Try to switch to a new gang or return a reference to the current gang if that
> + * isn't possible.
> + * Returns: Either NULL if we switched correctly or a reference to the existing
> + * gang.
> + */
> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
> +					    struct dma_fence *gang)
> +{
> +	struct dma_fence *old = NULL;
> +
> +	do {
> +		dma_fence_put(old);
> +		old = dma_fence_get_rcu_safe(&adev->gang_submit);
> +
> +		if (old == gang)
> +			break;
> +
> +		if (!dma_fence_is_signaled(old))
> +			return old;
> +
> +	} while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
> +			 old, gang) != old);
> +
> +	dma_fence_put(old);
> +	return NULL;
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 3255b2fca611..f3a1fdbd41a3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -180,11 +180,29 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
>   		kfree(job);
>   }
>   
> +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
> +				struct amdgpu_job *leader)
> +{
> +	struct dma_fence *fence = &leader->base.s_fence->scheduled;
> +
> +	WARN_ON(job->gang_submit);
> +
> +	/*
> +	 * Don't add a reference when we are the gang leader to avoid a
> +	 * circular dependency.
> +	 */
> +	if (job != leader)
> +		dma_fence_get(fence);
> +	job->gang_submit = fence;
> +}
> +
>   void amdgpu_job_free(struct amdgpu_job *job)
>   {
>   	amdgpu_job_free_resources(job);
>   	amdgpu_sync_free(&job->sync);
>   	amdgpu_sync_free(&job->sched_sync);
> +	if (job->gang_submit != &job->base.s_fence->scheduled)
> +		dma_fence_put(job->gang_submit);
>   
>   	/* only put the hw fence if has embedded fence */
>   	if (job->hw_fence.ops != NULL)
> @@ -258,12 +276,16 @@ static struct dma_fence *amdgpu_job_dependency(struct drm_sched_job *sched_job,
>   		fence = amdgpu_sync_get_fence(&job->sync);
>   	}
>   
> +	if (!fence && !job->gang_submit)
> +		fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);
> +
>   	return fence;


Looks like you forgot to fix a functional 'typo' we discussed - 
https://lore.kernel.org/all/dd6839eb-da19-30fd-1422-6b0aba326e7f@amd.com/
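
For reference, I assume the intended condition is the non-negated check,
i.e. only try to switch gangs when the job actually carries a gang_submit
fence. Something like (my assumption, not a confirmed fix):

	if (!fence && job->gang_submit)
		fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);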

Andrey


>   }
>   
>   static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
>   {
>   	struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched);
> +	struct amdgpu_device *adev = ring->adev;
>   	struct dma_fence *fence = NULL, *finished;
>   	struct amdgpu_job *job;
>   	int r = 0;
> @@ -275,8 +297,10 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
>   
>   	trace_amdgpu_sched_run_job(job);
>   
> -	if (job->vram_lost_counter != atomic_read(&ring->adev->vram_lost_counter))
> -		dma_fence_set_error(finished, -ECANCELED);/* skip IB as well if VRAM lost */
> +	/* Skip job if VRAM is lost and never resubmit gangs */
> +	if (job->vram_lost_counter != atomic_read(&adev->vram_lost_counter) ||
> +	    (job->job_run_counter && job->gang_submit))
> +		dma_fence_set_error(finished, -ECANCELED);
>   
>   	if (finished->error < 0) {
>   		DRM_INFO("Skip scheduling IBs!\n");
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index 0bab8fe0d419..615328130615 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -51,6 +51,7 @@ struct amdgpu_job {
>   	struct amdgpu_sync	sched_sync;
>   	struct dma_fence	hw_fence;
>   	struct dma_fence	*external_hw_fence;
> +	struct dma_fence	*gang_submit;
>   	uint32_t		preamble_status;
>   	uint32_t                preemption_status;
>   	bool                    vm_needs_flush;
> @@ -80,6 +81,8 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
>   void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
>   			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>   void amdgpu_job_free_resources(struct amdgpu_job *job);
> +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
> +				struct amdgpu_job *leader);
>   void amdgpu_job_free(struct amdgpu_job *job);
>   int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
>   		      void *owner, struct dma_fence **f);

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 02/10] drm/amdgpu: Protect the amdgpu_bo_list list with a mutex v2
  2022-07-14 10:38 ` [PATCH 02/10] drm/amdgpu: Protect the amdgpu_bo_list list with a mutex v2 Christian König
@ 2022-07-14 15:10   ` Alex Deucher
  2022-07-14 19:10   ` Luben Tuikov
  1 sibling, 0 replies; 22+ messages in thread
From: Alex Deucher @ 2022-07-14 15:10 UTC (permalink / raw)
  To: Christian König
  Cc: Andrey Grodzovsky, Timur Kristóf, Marek Olšák,
	amd-gfx list, Yogesh.Mohanmarimuthu, Luben Tuikov,
	Vitaly Prosyak, Alex Deucher, Christian König

On Thu, Jul 14, 2022 at 6:39 AM Christian König
<ckoenig.leichtzumerken@gmail.com> wrote:
>
> From: Luben Tuikov <luben.tuikov@amd.com>
>
> Protect the struct amdgpu_bo_list with a mutex. This is used during command
> submission in order to avoid buffer object corruption as recorded in
> the link below.
>
> v2 (chk): Keep the mutex locked for the whole CS to avoid using the
>           list from multiple CS threads at the same time.
>
> Suggested-by: Christian König <christian.koenig@amd.com>
> Cc: Alex Deucher <Alexander.Deucher@amd.com>
> Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
> Cc: Vitaly Prosyak <Vitaly.Prosyak@amd.com>
> Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2048
> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
> Signed-off-by: Christian König <christian.koenig@amd.com>

I think this is a valid bug fix on its own for stable.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c |  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h |  4 ++++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c      | 16 +++++++++++++---
>  3 files changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> index 714178f1b6c6..2168163aad2d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> @@ -40,7 +40,7 @@ static void amdgpu_bo_list_free_rcu(struct rcu_head *rcu)
>  {
>         struct amdgpu_bo_list *list = container_of(rcu, struct amdgpu_bo_list,
>                                                    rhead);
> -
> +       mutex_destroy(&list->bo_list_mutex);
>         kvfree(list);
>  }
>
> @@ -136,6 +136,7 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
>
>         trace_amdgpu_cs_bo_status(list->num_entries, total_size);
>
> +       mutex_init(&list->bo_list_mutex);
>         *result = list;
>         return 0;
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
> index 529d52a204cf..9caea1688fc3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
> @@ -47,6 +47,10 @@ struct amdgpu_bo_list {
>         struct amdgpu_bo *oa_obj;
>         unsigned first_userptr;
>         unsigned num_entries;
> +
> +       /* Protect access during command submission.
> +        */
> +       struct mutex bo_list_mutex;
>  };
>
>  int amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index b28af04b0c3e..d8f1335bc68f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -519,6 +519,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>                         return r;
>         }
>
> +       mutex_lock(&p->bo_list->bo_list_mutex);
> +
>         /* One for TTM and one for the CS job */
>         amdgpu_bo_list_for_each_entry(e, p->bo_list)
>                 e->tv.num_shared = 2;
> @@ -651,6 +653,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>                         kvfree(e->user_pages);
>                         e->user_pages = NULL;
>                 }
> +               mutex_unlock(&p->bo_list->bo_list_mutex);
>         }
>         return r;
>  }
> @@ -690,9 +693,11 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
>  {
>         unsigned i;
>
> -       if (error && backoff)
> +       if (error && backoff) {
>                 ttm_eu_backoff_reservation(&parser->ticket,
>                                            &parser->validated);
> +               mutex_unlock(&parser->bo_list->bo_list_mutex);
> +       }
>
>         for (i = 0; i < parser->num_post_deps; i++) {
>                 drm_syncobj_put(parser->post_deps[i].syncobj);
> @@ -832,12 +837,16 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>                         continue;
>
>                 r = amdgpu_vm_bo_update(adev, bo_va, false);
> -               if (r)
> +               if (r) {
> +                       mutex_unlock(&p->bo_list->bo_list_mutex);
>                         return r;
> +               }
>
>                 r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
> -               if (r)
> +               if (r) {
> +                       mutex_unlock(&p->bo_list->bo_list_mutex);
>                         return r;
> +               }
>         }
>
>         r = amdgpu_vm_handle_moved(adev, vm);
> @@ -1278,6 +1287,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>
>         ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
>         mutex_unlock(&p->adev->notifier_lock);
> +       mutex_unlock(&p->bo_list->bo_list_mutex);
>
>         return 0;
>
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 07/10] drm/amdgpu: move setting the job resources
  2022-07-14 10:38 ` [PATCH 07/10] drm/amdgpu: move setting the job resources Christian König
@ 2022-07-14 15:40   ` Luben Tuikov
  2022-07-14 18:32   ` Andrey Grodzovsky
  1 sibling, 0 replies; 22+ messages in thread
From: Luben Tuikov @ 2022-07-14 15:40 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak, timur.kristof,
	andrey.grodzovsky, Yogesh.Mohanmarimuthu
  Cc: Christian König

Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>

On 2022-07-14 06:38, Christian König wrote:
> Move setting the job resources into amdgpu_job.c
> 
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 21 ++-------------------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 17 +++++++++++++++++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  2 ++
>  3 files changed, 21 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index dfb7b4f46bc3..88f491dc7ca2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -828,9 +828,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>  	struct amdgpu_vm *vm = &fpriv->vm;
>  	struct amdgpu_bo_list_entry *e;
>  	struct list_head duplicates;
> -	struct amdgpu_bo *gds;
> -	struct amdgpu_bo *gws;
> -	struct amdgpu_bo *oa;
>  	int r;
>  
>  	INIT_LIST_HEAD(&p->validated);
> @@ -947,22 +944,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>  	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
>  				     p->bytes_moved_vis);
>  
> -	gds = p->bo_list->gds_obj;
> -	gws = p->bo_list->gws_obj;
> -	oa = p->bo_list->oa_obj;
> -
> -	if (gds) {
> -		p->job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
> -		p->job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
> -	}
> -	if (gws) {
> -		p->job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT;
> -		p->job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT;
> -	}
> -	if (oa) {
> -		p->job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT;
> -		p->job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT;
> -	}
> +	amdgpu_job_set_resources(p->job, p->bo_list->gds_obj,
> +				 p->bo_list->gws_obj, p->bo_list->oa_obj);
>  
>  	if (p->uf_entry.tv.bo) {
>  		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 36c1be77bf8f..3255b2fca611 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -129,6 +129,23 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
>  	return r;
>  }
>  
> +void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
> +			      struct amdgpu_bo *gws, struct amdgpu_bo *oa)
> +{
> +	if (gds) {
> +		job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
> +		job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
> +	}
> +	if (gws) {
> +		job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT;
> +		job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT;
> +	}
> +	if (oa) {
> +		job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT;
> +		job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT;
> +	}
> +}
> +
>  void amdgpu_job_free_resources(struct amdgpu_job *job)
>  {
>  	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index d599c0540b46..0bab8fe0d419 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -77,6 +77,8 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
>  		     struct amdgpu_job **job, struct amdgpu_vm *vm);
>  int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
>  		enum amdgpu_ib_pool_type pool, struct amdgpu_job **job);
> +void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
> +			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>  void amdgpu_job_free_resources(struct amdgpu_job *job);
>  void amdgpu_job_free(struct amdgpu_job *job);
>  int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,

Regards,
-- 
Luben

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq
  2022-07-14 10:38 ` [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq Christian König
@ 2022-07-14 15:43   ` Andrey Grodzovsky
  2022-07-14 16:26     ` Christian König
  0 siblings, 1 reply; 22+ messages in thread
From: Andrey Grodzovsky @ 2022-07-14 15:43 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak, timur.kristof,
	Yogesh.Mohanmarimuthu
  Cc: Christian König, dri-devel

Can you please remind me of the use case that requires this? I browsed
through the related mails in the past but haven't found where this is
needed. For amdgpu, drm_sched_job_init and drm_sched_job_arm are called
together, and amdgpu is the only driver that supports modifying entity
priority on the fly, as far as I can see.

Andrey

On 2022-07-14 06:38, Christian König wrote:
> We already discussed that the call to drm_sched_entity_select_rq() needs
> to move to drm_sched_job_arm() to be able to set a new scheduler list
> between _init() and _arm(). This was just not applied for some reason.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> CC: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> CC: dri-devel@lists.freedesktop.org
> ---
>   drivers/gpu/drm/scheduler/sched_main.c | 3 +--
>   1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 68317d3a7a27..e0ab14e0fb6b 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -592,7 +592,6 @@ int drm_sched_job_init(struct drm_sched_job *job,
>   		       struct drm_sched_entity *entity,
>   		       void *owner)
>   {
> -	drm_sched_entity_select_rq(entity);
>   	if (!entity->rq)
>   		return -ENOENT;
>   
> @@ -628,7 +627,7 @@ void drm_sched_job_arm(struct drm_sched_job *job)
>   	struct drm_sched_entity *entity = job->entity;
>   
>   	BUG_ON(!entity);
> -
> +	drm_sched_entity_select_rq(entity);
>   	sched = entity->rq->sched;
>   
>   	job->sched = sched;

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq
  2022-07-14 15:43   ` Andrey Grodzovsky
@ 2022-07-14 16:26     ` Christian König
  2022-07-14 18:25       ` Andrey Grodzovsky
  0 siblings, 1 reply; 22+ messages in thread
From: Christian König @ 2022-07-14 16:26 UTC (permalink / raw)
  To: Andrey Grodzovsky, Christian König, amd-gfx, Marek.Olsak,
	timur.kristof, Yogesh.Mohanmarimuthu
  Cc: dri-devel

We need this for limiting codecs like AV1 to the first instance for VCN3.

Essentially the idea is that we first initialize the job with entity, id 
etc... and before we submit it we select a new rq for the entity. In the 
meantime the VCN3 inline parse will have modified the available rqs for 
the entity.

See the patch "revert "fix limiting AV1 to the first instance on VCN3"" 
as well.

Christian.

On 14.07.22 at 17:43, Andrey Grodzovsky wrote:
> Can you please remind me of the use case that requires this? I browsed
> through the related mails in the past but haven't found where this is
> needed. For amdgpu, drm_sched_job_init and drm_sched_job_arm are called
> together, and amdgpu is the only driver that supports modifying entity
> priority on the fly, as far as I can see.
>
> Andrey
>
> On 2022-07-14 06:38, Christian König wrote:
>> We already discussed that the call to drm_sched_entity_select_rq() needs
>> to move to drm_sched_job_arm() to be able to set a new scheduler list
>> between _init() and _arm(). This was just not applied for some reason.
>>
>> Signed-off-by: Christian König <christian.koenig@amd.com>
>> CC: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>> CC: dri-devel@lists.freedesktop.org
>> ---
>>   drivers/gpu/drm/scheduler/sched_main.c | 3 +--
>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
>> b/drivers/gpu/drm/scheduler/sched_main.c
>> index 68317d3a7a27..e0ab14e0fb6b 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -592,7 +592,6 @@ int drm_sched_job_init(struct drm_sched_job *job,
>>                  struct drm_sched_entity *entity,
>>                  void *owner)
>>   {
>> -    drm_sched_entity_select_rq(entity);
>>       if (!entity->rq)
>>           return -ENOENT;
>>   @@ -628,7 +627,7 @@ void drm_sched_job_arm(struct drm_sched_job *job)
>>       struct drm_sched_entity *entity = job->entity;
>>         BUG_ON(!entity);
>> -
>> +    drm_sched_entity_select_rq(entity);
>>       sched = entity->rq->sched;
>>         job->sched = sched;


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq
  2022-07-14 16:26     ` Christian König
@ 2022-07-14 18:25       ` Andrey Grodzovsky
  0 siblings, 0 replies; 22+ messages in thread
From: Andrey Grodzovsky @ 2022-07-14 18:25 UTC (permalink / raw)
  To: Christian König, Christian König, amd-gfx, Marek.Olsak,
	timur.kristof, Yogesh.Mohanmarimuthu
  Cc: dri-devel

Found the new use case in patch 5/10, which reorders the CS ioctl.

Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Andrey

On 2022-07-14 12:26, Christian König wrote:
> We need this for limiting codecs like AV1 to the first instance for VCN3.
>
> Essentially the idea is that we first initialize the job with entity, 
> id etc... and before we submit it we select a new rq for the entity. 
> In the meantime the VCN3 inline parse will have modified the available 
> rqs for the entity.
>
> See the patch "revert "fix limiting AV1 to the first instance on 
> VCN3"" as well.
>
> Christian.
>
> On 14.07.22 at 17:43, Andrey Grodzovsky wrote:
>> Can you please remind me of the use case that requires this? I browsed
>> through the related mails in the past but haven't found where this is
>> needed. For amdgpu, drm_sched_job_init and drm_sched_job_arm are called
>> together, and amdgpu is the only driver that supports modifying entity
>> priority on the fly, as far as I can see.
>>
>> Andrey
>>
>> On 2022-07-14 06:38, Christian König wrote:
>>> We already discussed that the call to drm_sched_entity_select_rq() 
>>> needs
>>> to move to drm_sched_job_arm() to be able to set a new scheduler list
>>> between _init() and _arm(). This was just not applied for some reason.
>>>
>>> Signed-off-by: Christian König <christian.koenig@amd.com>
>>> CC: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>> CC: dri-devel@lists.freedesktop.org
>>> ---
>>>   drivers/gpu/drm/scheduler/sched_main.c | 3 +--
>>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
>>> b/drivers/gpu/drm/scheduler/sched_main.c
>>> index 68317d3a7a27..e0ab14e0fb6b 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -592,7 +592,6 @@ int drm_sched_job_init(struct drm_sched_job *job,
>>>                  struct drm_sched_entity *entity,
>>>                  void *owner)
>>>   {
>>> -    drm_sched_entity_select_rq(entity);
>>>       if (!entity->rq)
>>>           return -ENOENT;
>>>   @@ -628,7 +627,7 @@ void drm_sched_job_arm(struct drm_sched_job *job)
>>>       struct drm_sched_entity *entity = job->entity;
>>>         BUG_ON(!entity);
>>> -
>>> +    drm_sched_entity_select_rq(entity);
>>>       sched = entity->rq->sched;
>>>         job->sched = sched;
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 07/10] drm/amdgpu: move setting the job resources
  2022-07-14 10:38 ` [PATCH 07/10] drm/amdgpu: move setting the job resources Christian König
  2022-07-14 15:40   ` Luben Tuikov
@ 2022-07-14 18:32   ` Andrey Grodzovsky
  1 sibling, 0 replies; 22+ messages in thread
From: Andrey Grodzovsky @ 2022-07-14 18:32 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak, timur.kristof,
	Yogesh.Mohanmarimuthu
  Cc: Christian König

Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Andrey

On 2022-07-14 06:38, Christian König wrote:
> Move setting the job resources into amdgpu_job.c
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 21 ++-------------------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 17 +++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  2 ++
>   3 files changed, 21 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index dfb7b4f46bc3..88f491dc7ca2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -828,9 +828,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   	struct amdgpu_vm *vm = &fpriv->vm;
>   	struct amdgpu_bo_list_entry *e;
>   	struct list_head duplicates;
> -	struct amdgpu_bo *gds;
> -	struct amdgpu_bo *gws;
> -	struct amdgpu_bo *oa;
>   	int r;
>   
>   	INIT_LIST_HEAD(&p->validated);
> @@ -947,22 +944,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
>   				     p->bytes_moved_vis);
>   
> -	gds = p->bo_list->gds_obj;
> -	gws = p->bo_list->gws_obj;
> -	oa = p->bo_list->oa_obj;
> -
> -	if (gds) {
> -		p->job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
> -		p->job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
> -	}
> -	if (gws) {
> -		p->job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT;
> -		p->job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT;
> -	}
> -	if (oa) {
> -		p->job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT;
> -		p->job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT;
> -	}
> +	amdgpu_job_set_resources(p->job, p->bo_list->gds_obj,
> +				 p->bo_list->gws_obj, p->bo_list->oa_obj);
>   
>   	if (p->uf_entry.tv.bo) {
>   		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 36c1be77bf8f..3255b2fca611 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -129,6 +129,23 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
>   	return r;
>   }
>   
> +void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
> +			      struct amdgpu_bo *gws, struct amdgpu_bo *oa)
> +{
> +	if (gds) {
> +		job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
> +		job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
> +	}
> +	if (gws) {
> +		job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT;
> +		job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT;
> +	}
> +	if (oa) {
> +		job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT;
> +		job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT;
> +	}
> +}
> +
>   void amdgpu_job_free_resources(struct amdgpu_job *job)
>   {
>   	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index d599c0540b46..0bab8fe0d419 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -77,6 +77,8 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
>   		     struct amdgpu_job **job, struct amdgpu_vm *vm);
>   int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
>   		enum amdgpu_ib_pool_type pool, struct amdgpu_job **job);
> +void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
> +			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>   void amdgpu_job_free_resources(struct amdgpu_job *job);
>   void amdgpu_job_free(struct amdgpu_job *job);
>   int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 02/10] drm/amdgpu: Protect the amdgpu_bo_list list with a mutex v2
  2022-07-14 10:38 ` [PATCH 02/10] drm/amdgpu: Protect the amdgpu_bo_list list with a mutex v2 Christian König
  2022-07-14 15:10   ` Alex Deucher
@ 2022-07-14 19:10   ` Luben Tuikov
  1 sibling, 0 replies; 22+ messages in thread
From: Luben Tuikov @ 2022-07-14 19:10 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak, timur.kristof,
	andrey.grodzovsky, Yogesh.Mohanmarimuthu
  Cc: Alex Deucher, Christian König, Vitaly Prosyak

Tested-by: Luben Tuikov <luben.tuikov@amd.com>

This passes all IGT amd_cs_nop tests.

Regards,
Luben

On 2022-07-14 06:38, Christian König wrote:
> From: Luben Tuikov <luben.tuikov@amd.com>
> 
> Protect the struct amdgpu_bo_list with a mutex. This is used during command
> submission in order to avoid buffer object corruption as recorded in
> the link below.
> 
> v2 (chk): Keep the mutex locked for the whole CS to avoid using the
> 	  list from multiple CS threads at the same time.
> 
> Suggested-by: Christian König <christian.koenig@amd.com>
> Cc: Alex Deucher <Alexander.Deucher@amd.com>
> Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
> Cc: Vitaly Prosyak <Vitaly.Prosyak@amd.com>
> Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2048
> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c |  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h |  4 ++++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c      | 16 +++++++++++++---
>  3 files changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> index 714178f1b6c6..2168163aad2d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> @@ -40,7 +40,7 @@ static void amdgpu_bo_list_free_rcu(struct rcu_head *rcu)
>  {
>  	struct amdgpu_bo_list *list = container_of(rcu, struct amdgpu_bo_list,
>  						   rhead);
> -
> +	mutex_destroy(&list->bo_list_mutex);
>  	kvfree(list);
>  }
>  
> @@ -136,6 +136,7 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
>  
>  	trace_amdgpu_cs_bo_status(list->num_entries, total_size);
>  
> +	mutex_init(&list->bo_list_mutex);
>  	*result = list;
>  	return 0;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
> index 529d52a204cf..9caea1688fc3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
> @@ -47,6 +47,10 @@ struct amdgpu_bo_list {
>  	struct amdgpu_bo *oa_obj;
>  	unsigned first_userptr;
>  	unsigned num_entries;
> +
> +	/* Protect access during command submission.
> +	 */
> +	struct mutex bo_list_mutex;
>  };
>  
>  int amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index b28af04b0c3e..d8f1335bc68f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -519,6 +519,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>  			return r;
>  	}
>  
> +	mutex_lock(&p->bo_list->bo_list_mutex);
> +
>  	/* One for TTM and one for the CS job */
>  	amdgpu_bo_list_for_each_entry(e, p->bo_list)
>  		e->tv.num_shared = 2;
> @@ -651,6 +653,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>  			kvfree(e->user_pages);
>  			e->user_pages = NULL;
>  		}
> +		mutex_unlock(&p->bo_list->bo_list_mutex);
>  	}
>  	return r;
>  }
> @@ -690,9 +693,11 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
>  {
>  	unsigned i;
>  
> -	if (error && backoff)
> +	if (error && backoff) {
>  		ttm_eu_backoff_reservation(&parser->ticket,
>  					   &parser->validated);
> +		mutex_unlock(&parser->bo_list->bo_list_mutex);
> +	}
>  
>  	for (i = 0; i < parser->num_post_deps; i++) {
>  		drm_syncobj_put(parser->post_deps[i].syncobj);
> @@ -832,12 +837,16 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>  			continue;
>  
>  		r = amdgpu_vm_bo_update(adev, bo_va, false);
> -		if (r)
> +		if (r) {
> +			mutex_unlock(&p->bo_list->bo_list_mutex);
>  			return r;
> +		}
>  
>  		r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
> -		if (r)
> +		if (r) {
> +			mutex_unlock(&p->bo_list->bo_list_mutex);
>  			return r;
> +		}
>  	}
>  
>  	r = amdgpu_vm_handle_moved(adev, vm);
> @@ -1278,6 +1287,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>  
>  	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
>  	mutex_unlock(&p->adev->notifier_lock);
> +	mutex_unlock(&p->bo_list->bo_list_mutex);
>  
>  	return 0;
>  

Regards,
-- 
Luben

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 10/10] drm/amdgpu: add gang submit frontend v2
  2022-07-14 10:39 ` [PATCH 10/10] drm/amdgpu: add gang submit frontend v2 Christian König
@ 2022-07-14 19:10   ` Andrey Grodzovsky
  0 siblings, 0 replies; 22+ messages in thread
From: Andrey Grodzovsky @ 2022-07-14 19:10 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak, timur.kristof,
	Yogesh.Mohanmarimuthu
  Cc: Christian König

Acked-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Andrey

On 2022-07-14 06:39, Christian König wrote:
> Allows submitting jobs as a gang which needs to run on multiple engines at the
> same time.
>
> All members of the gang get the same implicit, explicit and VM dependencies. So
> no gang member will start running until everything else is ready.
>
> The last job is considered the gang leader (usually a submission to the GFX
> ring) and used for signaling output dependencies.
>
> Each job is remembered individually as user of a buffer object, so there is no
> joining of work at the end.
>
> v2: rebase and fix review comments from Andrey and Yogesh
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c    | 256 ++++++++++++++--------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h    |  10 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h |  12 +-
>   3 files changed, 183 insertions(+), 95 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index 88f491dc7ca2..e1c41db20efb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -69,6 +69,7 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
>   			   unsigned int *num_ibs)
>   {
>   	struct drm_sched_entity *entity;
> +	unsigned int i;
>   	int r;
>   
>   	r = amdgpu_ctx_get_entity(p->ctx, chunk_ib->ip_type,
> @@ -77,17 +78,28 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
>   	if (r)
>   		return r;
>   
> -	/* Abort if there is no run queue associated with this entity.
> -	 * Possibly because of disabled HW IP*/
> +	/*
> +	 * Abort if there is no run queue associated with this entity.
> +	 * Possibly because of disabled HW IP.
> +	 */
>   	if (entity->rq == NULL)
>   		return -EINVAL;
>   
> -	/* Currently we don't support submitting to multiple entities */
> -	if (p->entity && p->entity != entity)
> +	/* Check if we can add this IB to some existing job */
> +	for (i = 0; i < p->gang_size; ++i) {
> +		if (p->entities[i] == entity)
> +			goto found;
> +	}
> +
> +	/* If not increase the gang size if possible */
> +	if (i == AMDGPU_CS_GANG_SIZE)
>   		return -EINVAL;
>   
> -	p->entity = entity;
> -	++(*num_ibs);
> +	p->entities[i] = entity;
> +	p->gang_size = i + 1;
> +
> +found:
> +	++(num_ibs[i]);
>   	return 0;
>   }
>   
> @@ -161,11 +173,12 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
>   			   union drm_amdgpu_cs *cs)
>   {
>   	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> +	unsigned int num_ibs[AMDGPU_CS_GANG_SIZE] = { };
>   	struct amdgpu_vm *vm = &fpriv->vm;
>   	uint64_t *chunk_array_user;
>   	uint64_t *chunk_array;
> -	unsigned size, num_ibs = 0;
>   	uint32_t uf_offset = 0;
> +	unsigned int size;
>   	int ret;
>   	int i;
>   
> @@ -231,7 +244,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
>   			if (size < sizeof(struct drm_amdgpu_cs_chunk_ib))
>   				goto free_partial_kdata;
>   
> -			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, &num_ibs);
> +			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, num_ibs);
>   			if (ret)
>   				goto free_partial_kdata;
>   			break;
> @@ -268,21 +281,28 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
>   		}
>   	}
>   
> -	ret = amdgpu_job_alloc(p->adev, num_ibs, &p->job, vm);
> -	if (ret)
> -		goto free_all_kdata;
> +	if (!p->gang_size)
> +		return -EINVAL;
>   
> -	ret = drm_sched_job_init(&p->job->base, p->entity, &fpriv->vm);
> -	if (ret)
> -		goto free_all_kdata;
> +	for (i = 0; i < p->gang_size; ++i) {
> +		ret = amdgpu_job_alloc(p->adev, num_ibs[i], &p->jobs[i], vm);
> +		if (ret)
> +			goto free_all_kdata;
> +
> +		ret = drm_sched_job_init(&p->jobs[i]->base, p->entities[i],
> +					 &fpriv->vm);
> +		if (ret)
> +			goto free_all_kdata;
> +	}
> +	p->gang_leader = p->jobs[p->gang_size - 1];
>   
> -	if (p->ctx->vram_lost_counter != p->job->vram_lost_counter) {
> +	if (p->ctx->vram_lost_counter != p->gang_leader->vram_lost_counter) {
>   		ret = -ECANCELED;
>   		goto free_all_kdata;
>   	}
>   
>   	if (p->uf_entry.tv.bo)
> -		p->job->uf_addr = uf_offset;
> +		p->gang_leader->uf_addr = uf_offset;
>   	kvfree(chunk_array);
>   
>   	/* Use this opportunity to fill in task info for the vm */
> @@ -304,22 +324,18 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
>   	return ret;
>   }
>   
> -static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
> -			   struct amdgpu_cs_chunk *chunk,
> -			   unsigned int *num_ibs,
> -			   unsigned int *ce_preempt,
> -			   unsigned int *de_preempt)
> +static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
> +			   struct amdgpu_ib *ib, struct amdgpu_cs_chunk *chunk,
> +			   unsigned int *ce_preempt, unsigned int *de_preempt)
>   {
> -	struct amdgpu_ring *ring = to_amdgpu_ring(p->job->base.sched);
> +	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
>   	struct drm_amdgpu_cs_chunk_ib *chunk_ib = chunk->kdata;
>   	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> -	struct amdgpu_ib *ib = &p->job->ibs[*num_ibs];
>   	struct amdgpu_vm *vm = &fpriv->vm;
>   	int r;
>   
> -
>   	/* MM engine doesn't support user fences */
> -	if (p->job->uf_addr && ring->funcs->no_user_fence)
> +	if (job->uf_addr && ring->funcs->no_user_fence)
>   		return -EINVAL;
>   
>   	if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
> @@ -336,7 +352,7 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
>   	}
>   
>   	if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
> -		p->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
> +		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
>   
>   	r =  amdgpu_ib_get(p->adev, vm, ring->funcs->parse_cs ?
>   			   chunk_ib->ib_bytes : 0,
> @@ -349,8 +365,6 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
>   	ib->gpu_addr = chunk_ib->va_start;
>   	ib->length_dw = chunk_ib->ib_bytes / 4;
>   	ib->flags = chunk_ib->flags;
> -
> -	(*num_ibs)++;
>   	return 0;
>   }
>   
> @@ -399,7 +413,7 @@ static int amdgpu_cs_p2_dependencies(struct amdgpu_cs_parser *p,
>   			dma_fence_put(old);
>   		}
>   
> -		r = amdgpu_sync_fence(&p->job->sync, fence);
> +		r = amdgpu_sync_fence(&p->gang_leader->sync, fence);
>   		dma_fence_put(fence);
>   		if (r)
>   			return r;
> @@ -421,7 +435,7 @@ static int amdgpu_syncobj_lookup_and_add(struct amdgpu_cs_parser *p,
>   		return r;
>   	}
>   
> -	r = amdgpu_sync_fence(&p->job->sync, fence);
> +	r = amdgpu_sync_fence(&p->gang_leader->sync, fence);
>   	dma_fence_put(fence);
>   
>   	return r;
> @@ -544,20 +558,30 @@ static int amdgpu_cs_p2_syncobj_timeline_signal(struct amdgpu_cs_parser *p,
>   
>   static int amdgpu_cs_pass2(struct amdgpu_cs_parser *p)
>   {
> -	unsigned int num_ibs = 0, ce_preempt = 0, de_preempt = 0;
> +	unsigned int ce_preempt = 0, de_preempt = 0;
> +	unsigned int job_idx = 0, ib_idx = 0;
>   	int i, r;
>   
>   	for (i = 0; i < p->nchunks; ++i) {
>   		struct amdgpu_cs_chunk *chunk;
> +		struct amdgpu_job *job;
>   
>   		chunk = &p->chunks[i];
>   
>   		switch (chunk->chunk_id) {
>   		case AMDGPU_CHUNK_ID_IB:
> -			r = amdgpu_cs_p2_ib(p, chunk, &num_ibs,
> +			job = p->jobs[job_idx];
> +			r = amdgpu_cs_p2_ib(p, job, &job->ibs[ib_idx], chunk,
>   					    &ce_preempt, &de_preempt);
>   			if (r)
>   				return r;
> +
> +			if (++ib_idx == job->num_ibs) {
> +				++job_idx;
> +				ib_idx = 0;
> +				ce_preempt = 0;
> +				de_preempt = 0;
> +			}
>   			break;
>   		case AMDGPU_CHUNK_ID_DEPENDENCIES:
>   		case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
> @@ -828,6 +852,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   	struct amdgpu_vm *vm = &fpriv->vm;
>   	struct amdgpu_bo_list_entry *e;
>   	struct list_head duplicates;
> +	unsigned int i;
>   	int r;
>   
>   	INIT_LIST_HEAD(&p->validated);
> @@ -911,16 +936,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   		e->bo_va = amdgpu_vm_bo_find(vm, bo);
>   	}
>   
> -	/* Move fence waiting after getting reservation lock of
> -	 * PD root. Then there is no need on a ctx mutex lock.
> -	 */
> -	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity);
> -	if (unlikely(r != 0)) {
> -		if (r != -ERESTARTSYS)
> -			DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
> -		goto error_validate;
> -	}
> -
>   	amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
>   					  &p->bytes_moved_vis_threshold);
>   	p->bytes_moved = 0;
> @@ -944,8 +959,10 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
>   				     p->bytes_moved_vis);
>   
> -	amdgpu_job_set_resources(p->job, p->bo_list->gds_obj,
> -				 p->bo_list->gws_obj, p->bo_list->oa_obj);
> +	for (i = 0; i < p->gang_size; ++i)
> +		amdgpu_job_set_resources(p->jobs[i], p->bo_list->gds_obj,
> +					 p->bo_list->gws_obj,
> +					 p->bo_list->oa_obj);
>   
>   	if (p->uf_entry.tv.bo) {
>   		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
> @@ -954,7 +971,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   		if (r)
>   			goto error_validate;
>   
> -		p->job->uf_addr += amdgpu_bo_gpu_offset(uf);
> +		p->gang_leader->uf_addr += amdgpu_bo_gpu_offset(uf);
>   	}
>   	return 0;
>   
> @@ -975,20 +992,24 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   	return r;
>   }
>   
> -static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser)
> +static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *p)
>   {
> -	int i;
> +	int i, j;
>   
>   	if (!trace_amdgpu_cs_enabled())
>   		return;
>   
> -	for (i = 0; i < parser->job->num_ibs; i++)
> -		trace_amdgpu_cs(parser, i);
> +	for (i = 0; i < p->gang_size; ++i) {
> +		struct amdgpu_job *job = p->jobs[i];
> +
> +		for (j = 0; j < job->num_ibs; ++j)
> +			trace_amdgpu_cs(p, job, &job->ibs[j]);
> +	}
>   }
>   
> -static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
> +static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p,
> +			       struct amdgpu_job *job)
>   {
> -	struct amdgpu_job *job = p->job;
>   	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
>   	unsigned int i;
>   	int r;
> @@ -1029,12 +1050,12 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
>   			memcpy(ib->ptr, kptr, ib->length_dw * 4);
>   			amdgpu_bo_kunmap(aobj);
>   
> -			r = amdgpu_ring_parse_cs(ring, p, p->job, ib);
> +			r = amdgpu_ring_parse_cs(ring, p, job, ib);
>   			if (r)
>   				return r;
>   		} else {
>   			ib->ptr = (uint32_t *)kptr;
> -			r = amdgpu_ring_patch_cs_in_place(ring, p, p->job, ib);
> +			r = amdgpu_ring_patch_cs_in_place(ring, p, job, ib);
>   			amdgpu_bo_kunmap(aobj);
>   			if (r)
>   				return r;
> @@ -1044,14 +1065,29 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
>   	return 0;
>   }
>   
> +static int amdgpu_cs_patch_jobs(struct amdgpu_cs_parser *p)
> +{
> +	unsigned int i;
> +	int r;
> +
> +	for (i = 0; i < p->gang_size; ++i) {
> +		r = amdgpu_cs_patch_ibs(p, p->jobs[i]);
> +		if (r)
> +			return r;
> +	}
> +	return 0;
> +}
> +
>   static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   {
>   	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> +	struct amdgpu_job *job = p->gang_leader;
>   	struct amdgpu_device *adev = p->adev;
>   	struct amdgpu_vm *vm = &fpriv->vm;
>   	struct amdgpu_bo_list_entry *e;
>   	struct amdgpu_bo_va *bo_va;
>   	struct amdgpu_bo *bo;
> +	unsigned int i;
>   	int r;
>   
>   	r = amdgpu_vm_clear_freed(adev, vm, NULL);
> @@ -1062,7 +1098,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   	if (r)
>   		return r;
>   
> -	r = amdgpu_sync_fence(&p->job->sync, fpriv->prt_va->last_pt_update);
> +	r = amdgpu_sync_fence(&job->sync, fpriv->prt_va->last_pt_update);
>   	if (r)
>   		return r;
>   
> @@ -1072,7 +1108,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   		if (r)
>   			return r;
>   
> -		r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
> +		r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update);
>   		if (r)
>   			return r;
>   	}
> @@ -1091,7 +1127,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   		if (r)
>   			return r;
>   
> -		r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
> +		r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update);
>   		if (r)
>   			return r;
>   	}
> @@ -1104,11 +1140,18 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   	if (r)
>   		return r;
>   
> -	r = amdgpu_sync_fence(&p->job->sync, vm->last_update);
> +	r = amdgpu_sync_fence(&job->sync, vm->last_update);
>   	if (r)
>   		return r;
>   
> -	p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
> +	for (i = 0; i < p->gang_size; ++i) {
> +		job = p->jobs[i];
> +
> +		if (!job->vm)
> +			continue;
> +
> +		job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
> +	}
>   
>   	if (amdgpu_vm_debug) {
>   		/* Invalidate all BOs to test for userspace bugs */
> @@ -1129,7 +1172,9 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
>   {
>   	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> +	struct amdgpu_job *leader = p->gang_leader;
>   	struct amdgpu_bo_list_entry *e;
> +	unsigned int i;
>   	int r;
>   
>   	list_for_each_entry(e, &p->validated, tv.head) {
> @@ -1139,12 +1184,23 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
>   
>   		sync_mode = amdgpu_bo_explicit_sync(bo) ?
>   			AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
> -		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode,
> +		r = amdgpu_sync_resv(p->adev, &leader->sync, resv, sync_mode,
>   				     &fpriv->vm);
>   		if (r)
>   			return r;
>   	}
> -	return 0;
> +
> +	for (i = 0; i < p->gang_size - 1; ++i) {
> +		r = amdgpu_sync_clone(&leader->sync, &p->jobs[i]->sync);
> +		if (r)
> +			return r;
> +	}
> +
> +	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entities[p->gang_size - 1]);
> +	if (r && r != -ERESTARTSYS)
> +		DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
> +
> +	return r;
>   }
>   
>   static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p)
> @@ -1168,16 +1224,26 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>   			    union drm_amdgpu_cs *cs)
>   {
>   	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> -	struct drm_sched_entity *entity = p->entity;
> +	struct amdgpu_job *leader = p->gang_leader;
>   	struct amdgpu_bo_list_entry *e;
> -	struct amdgpu_job *job;
> +	unsigned int i;
>   	uint64_t seq;
>   	int r;
>   
> -	job = p->job;
> -	p->job = NULL;
> +	for (i = 0; i < p->gang_size; ++i)
> +		drm_sched_job_arm(&p->jobs[i]->base);
>   
> -	drm_sched_job_arm(&job->base);
> +	for (i = 0; i < (p->gang_size - 1); ++i) {
> +		struct dma_fence *fence;
> +
> +		fence = &p->jobs[i]->base.s_fence->scheduled;
> +		r = amdgpu_sync_fence(&leader->sync, fence);
> +		if (r)
> +			goto error_cleanup;
> +	}
> +
> +	for (i = 0; i < p->gang_size; ++i)
> +		amdgpu_job_set_gang_leader(p->jobs[i], leader);
>   
>   	/* No memory allocation is allowed while holding the notifier lock.
>   	 * The lock is held until amdgpu_cs_submit is finished and fence is
> @@ -1195,45 +1261,60 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>   	}
>   	if (r) {
>   		r = -EAGAIN;
> -		goto error_abort;
> +		goto error_unlock;
>   	}
>   
> -	p->fence = dma_fence_get(&job->base.s_fence->finished);
> +	p->fence = dma_fence_get(&leader->base.s_fence->finished);
>   
> -	seq = amdgpu_ctx_add_fence(p->ctx, entity, p->fence);
> +	seq = amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_size - 1],
> +				   p->fence);
>   	amdgpu_cs_post_dependencies(p);
>   
> -	if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
> +	if ((leader->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
>   	    !p->ctx->preamble_presented) {
> -		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
> +		leader->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
>   		p->ctx->preamble_presented = true;
>   	}
>   
>   	cs->out.handle = seq;
> -	job->uf_sequence = seq;
> -
> -	amdgpu_job_free_resources(job);
> +	leader->uf_sequence = seq;
>   
> -	trace_amdgpu_cs_ioctl(job);
>   	amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket);
> -	drm_sched_entity_push_job(&job->base);
> +	for (i = 0; i < p->gang_size; ++i) {
> +		amdgpu_job_free_resources(p->jobs[i]);
> +		trace_amdgpu_cs_ioctl(p->jobs[i]);
> +		drm_sched_entity_push_job(&p->jobs[i]->base);
> +	}
>   
>   	amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
>   
> -	/* Make sure all BOs are remembered as writers */
> -	amdgpu_bo_list_for_each_entry(e, p->bo_list)
> -		e->tv.num_shared = 0;
> +	list_for_each_entry(e, &p->validated, tv.head) {
> +
> +		/* Everybody except for the gang leader uses BOOKKEEP */
> +		for (i = 0; i < (p->gang_size - 1); ++i) {
> +			dma_resv_add_fence(e->tv.bo->base.resv,
> +					   &p->jobs[i]->base.s_fence->finished,
> +					   DMA_RESV_USAGE_BOOKKEEP);
> +		}
>   
> +		/* The gang leader is remembered as writer */
> +		e->tv.num_shared = 0;
> +	}
>   	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
> +
> +	for (i = 0; i < p->gang_size; ++i)
> +		p->jobs[i] = NULL;
> +
>   	mutex_unlock(&p->adev->notifier_lock);
>   	mutex_unlock(&p->bo_list->bo_list_mutex);
> -
>   	return 0;
>   
> -error_abort:
> -	drm_sched_job_cleanup(&job->base);
> +error_unlock:
>   	mutex_unlock(&p->adev->notifier_lock);
> -	amdgpu_job_free(job);
> +
> +error_cleanup:
> +	for (i = 0; i < p->gang_size; ++i)
> +		drm_sched_job_cleanup(&p->jobs[i]->base);
>   	return r;
>   }
>   
> @@ -1250,17 +1331,18 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser)
>   
>   	dma_fence_put(parser->fence);
>   
> -	if (parser->ctx) {
> +	if (parser->ctx)
>   		amdgpu_ctx_put(parser->ctx);
> -	}
>   	if (parser->bo_list)
>   		amdgpu_bo_list_put(parser->bo_list);
>   
>   	for (i = 0; i < parser->nchunks; i++)
>   		kvfree(parser->chunks[i].kdata);
>   	kvfree(parser->chunks);
> -	if (parser->job)
> -		amdgpu_job_free(parser->job);
> +	for (i = 0; i < parser->gang_size; ++i) {
> +		if (parser->jobs[i])
> +			amdgpu_job_free(parser->jobs[i]);
> +	}
>   	if (parser->uf_entry.tv.bo) {
>   		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo);
>   
> @@ -1304,7 +1386,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>   		goto error_fini;
>   	}
>   
> -	r = amdgpu_cs_patch_ibs(&parser);
> +	r = amdgpu_cs_patch_jobs(&parser);
>   	if (r)
>   		goto error_backoff;
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> index 652b5593499f..cbaa19b2b8a3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> @@ -27,6 +27,8 @@
>   #include "amdgpu_bo_list.h"
>   #include "amdgpu_ring.h"
>   
> +#define AMDGPU_CS_GANG_SIZE	4
> +
>   struct amdgpu_bo_va_mapping;
>   
>   struct amdgpu_cs_chunk {
> @@ -50,9 +52,11 @@ struct amdgpu_cs_parser {
>   	unsigned		nchunks;
>   	struct amdgpu_cs_chunk	*chunks;
>   
> -	/* scheduler job object */
> -	struct drm_sched_entity	*entity;
> -	struct amdgpu_job	*job;
> +	/* scheduler job objects */
> +	unsigned int		gang_size;
> +	struct drm_sched_entity	*entities[AMDGPU_CS_GANG_SIZE];
> +	struct amdgpu_job	*jobs[AMDGPU_CS_GANG_SIZE];
> +	struct amdgpu_job	*gang_leader;
>   
>   	/* buffer objects */
>   	struct ww_acquire_ctx		ticket;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
> index 06dfcf297a8d..5e6ddc7e101c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
> @@ -140,8 +140,10 @@ TRACE_EVENT(amdgpu_bo_create,
>   );
>   
>   TRACE_EVENT(amdgpu_cs,
> -	    TP_PROTO(struct amdgpu_cs_parser *p, int i),
> -	    TP_ARGS(p, i),
> +	    TP_PROTO(struct amdgpu_cs_parser *p,
> +		     struct amdgpu_job *job,
> +		     struct amdgpu_ib *ib),
> +	    TP_ARGS(p, job, ib),
>   	    TP_STRUCT__entry(
>   			     __field(struct amdgpu_bo_list *, bo_list)
>   			     __field(u32, ring)
> @@ -151,10 +153,10 @@ TRACE_EVENT(amdgpu_cs,
>   
>   	    TP_fast_assign(
>   			   __entry->bo_list = p->bo_list;
> -			   __entry->ring = to_amdgpu_ring(p->entity->rq->sched)->idx;
> -			   __entry->dw = p->job->ibs[i].length_dw;
> +			   __entry->ring = to_amdgpu_ring(job->base.sched)->idx;
> +			   __entry->dw = ib->length_dw;
>   			   __entry->fences = amdgpu_fence_count_emitted(
> -				to_amdgpu_ring(p->entity->rq->sched));
> +				to_amdgpu_ring(job->base.sched));
>   			   ),
>   	    TP_printk("bo_list=%p, ring=%u, dw=%u, fences=%u",
>   		      __entry->bo_list, __entry->ring, __entry->dw,

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 09/10] drm/amdgpu: add gang submit backend
  2022-07-14 10:39 ` [PATCH 09/10] drm/amdgpu: add gang submit backend Christian König
  2022-07-14 14:36   ` Andrey Grodzovsky
@ 2022-07-14 19:32   ` Luben Tuikov
  1 sibling, 0 replies; 22+ messages in thread
From: Luben Tuikov @ 2022-07-14 19:32 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak, timur.kristof,
	andrey.grodzovsky, Yogesh.Mohanmarimuthu
  Cc: Christian König

Inlined:

On 2022-07-14 06:39, Christian König wrote:
> Allows submitting jobs as a gang which needs to run on multiple
> engines at the same time.
> 
> Basic idea is that we have a global gang submit fence representing when the
> gang leader is finally pushed to run on the hardware last.
> 
> Jobs submitted as gang are never re-submitted in case of a GPU reset since this
> won't work and will just deadlock the hardware immediately again.
> 
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h        |  3 ++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 34 ++++++++++++++++++++++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c    | 28 ++++++++++++++++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h    |  3 ++
>  4 files changed, 66 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 2871a3e3801f..19308db52984 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -881,6 +881,7 @@ struct amdgpu_device {
>  	u64				fence_context;
>  	unsigned			num_rings;
>  	struct amdgpu_ring		*rings[AMDGPU_MAX_RINGS];
> +	struct dma_fence __rcu		*gang_submit;
>  	bool				ib_pool_ready;
>  	struct amdgpu_sa_manager	ib_pools[AMDGPU_IB_POOL_MAX];
>  	struct amdgpu_sched		gpu_sched[AMDGPU_HW_IP_NUM][AMDGPU_RING_PRIO_MAX];
> @@ -1288,6 +1289,8 @@ u32 amdgpu_device_pcie_port_rreg(struct amdgpu_device *adev,
>  				u32 reg);
>  void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev,
>  				u32 reg, u32 v);
> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
> +					    struct dma_fence *gang);
>  
>  /* atpx handler */
>  #if defined(CONFIG_VGA_SWITCHEROO)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index e1c9587f659b..f80beb7208c0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -3499,6 +3499,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>  	adev->gmc.gart_size = 512 * 1024 * 1024;
>  	adev->accel_working = false;
>  	adev->num_rings = 0;
> +	RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
>  	adev->mman.buffer_funcs = NULL;
>  	adev->mman.buffer_funcs_ring = NULL;
>  	adev->vm_manager.vm_pte_funcs = NULL;
> @@ -3979,6 +3980,7 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
>  	release_firmware(adev->firmware.gpu_info_fw);
>  	adev->firmware.gpu_info_fw = NULL;
>  	adev->accel_working = false;
> +	dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
>  
>  	amdgpu_reset_fini(adev);
>  
> @@ -5905,3 +5907,35 @@ void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev,
>  	(void)RREG32(data);
>  	spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
>  }
> +
> +/**
> + * amdgpu_device_switch_gang - switch to a new gang
> + * @adev: amdgpu_device pointer
> + * @gang: the gang to switch to
> + *
> + * Try to switch to a new gang or return a reference to the current gang if that
> + * isn't possible.

If you're redoing this patch (as it seems you will, given Andrey's comment),
I'd drop mentioning the return result in this sentence; having it at the end,
referring back to something mentioned earlier, makes it a bit confusing.
Perhaps just this would suffice:

     Try to switch to a new gang.

> + * Returns: Either NULL if we switched correctly or a reference to the existing
> + * gang.

Here you explain the return result; I'd drop the "Either" and say this:

     Returns NULL if we switched to the new gang, or a reference to
     the current gang.

No need for a semi-colon.
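
Taken together, the kernel-doc would then read roughly like this (just
combining the two suggestions above with the existing header lines):

/**
 * amdgpu_device_switch_gang - switch to a new gang
 * @adev: amdgpu_device pointer
 * @gang: the gang to switch to
 *
 * Try to switch to a new gang.
 *
 * Returns NULL if we switched to the new gang, or a reference to
 * the current gang.
 */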

Regards,
Luben

> + */
> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
> +					    struct dma_fence *gang)
> +{
> +	struct dma_fence *old = NULL;
> +
> +	do {
> +		dma_fence_put(old);
> +		old = dma_fence_get_rcu_safe(&adev->gang_submit);
> +
> +		if (old == gang)
> +			break;
> +
> +		if (!dma_fence_is_signaled(old))
> +			return old;
> +
> +	} while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
> +			 old, gang) != old);
> +
> +	dma_fence_put(old);
> +	return NULL;
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 3255b2fca611..f3a1fdbd41a3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -180,11 +180,29 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
>  		kfree(job);
>  }
>  
> +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
> +				struct amdgpu_job *leader)
> +{
> +	struct dma_fence *fence = &leader->base.s_fence->scheduled;
> +
> +	WARN_ON(job->gang_submit);
> +
> +	/*
> +	 * Don't add a reference when we are the gang leader to avoid a
> +	 * circular dependency.
> +	 */
> +	if (job != leader)
> +		dma_fence_get(fence);
> +	job->gang_submit = fence;
> +}
> +
>  void amdgpu_job_free(struct amdgpu_job *job)
>  {
>  	amdgpu_job_free_resources(job);
>  	amdgpu_sync_free(&job->sync);
>  	amdgpu_sync_free(&job->sched_sync);
> +	if (job->gang_submit != &job->base.s_fence->scheduled)
> +		dma_fence_put(job->gang_submit);
>  
>  	/* only put the hw fence if has embedded fence */
>  	if (job->hw_fence.ops != NULL)
> @@ -258,12 +276,16 @@ static struct dma_fence *amdgpu_job_dependency(struct drm_sched_job *sched_job,
>  		fence = amdgpu_sync_get_fence(&job->sync);
>  	}
>  
> +	if (!fence && !job->gang_submit)
> +		fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);
> +
>  	return fence;
>  }
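
For context, the scheduler keeps calling this ->dependency hook and blocks the
job on every fence it returns, so handing back the previous gang's fence here
just stalls the job until that gang has completed; once
amdgpu_device_switch_gang() returns NULL the job is free to run. Very roughly
(drm_sched side, simplified from memory, not part of this patch):

    /* drm_sched_entity_pop_job(), simplified */
    while ((entity->dependency = sched->ops->dependency(sched_job, entity))) {
            if (drm_sched_entity_add_dependency_cb(entity))
                    return NULL;    /* woken up again once the fence signals */
    }
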
>  
>  static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
>  {
>  	struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched);
> +	struct amdgpu_device *adev = ring->adev;
>  	struct dma_fence *fence = NULL, *finished;
>  	struct amdgpu_job *job;
>  	int r = 0;
> @@ -275,8 +297,10 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
>  
>  	trace_amdgpu_sched_run_job(job);
>  
> -	if (job->vram_lost_counter != atomic_read(&ring->adev->vram_lost_counter))
> -		dma_fence_set_error(finished, -ECANCELED);/* skip IB as well if VRAM lost */
> +	/* Skip job if VRAM is lost and never resubmit gangs */
> +	if (job->vram_lost_counter != atomic_read(&adev->vram_lost_counter) ||
> +	    (job->job_run_counter && job->gang_submit))
> +		dma_fence_set_error(finished, -ECANCELED);
>  
>  	if (finished->error < 0) {
>  		DRM_INFO("Skip scheduling IBs!\n");
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index 0bab8fe0d419..615328130615 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -51,6 +51,7 @@ struct amdgpu_job {
>  	struct amdgpu_sync	sched_sync;
>  	struct dma_fence	hw_fence;
>  	struct dma_fence	*external_hw_fence;
> +	struct dma_fence	*gang_submit;
>  	uint32_t		preamble_status;
>  	uint32_t                preemption_status;
>  	bool                    vm_needs_flush;
> @@ -80,6 +81,8 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
>  void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
>  			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>  void amdgpu_job_free_resources(struct amdgpu_job *job);
> +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
> +				struct amdgpu_job *leader);
>  void amdgpu_job_free(struct amdgpu_job *job);
>  int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
>  		      void *owner, struct dma_fence **f);


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq
  2022-08-15 18:59 Latest gang submit patches Christian König
@ 2022-08-15 18:59 ` Christian König
  0 siblings, 0 replies; 22+ messages in thread
From: Christian König @ 2022-08-15 18:59 UTC (permalink / raw)
  To: dri-devel; +Cc: Christian König

We already discussed that the call to drm_sched_entity_select_rq() needs
to move to drm_sched_job_arm() to be able to set a new scheduler list
between _init() and _arm(). This was just not applied for some reason.
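
For example, with this change a driver can do something like the following
(just a sketch of the sequence the reordering enables, not part of this patch):

    r = drm_sched_job_init(&job->base, entity, owner);
    if (r)
            return r;

    /* The scheduler list may still be changed here, e.g. to pin the
     * entity to one specific ring for gang submission.
     */
    drm_sched_entity_modify_sched(entity, &sched, 1);

    drm_sched_job_arm(&job->base);      /* rq/scheduler chosen only now */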

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
---
 drivers/gpu/drm/scheduler/sched_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 68317d3a7a27..e0ab14e0fb6b 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -592,7 +592,6 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		       struct drm_sched_entity *entity,
 		       void *owner)
 {
-	drm_sched_entity_select_rq(entity);
 	if (!entity->rq)
 		return -ENOENT;
 
@@ -628,7 +627,7 @@ void drm_sched_job_arm(struct drm_sched_job *job)
 	struct drm_sched_entity *entity = job->entity;
 
 	BUG_ON(!entity);
-
+	drm_sched_entity_select_rq(entity);
 	sched = entity->rq->sched;
 
 	job->sched = sched;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2022-08-15 19:00 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-14 10:38 Gang submit v2 Christian König
2022-07-14 10:38 ` [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq Christian König
2022-07-14 15:43   ` Andrey Grodzovsky
2022-07-14 16:26     ` Christian König
2022-07-14 18:25       ` Andrey Grodzovsky
2022-07-14 10:38 ` [PATCH 02/10] drm/amdgpu: Protect the amdgpu_bo_list list with a mutex v2 Christian König
2022-07-14 15:10   ` Alex Deucher
2022-07-14 19:10   ` Luben Tuikov
2022-07-14 10:38 ` [PATCH 03/10] drm/amdgpu: revert "partial revert "remove ctx->lock" v2" Christian König
2022-07-14 10:38 ` [PATCH 04/10] drm/amdgpu: use DMA_RESV_USAGE_BOOKKEEP Christian König
2022-07-14 10:38 ` [PATCH 05/10] drm/amdgpu: cleanup and reorder amdgpu_cs.c Christian König
2022-07-14 10:38 ` [PATCH 06/10] drm/amdgpu: remove SRIOV and MCBP dependencies from the CS Christian König
2022-07-14 10:38 ` [PATCH 07/10] drm/amdgpu: move setting the job resources Christian König
2022-07-14 15:40   ` Luben Tuikov
2022-07-14 18:32   ` Andrey Grodzovsky
2022-07-14 10:39 ` [PATCH 08/10] drm/amdgpu: revert "fix limiting AV1 to the first instance on VCN3" Christian König
2022-07-14 10:39 ` [PATCH 09/10] drm/amdgpu: add gang submit backend Christian König
2022-07-14 14:36   ` Andrey Grodzovsky
2022-07-14 19:32   ` Luben Tuikov
2022-07-14 10:39 ` [PATCH 10/10] drm/amdgpu: add gang submit frontend v2 Christian König
2022-07-14 19:10   ` Andrey Grodzovsky
2022-08-15 18:59 Latest gang submit patches Christian König
2022-08-15 18:59 ` [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq Christian König
