* Gang submit
@ 2022-03-03  8:22 Christian König
  2022-03-03  8:22 ` [PATCH 01/10] drm/amdgpu: install ctx entities with cmpxchg Christian König
                   ` (10 more replies)
  0 siblings, 11 replies; 27+ messages in thread
From: Christian König @ 2022-03-03  8:22 UTC
  To: amd-gfx, Marek.Olsak

Hi guys,

this patch set implements the requirements for so-called gang submissions in the CS interface.

A gang submission guarantees that multiple IBs can run on different engines at the same time.

This is implemented by keeping a global per-device gang, represented by a dma_fence which signals as soon as all jobs in the gang are pushed to the hardware.

The effect is that, as long as members of a gang are waiting to be submitted, no other gang can start pushing jobs to the hardware, so deadlocks are effectively prevented.
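
A minimal sketch of that idea (the names adev->gang_submit and
amdgpu_device_switch_gang() are illustrative assumptions here, not
necessarily the exact code in the series): each submission atomically
swaps its gang fence into the device, but only once the previous gang
has fully reached the hardware.

/* Sketch only: swap in the fence of a new gang. If another gang still
 * has members waiting to be pushed, return its fence so the caller can
 * wait for it before pushing its own jobs.
 */
struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
                                            struct dma_fence *gang)
{
        struct dma_fence *old = NULL;

        do {
                dma_fence_put(old);
                rcu_read_lock();
                old = dma_fence_get_rcu_safe(&adev->gang_submit);
                rcu_read_unlock();

                if (old == gang)
                        break;

                /* Previous gang not fully pushed yet, the caller waits */
                if (old && !dma_fence_is_signaled(old))
                        return old;

        } while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
                         old, gang) != old);

        dma_fence_put(old);
        return NULL;
}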

The whole set is based on top of my dma_resv_usage work and a few patches merged over from amd-staging-drm-next, so it won't apply cleanly elsewhere.

Please review and comment,
Christian.




* [PATCH 01/10] drm/amdgpu: install ctx entities with cmpxchg
  2022-03-03  8:22 Gang submit Christian König
@ 2022-03-03  8:22 ` Christian König
  2022-03-03 19:52   ` Andrey Grodzovsky
  2022-03-03  8:23 ` [PATCH 02/10] drm/amdgpu: header cleanup Christian König
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 27+ messages in thread
From: Christian König @ 2022-03-03  8:22 UTC
  To: amd-gfx, Marek.Olsak; +Cc: Christian König

Since we removed the context lock we need to make sure that no two threads
try to install an entity at the same time.

Signed-off-by: Christian König <christian.koenig@amd.com>
Fixes: e68efb27647f ("drm/amdgpu: remove ctx->lock")
---
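
For readers unfamiliar with the idiom: cmpxchg(ptr, old, new) atomically
stores new only if *ptr still equals old and returns the previous value,
so a non-NULL return in the hunk below means another thread installed
its entity first and ours has to be torn down again. A generic sketch of
this lock-free publish pattern (illustrative only, not the driver code):

/* Allocate privately, then publish with cmpxchg(); exactly one thread
 * wins the race, every loser frees its private copy and uses the
 * winner's object instead.
 */
struct foo *foo_get_or_create(struct foo **slot)
{
        struct foo *new, *old;

        new = kzalloc(sizeof(*new), GFP_KERNEL);
        if (!new)
                return NULL;

        old = cmpxchg(slot, NULL, new);
        if (old) {
                kfree(new);
                return old;
        }
        return new;
}
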
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index c1f8b0e37b93..72c5f1c53d6b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -204,9 +204,15 @@ static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, u32 hw_ip,
 	if (r)
 		goto error_free_entity;
 
-	ctx->entities[hw_ip][ring] = entity;
+	/* It's not an error if we fail to install the new entity */
+	if (cmpxchg(&ctx->entities[hw_ip][ring], NULL, entity))
+		goto cleanup_entity;
+
 	return 0;
 
+cleanup_entity:
+	drm_sched_entity_fini(&entity->entity);
+
 error_free_entity:
 	kfree(entity);
 
-- 
2.25.1



* [PATCH 02/10] drm/amdgpu: header cleanup
  2022-03-03  8:22 Gang submit Christian König
  2022-03-03  8:22 ` [PATCH 01/10] drm/amdgpu: install ctx entities with cmpxchg Christian König
@ 2022-03-03  8:23 ` Christian König
  2022-03-03 19:56   ` Andrey Grodzovsky
  2022-03-03  8:23 ` [PATCH 03/10] drm/amdgpu: cleanup and reorder amdgpu_cs.c Christian König
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 27+ messages in thread
From: Christian König @ 2022-03-03  8:23 UTC
  To: amd-gfx, Marek.Olsak; +Cc: Christian König

No functional change, just move a bunch of definitions from amdgpu.h into
separate header files.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h           | 95 -------------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h        | 93 ++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h       |  3 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h      | 35 ++++++-
 .../gpu/drm/amd/amdgpu/amdgpu_trace_points.c  |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c       |  1 +
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c         |  1 +
 drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c         |  1 +
 10 files changed, 132 insertions(+), 100 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index b89406b01694..7f447ed7a67f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -60,7 +60,6 @@
 #include <drm/amdgpu_drm.h>
 #include <drm/drm_gem.h>
 #include <drm/drm_ioctl.h>
-#include <drm/gpu_scheduler.h>
 
 #include <kgd_kfd_interface.h>
 #include "dm_pp_interface.h"
@@ -276,9 +275,6 @@ extern int amdgpu_num_kcq;
 #define AMDGPU_SMARTSHIFT_MIN_BIAS (-100)
 
 struct amdgpu_device;
-struct amdgpu_ib;
-struct amdgpu_cs_parser;
-struct amdgpu_job;
 struct amdgpu_irq_src;
 struct amdgpu_fpriv;
 struct amdgpu_bo_va_mapping;
@@ -465,20 +461,6 @@ struct amdgpu_flip_work {
 };
 
 
-/*
- * CP & rings.
- */
-
-struct amdgpu_ib {
-	struct amdgpu_sa_bo		*sa_bo;
-	uint32_t			length_dw;
-	uint64_t			gpu_addr;
-	uint32_t			*ptr;
-	uint32_t			flags;
-};
-
-extern const struct drm_sched_backend_ops amdgpu_sched_ops;
-
 /*
  * file private structure
  */
@@ -494,79 +476,6 @@ struct amdgpu_fpriv {
 
 int amdgpu_file_to_fpriv(struct file *filp, struct amdgpu_fpriv **fpriv);
 
-int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
-		  unsigned size,
-		  enum amdgpu_ib_pool_type pool,
-		  struct amdgpu_ib *ib);
-void amdgpu_ib_free(struct amdgpu_device *adev, struct amdgpu_ib *ib,
-		    struct dma_fence *f);
-int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
-		       struct amdgpu_ib *ibs, struct amdgpu_job *job,
-		       struct dma_fence **f);
-int amdgpu_ib_pool_init(struct amdgpu_device *adev);
-void amdgpu_ib_pool_fini(struct amdgpu_device *adev);
-int amdgpu_ib_ring_tests(struct amdgpu_device *adev);
-
-/*
- * CS.
- */
-struct amdgpu_cs_chunk {
-	uint32_t		chunk_id;
-	uint32_t		length_dw;
-	void			*kdata;
-};
-
-struct amdgpu_cs_post_dep {
-	struct drm_syncobj *syncobj;
-	struct dma_fence_chain *chain;
-	u64 point;
-};
-
-struct amdgpu_cs_parser {
-	struct amdgpu_device	*adev;
-	struct drm_file		*filp;
-	struct amdgpu_ctx	*ctx;
-
-	/* chunks */
-	unsigned		nchunks;
-	struct amdgpu_cs_chunk	*chunks;
-
-	/* scheduler job object */
-	struct amdgpu_job	*job;
-	struct drm_sched_entity	*entity;
-
-	/* buffer objects */
-	struct ww_acquire_ctx		ticket;
-	struct amdgpu_bo_list		*bo_list;
-	struct amdgpu_mn		*mn;
-	struct amdgpu_bo_list_entry	vm_pd;
-	struct list_head		validated;
-	struct dma_fence		*fence;
-	uint64_t			bytes_moved_threshold;
-	uint64_t			bytes_moved_vis_threshold;
-	uint64_t			bytes_moved;
-	uint64_t			bytes_moved_vis;
-
-	/* user fence */
-	struct amdgpu_bo_list_entry	uf_entry;
-
-	unsigned			num_post_deps;
-	struct amdgpu_cs_post_dep	*post_deps;
-};
-
-static inline u32 amdgpu_get_ib_value(struct amdgpu_cs_parser *p,
-				      uint32_t ib_idx, int idx)
-{
-	return p->job->ibs[ib_idx].ptr[idx];
-}
-
-static inline void amdgpu_set_ib_value(struct amdgpu_cs_parser *p,
-				       uint32_t ib_idx, int idx,
-				       uint32_t value)
-{
-	p->job->ibs[ib_idx].ptr[idx] = value;
-}
-
 /*
  * Writeback
  */
@@ -1425,10 +1334,6 @@ static inline int amdgpu_acpi_smart_shift_update(struct drm_device *dev,
 						 enum amdgpu_ss ss_state) { return 0; }
 #endif
 
-int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
-			   uint64_t addr, struct amdgpu_bo **bo,
-			   struct amdgpu_bo_va_mapping **mapping);
-
 #if defined(CONFIG_DRM_AMD_DC)
 int amdgpu_dm_display_resume(struct amdgpu_device *adev );
 #else
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index aff77a466f59..6b6a9d925994 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -32,6 +32,7 @@
 
 #include <drm/amdgpu_drm.h>
 #include <drm/drm_syncobj.h>
+#include "amdgpu_cs.h"
 #include "amdgpu.h"
 #include "amdgpu_trace.h"
 #include "amdgpu_gmc.h"
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
new file mode 100644
index 000000000000..92d07816743e
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
@@ -0,0 +1,93 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef __AMDGPU_CS_H__
+#define __AMDGPU_CS_H__
+
+#include "amdgpu_job.h"
+#include "amdgpu_bo_list.h"
+#include "amdgpu_ring.h"
+
+struct amdgpu_bo_va_mapping;
+
+struct amdgpu_cs_chunk {
+	uint32_t		chunk_id;
+	uint32_t		length_dw;
+	void			*kdata;
+};
+
+struct amdgpu_cs_post_dep {
+	struct drm_syncobj *syncobj;
+	struct dma_fence_chain *chain;
+	u64 point;
+};
+
+struct amdgpu_cs_parser {
+	struct amdgpu_device	*adev;
+	struct drm_file		*filp;
+	struct amdgpu_ctx	*ctx;
+
+	/* chunks */
+	unsigned		nchunks;
+	struct amdgpu_cs_chunk	*chunks;
+
+	/* scheduler job object */
+	struct amdgpu_job	*job;
+	struct drm_sched_entity	*entity;
+
+	/* buffer objects */
+	struct ww_acquire_ctx		ticket;
+	struct amdgpu_bo_list		*bo_list;
+	struct amdgpu_mn		*mn;
+	struct amdgpu_bo_list_entry	vm_pd;
+	struct list_head		validated;
+	struct dma_fence		*fence;
+	uint64_t			bytes_moved_threshold;
+	uint64_t			bytes_moved_vis_threshold;
+	uint64_t			bytes_moved;
+	uint64_t			bytes_moved_vis;
+
+	/* user fence */
+	struct amdgpu_bo_list_entry	uf_entry;
+
+	unsigned			num_post_deps;
+	struct amdgpu_cs_post_dep	*post_deps;
+};
+
+static inline u32 amdgpu_get_ib_value(struct amdgpu_cs_parser *p,
+				      uint32_t ib_idx, int idx)
+{
+	return p->job->ibs[ib_idx].ptr[idx];
+}
+
+static inline void amdgpu_set_ib_value(struct amdgpu_cs_parser *p,
+				       uint32_t ib_idx, int idx,
+				       uint32_t value)
+{
+	p->job->ibs[ib_idx].ptr[idx] = value;
+}
+
+int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
+			   uint64_t addr, struct amdgpu_bo **bo,
+			   struct amdgpu_bo_va_mapping **mapping);
+
+#endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index 9e65730193b8..6d704772ff42 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -23,6 +23,9 @@
 #ifndef __AMDGPU_JOB_H__
 #define __AMDGPU_JOB_H__
 
+#include <drm/gpu_scheduler.h>
+#include "amdgpu_sync.h"
+
 /* bit set means command submit involves a preamble IB */
 #define AMDGPU_PREAMBLE_IB_PRESENT          (1 << 0)
 /* bit set means preamble IB is first presented in belonging context */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 48365da213dc..05e789fc7a9e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -28,6 +28,13 @@
 #include <drm/gpu_scheduler.h>
 #include <drm/drm_print.h>
 
+struct amdgpu_device;
+struct amdgpu_ring;
+struct amdgpu_ib;
+struct amdgpu_cs_parser;
+struct amdgpu_job;
+struct amdgpu_vm;
+
 /* max number of rings */
 #define AMDGPU_MAX_RINGS		28
 #define AMDGPU_MAX_HWIP_RINGS		8
@@ -82,11 +89,13 @@ enum amdgpu_ib_pool_type {
 	AMDGPU_IB_POOL_MAX
 };
 
-struct amdgpu_device;
-struct amdgpu_ring;
-struct amdgpu_ib;
-struct amdgpu_cs_parser;
-struct amdgpu_job;
+struct amdgpu_ib {
+	struct amdgpu_sa_bo		*sa_bo;
+	uint32_t			length_dw;
+	uint64_t			gpu_addr;
+	uint32_t			*ptr;
+	uint32_t			flags;
+};
 
 struct amdgpu_sched {
 	u32				num_scheds;
@@ -111,6 +120,8 @@ struct amdgpu_fence_driver {
 	struct dma_fence		**fences;
 };
 
+extern const struct drm_sched_backend_ops amdgpu_sched_ops;
+
 void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
 void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);
 
@@ -352,4 +363,18 @@ int amdgpu_ring_test_helper(struct amdgpu_ring *ring);
 
 void amdgpu_debugfs_ring_init(struct amdgpu_device *adev,
 			      struct amdgpu_ring *ring);
+
+int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+		  unsigned size,
+		  enum amdgpu_ib_pool_type pool,
+		  struct amdgpu_ib *ib);
+void amdgpu_ib_free(struct amdgpu_device *adev, struct amdgpu_ib *ib,
+		    struct dma_fence *f);
+int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+		       struct amdgpu_ib *ibs, struct amdgpu_job *job,
+		       struct dma_fence **f);
+int amdgpu_ib_pool_init(struct amdgpu_device *adev);
+void amdgpu_ib_pool_fini(struct amdgpu_device *adev);
+int amdgpu_ib_ring_tests(struct amdgpu_device *adev);
+
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace_points.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace_points.c
index 57c6c39ba064..b96d885f6e33 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace_points.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace_points.c
@@ -23,6 +23,7 @@
  */
 
 #include <drm/amdgpu_drm.h>
+#include "amdgpu_cs.h"
 #include "amdgpu.h"
 
 #define CREATE_TRACE_POINTS
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index 9e102080dad9..4927c10bdc80 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -37,6 +37,7 @@
 #include "amdgpu.h"
 #include "amdgpu_pm.h"
 #include "amdgpu_uvd.h"
+#include "amdgpu_cs.h"
 #include "cikd.h"
 #include "uvd/uvd_4_2_d.h"
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index 344f711ad144..6179230b6c6e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -34,6 +34,7 @@
 #include "amdgpu.h"
 #include "amdgpu_pm.h"
 #include "amdgpu_vce.h"
+#include "amdgpu_cs.h"
 #include "cikd.h"
 
 /* 1 second timeout */
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
index b483f03b4591..7afa660e341c 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
@@ -25,6 +25,7 @@
 
 #include "amdgpu.h"
 #include "amdgpu_uvd.h"
+#include "amdgpu_cs.h"
 #include "soc15.h"
 #include "soc15d.h"
 #include "soc15_common.h"
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
index da11ceba0698..2bb75fdb9571 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
@@ -25,6 +25,7 @@
 #include "amdgpu.h"
 #include "amdgpu_vcn.h"
 #include "amdgpu_pm.h"
+#include "amdgpu_cs.h"
 #include "soc15.h"
 #include "soc15d.h"
 #include "vcn_v2_0.h"
-- 
2.25.1



* [PATCH 03/10] drm/amdgpu: cleanup and reorder amdgpu_cs.c
  2022-03-03  8:22 Gang submit Christian König
  2022-03-03  8:22 ` [PATCH 01/10] drm/amdgpu: install ctx entities with cmpxchg Christian König
  2022-03-03  8:23 ` [PATCH 02/10] drm/amdgpu: header cleanup Christian König
@ 2022-03-03  8:23 ` Christian König
  2022-03-03  8:23 ` [PATCH 04/10] drm/amdgpu: remove SRIOV and MCBP dependencies from the CS Christian König
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 27+ messages in thread
From: Christian König @ 2022-03-03  8:23 UTC
  To: amd-gfx, Marek.Olsak; +Cc: Christian König

Sort the functions into the order they are called and clean up the coding
style and function names so they reflect the data they process.

Also check the size of the IB chunk and initialize the resulting entity
and scheduler job much earlier.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
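
For orientation, the reordered top-level flow of amdgpu_cs_ioctl() after
this patch looks roughly like the outline below (condensed from the diff,
error handling and the backoff/cleanup paths omitted):

/* Condensed outline only, see the diff for the real code */
int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
{
        struct amdgpu_device *adev = drm_to_adev(dev);
        struct amdgpu_cs_parser parser;

        amdgpu_cs_parser_init(&parser, adev, filp, data); /* ctx lookup, guilty check */
        amdgpu_cs_pass1(&parser, data);      /* copy chunks, check sizes, pick the
                                              * entity, init the scheduler job */
        amdgpu_cs_pass2(&parser);            /* IB setup, dependencies, syncobjs */
        amdgpu_cs_parser_bos(&parser, data); /* reserve and validate buffers */
        amdgpu_cs_patch_ibs(&parser);        /* UVD/VCE VM emulation only */
        amdgpu_cs_vm_handling(&parser);      /* page table updates */
        amdgpu_cs_sync_rings(&parser);       /* implicit synchronization */
        trace_amdgpu_cs_ibs(&parser);
        amdgpu_cs_submit(&parser, data);     /* arm and push the job */
        amdgpu_cs_parser_fini(&parser);
        return 0;
}
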
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 1374 ++++++++++++------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h |    2 +-
 2 files changed, 683 insertions(+), 693 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 6b6a9d925994..58ddc4241f04 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -39,9 +39,61 @@
 #include "amdgpu_gem.h"
 #include "amdgpu_ras.h"
 
-static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
-				      struct drm_amdgpu_cs_chunk_fence *data,
-				      uint32_t *offset)
+static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p,
+				 struct amdgpu_device *adev,
+				 struct drm_file *filp,
+				 union drm_amdgpu_cs *cs)
+{
+	struct amdgpu_fpriv *fpriv = filp->driver_priv;
+
+	if (cs->in.num_chunks == 0)
+		return -EINVAL;
+
+	memset(p, 0, sizeof(*p));
+	p->adev = adev;
+	p->filp = filp;
+
+	p->ctx = amdgpu_ctx_get(fpriv, cs->in.ctx_id);
+	if (!p->ctx)
+		return -EINVAL;
+
+	if (atomic_read(&p->ctx->guilty)) {
+		amdgpu_ctx_put(p->ctx);
+		return -ECANCELED;
+	}
+	return 0;
+}
+
+static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
+			   struct drm_amdgpu_cs_chunk_ib *chunk_ib,
+			   unsigned int *num_ibs)
+{
+	struct drm_sched_entity *entity;
+	int r;
+
+	r = amdgpu_ctx_get_entity(p->ctx, chunk_ib->ip_type,
+				  chunk_ib->ip_instance,
+				  chunk_ib->ring, &entity);
+	if (r)
+		return r;
+
+	/* Abort if there is no run queue associated with this entity.
+	 * Possibly because of disabled HW IP*/
+	if (entity->rq == NULL)
+		return -EINVAL;
+
+	/* Currently we don't support submitting to multiple entities */
+	if (p->entity && p->entity != entity)
+		return -EINVAL;
+
+	p->entity = entity;
+	++(*num_ibs);
+	return 0;
+}
+
+static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p,
+				   struct drm_amdgpu_cs_chunk_fence *data,
+				   uint32_t *offset)
 {
 	struct drm_gem_object *gobj;
 	struct amdgpu_bo *bo;
@@ -80,11 +132,11 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
 	return r;
 }
 
-static int amdgpu_cs_bo_handles_chunk(struct amdgpu_cs_parser *p,
-				      struct drm_amdgpu_bo_list_in *data)
+static int amdgpu_cs_p1_bo_handles(struct amdgpu_cs_parser *p,
+				   struct drm_amdgpu_bo_list_in *data)
 {
+	struct drm_amdgpu_bo_list_entry *info;
 	int r;
-	struct drm_amdgpu_bo_list_entry *info = NULL;
 
 	r = amdgpu_bo_create_list_entry_array(data, &info);
 	if (r)
@@ -104,7 +156,9 @@ static int amdgpu_cs_bo_handles_chunk(struct amdgpu_cs_parser *p,
 	return r;
 }
 
-static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs *cs)
+/* Copy the data from userspace and go over it the first time */
+static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
+			   union drm_amdgpu_cs *cs)
 {
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
 	struct amdgpu_vm *vm = &fpriv->vm;
@@ -112,28 +166,14 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 	uint64_t *chunk_array;
 	unsigned size, num_ibs = 0;
 	uint32_t uf_offset = 0;
-	int i;
 	int ret;
+	int i;
 
-	if (cs->in.num_chunks == 0)
-		return 0;
-
-	chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL);
+	chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t),
+				     GFP_KERNEL);
 	if (!chunk_array)
 		return -ENOMEM;
 
-	p->ctx = amdgpu_ctx_get(fpriv, cs->in.ctx_id);
-	if (!p->ctx) {
-		ret = -EINVAL;
-		goto free_chunk;
-	}
-
-	/* skip guilty context job */
-	if (atomic_read(&p->ctx->guilty) == 1) {
-		ret = -ECANCELED;
-		goto free_chunk;
-	}
-
 	/* get chunks */
 	chunk_array_user = u64_to_user_ptr(cs->in.chunks);
 	if (copy_from_user(chunk_array, chunk_array_user,
@@ -168,7 +208,8 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 		size = p->chunks[i].length_dw;
 		cdata = u64_to_user_ptr(user_chunk.chunk_data);
 
-		p->chunks[i].kdata = kvmalloc_array(size, sizeof(uint32_t), GFP_KERNEL);
+		p->chunks[i].kdata = kvmalloc_array(size, sizeof(uint32_t),
+						    GFP_KERNEL);
 		if (p->chunks[i].kdata == NULL) {
 			ret = -ENOMEM;
 			i--;
@@ -180,36 +221,35 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 			goto free_partial_kdata;
 		}
 
+		/* Assume the worst on the following checks */
+		ret = -EINVAL;
 		switch (p->chunks[i].chunk_id) {
 		case AMDGPU_CHUNK_ID_IB:
-			++num_ibs;
+			if (size < sizeof(struct drm_amdgpu_cs_chunk_ib))
+				goto free_partial_kdata;
+
+			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, &num_ibs);
+			if (ret)
+				goto free_partial_kdata;
 			break;
 
 		case AMDGPU_CHUNK_ID_FENCE:
-			size = sizeof(struct drm_amdgpu_cs_chunk_fence);
-			if (p->chunks[i].length_dw * sizeof(uint32_t) < size) {
-				ret = -EINVAL;
+			if (size < sizeof(struct drm_amdgpu_cs_chunk_fence))
 				goto free_partial_kdata;
-			}
 
-			ret = amdgpu_cs_user_fence_chunk(p, p->chunks[i].kdata,
-							 &uf_offset);
+			ret = amdgpu_cs_p1_user_fence(p, p->chunks[i].kdata,
+						      &uf_offset);
 			if (ret)
 				goto free_partial_kdata;
-
 			break;
 
 		case AMDGPU_CHUNK_ID_BO_HANDLES:
-			size = sizeof(struct drm_amdgpu_bo_list_in);
-			if (p->chunks[i].length_dw * sizeof(uint32_t) < size) {
-				ret = -EINVAL;
+			if (size < sizeof(struct drm_amdgpu_bo_list_in))
 				goto free_partial_kdata;
-			}
 
-			ret = amdgpu_cs_bo_handles_chunk(p, p->chunks[i].kdata);
+			ret = amdgpu_cs_p1_bo_handles(p, p->chunks[i].kdata);
 			if (ret)
 				goto free_partial_kdata;
-
 			break;
 
 		case AMDGPU_CHUNK_ID_DEPENDENCIES:
@@ -221,7 +261,6 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 			break;
 
 		default:
-			ret = -EINVAL;
 			goto free_partial_kdata;
 		}
 	}
@@ -230,6 +269,10 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 	if (ret)
 		goto free_all_kdata;
 
+	ret = drm_sched_job_init(&p->job->base, p->entity, &fpriv->vm);
+	if (ret)
+		goto free_all_kdata;
+
 	if (p->ctx->vram_lost_counter != p->job->vram_lost_counter) {
 		ret = -ECANCELED;
 		goto free_all_kdata;
@@ -258,166 +301,456 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 	return ret;
 }
 
-/* Convert microseconds to bytes. */
-static u64 us_to_bytes(struct amdgpu_device *adev, s64 us)
+static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
+			   struct amdgpu_cs_chunk *chunk,
+			   unsigned int *num_ibs,
+			   unsigned int *ce_preempt,
+			   unsigned int *de_preempt)
 {
-	if (us <= 0 || !adev->mm_stats.log2_max_MBps)
-		return 0;
+	struct amdgpu_ring *ring = to_amdgpu_ring(p->job->base.sched);
+	struct drm_amdgpu_cs_chunk_ib *chunk_ib = chunk->kdata;
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_ib *ib = &p->job->ibs[*num_ibs];
+	struct amdgpu_vm *vm = &fpriv->vm;
+	int r;
 
-	/* Since accum_us is incremented by a million per second, just
-	 * multiply it by the number of MB/s to get the number of bytes.
-	 */
-	return us << adev->mm_stats.log2_max_MBps;
-}
 
-static s64 bytes_to_us(struct amdgpu_device *adev, u64 bytes)
-{
-	if (!adev->mm_stats.log2_max_MBps)
-		return 0;
+	/* MM engine doesn't support user fences */
+	if (p->job->uf_addr && ring->funcs->no_user_fence)
+		return -EINVAL;
 
-	return bytes >> adev->mm_stats.log2_max_MBps;
-}
+	if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
+	    chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT &&
+	    (amdgpu_mcbp || amdgpu_sriov_vf(p->adev))) {
+		if (chunk_ib->flags & AMDGPU_IB_FLAG_CE)
+			(*ce_preempt)++;
+		else
+			(*de_preempt)++;
 
-/* Returns how many bytes TTM can move right now. If no bytes can be moved,
- * it returns 0. If it returns non-zero, it's OK to move at least one buffer,
- * which means it can go over the threshold once. If that happens, the driver
- * will be in debt and no other buffer migrations can be done until that debt
- * is repaid.
- *
- * This approach allows moving a buffer of any size (it's important to allow
- * that).
- *
- * The currency is simply time in microseconds and it increases as the clock
- * ticks. The accumulated microseconds (us) are converted to bytes and
- * returned.
- */
-static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
-					      u64 *max_bytes,
-					      u64 *max_vis_bytes)
-{
-	s64 time_us, increment_us;
-	u64 free_vram, total_vram, used_vram;
-	/* Allow a maximum of 200 accumulated ms. This is basically per-IB
-	 * throttling.
-	 *
-	 * It means that in order to get full max MBps, at least 5 IBs per
-	 * second must be submitted and not more than 200ms apart from each
-	 * other.
-	 */
-	const s64 us_upper_bound = 200000;
+		/* Each GFX command submit allows only 1 IB max
+		 * preemptible for CE & DE */
+		if (*ce_preempt > 1 || *de_preempt > 1)
+			return -EINVAL;
+	}
 
-	if (!adev->mm_stats.log2_max_MBps) {
-		*max_bytes = 0;
-		*max_vis_bytes = 0;
-		return;
+	if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
+		p->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
+
+	r =  amdgpu_ib_get(p->adev, vm, ring->funcs->parse_cs ?
+			   chunk_ib->ib_bytes : 0,
+			   AMDGPU_IB_POOL_DELAYED, ib);
+	if (r) {
+		DRM_ERROR("Failed to get ib !\n");
+		return r;
 	}
 
-	total_vram = adev->gmc.real_vram_size - atomic64_read(&adev->vram_pin_size);
-	used_vram = ttm_resource_manager_usage(&adev->mman.vram_mgr.manager);
-	free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram;
+	ib->gpu_addr = chunk_ib->va_start;
+	ib->length_dw = chunk_ib->ib_bytes / 4;
+	ib->flags = chunk_ib->flags;
 
-	spin_lock(&adev->mm_stats.lock);
+	(*num_ibs)++;
+	return 0;
+}
 
-	/* Increase the amount of accumulated us. */
-	time_us = ktime_to_us(ktime_get());
-	increment_us = time_us - adev->mm_stats.last_update_us;
-	adev->mm_stats.last_update_us = time_us;
-	adev->mm_stats.accum_us = min(adev->mm_stats.accum_us + increment_us,
-				      us_upper_bound);
+static int amdgpu_cs_p2_dependencies(struct amdgpu_cs_parser *p,
+				     struct amdgpu_cs_chunk *chunk)
+{
+	struct drm_amdgpu_cs_chunk_dep *deps = chunk->kdata;
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	unsigned num_deps;
+	int i, r;
 
-	/* This prevents the short period of low performance when the VRAM
-	 * usage is low and the driver is in debt or doesn't have enough
-	 * accumulated us to fill VRAM quickly.
-	 *
-	 * The situation can occur in these cases:
-	 * - a lot of VRAM is freed by userspace
-	 * - the presence of a big buffer causes a lot of evictions
-	 *   (solution: split buffers into smaller ones)
-	 *
-	 * If 128 MB or 1/8th of VRAM is free, start filling it now by setting
-	 * accum_us to a positive number.
-	 */
-	if (free_vram >= 128 * 1024 * 1024 || free_vram >= total_vram / 8) {
-		s64 min_us;
+	num_deps = chunk->length_dw * 4 /
+		sizeof(struct drm_amdgpu_cs_chunk_dep);
 
-		/* Be more aggresive on dGPUs. Try to fill a portion of free
-		 * VRAM now.
-		 */
-		if (!(adev->flags & AMD_IS_APU))
-			min_us = bytes_to_us(adev, free_vram / 4);
-		else
-			min_us = 0; /* Reset accum_us on APUs. */
+	for (i = 0; i < num_deps; ++i) {
+		struct amdgpu_ctx *ctx;
+		struct drm_sched_entity *entity;
+		struct dma_fence *fence;
 
-		adev->mm_stats.accum_us = max(min_us, adev->mm_stats.accum_us);
-	}
+		ctx = amdgpu_ctx_get(fpriv, deps[i].ctx_id);
+		if (ctx == NULL)
+			return -EINVAL;
 
-	/* This is set to 0 if the driver is in debt to disallow (optional)
-	 * buffer moves.
-	 */
-	*max_bytes = us_to_bytes(adev, adev->mm_stats.accum_us);
+		r = amdgpu_ctx_get_entity(ctx, deps[i].ip_type,
+					  deps[i].ip_instance,
+					  deps[i].ring, &entity);
+		if (r) {
+			amdgpu_ctx_put(ctx);
+			return r;
+		}
 
-	/* Do the same for visible VRAM if half of it is free */
-	if (!amdgpu_gmc_vram_full_visible(&adev->gmc)) {
-		u64 total_vis_vram = adev->gmc.visible_vram_size;
-		u64 used_vis_vram =
-		  amdgpu_vram_mgr_vis_usage(&adev->mman.vram_mgr);
+		fence = amdgpu_ctx_get_fence(ctx, entity, deps[i].handle);
+		amdgpu_ctx_put(ctx);
 
-		if (used_vis_vram < total_vis_vram) {
-			u64 free_vis_vram = total_vis_vram - used_vis_vram;
-			adev->mm_stats.accum_us_vis = min(adev->mm_stats.accum_us_vis +
-							  increment_us, us_upper_bound);
+		if (IS_ERR(fence))
+			return PTR_ERR(fence);
+		else if (!fence)
+			continue;
 
-			if (free_vis_vram >= total_vis_vram / 2)
-				adev->mm_stats.accum_us_vis =
-					max(bytes_to_us(adev, free_vis_vram / 2),
-					    adev->mm_stats.accum_us_vis);
+		if (chunk->chunk_id == AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES) {
+			struct drm_sched_fence *s_fence;
+			struct dma_fence *old = fence;
+
+			s_fence = to_drm_sched_fence(fence);
+			fence = dma_fence_get(&s_fence->scheduled);
+			dma_fence_put(old);
 		}
 
-		*max_vis_bytes = us_to_bytes(adev, adev->mm_stats.accum_us_vis);
-	} else {
-		*max_vis_bytes = 0;
+		r = amdgpu_sync_fence(&p->job->sync, fence);
+		dma_fence_put(fence);
+		if (r)
+			return r;
 	}
-
-	spin_unlock(&adev->mm_stats.lock);
+	return 0;
 }
 
-/* Report how many bytes have really been moved for the last command
- * submission. This can result in a debt that can stop buffer migrations
- * temporarily.
- */
-void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes,
-				  u64 num_vis_bytes)
+static int amdgpu_syncobj_lookup_and_add(struct amdgpu_cs_parser *p,
+					 uint32_t handle, u64 point,
+					 u64 flags)
 {
-	spin_lock(&adev->mm_stats.lock);
-	adev->mm_stats.accum_us -= bytes_to_us(adev, num_bytes);
-	adev->mm_stats.accum_us_vis -= bytes_to_us(adev, num_vis_bytes);
-	spin_unlock(&adev->mm_stats.lock);
+	struct dma_fence *fence;
+	int r;
+
+	r = drm_syncobj_find_fence(p->filp, handle, point, flags, &fence);
+	if (r) {
+		DRM_ERROR("syncobj %u failed to find fence @ %llu (%d)!\n",
+			  handle, point, r);
+		return r;
+	}
+
+	r = amdgpu_sync_fence(&p->job->sync, fence);
+	dma_fence_put(fence);
+
+	return r;
 }
 
-static int amdgpu_cs_bo_validate(void *param, struct amdgpu_bo *bo)
+static int amdgpu_cs_p2_syncobj_in(struct amdgpu_cs_parser *p,
+				   struct amdgpu_cs_chunk *chunk)
 {
-	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
-	struct amdgpu_cs_parser *p = param;
-	struct ttm_operation_ctx ctx = {
-		.interruptible = true,
-		.no_wait_gpu = false,
-		.resv = bo->tbo.base.resv
-	};
-	uint32_t domain;
-	int r;
+	struct drm_amdgpu_cs_chunk_sem *deps = chunk->kdata;
+	unsigned num_deps;
+	int i, r;
 
-	if (bo->tbo.pin_count)
-		return 0;
+	num_deps = chunk->length_dw * 4 /
+		sizeof(struct drm_amdgpu_cs_chunk_sem);
+	for (i = 0; i < num_deps; ++i) {
+		r = amdgpu_syncobj_lookup_and_add(p, deps[i].handle, 0, 0);
+		if (r)
+			return r;
+	}
 
-	/* Don't move this buffer if we have depleted our allowance
-	 * to move it. Don't move anything if the threshold is zero.
-	 */
-	if (p->bytes_moved < p->bytes_moved_threshold &&
-	    (!bo->tbo.base.dma_buf ||
-	    list_empty(&bo->tbo.base.dma_buf->attachments))) {
-		if (!amdgpu_gmc_vram_full_visible(&adev->gmc) &&
-		    (bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)) {
-			/* And don't move a CPU_ACCESS_REQUIRED BO to limited
+	return 0;
+}
+
+static int amdgpu_cs_p2_syncobj_timeline_wait(struct amdgpu_cs_parser *p,
+					      struct amdgpu_cs_chunk *chunk)
+{
+	struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps = chunk->kdata;
+	unsigned num_deps;
+	int i, r;
+
+	num_deps = chunk->length_dw * 4 /
+		sizeof(struct drm_amdgpu_cs_chunk_syncobj);
+	for (i = 0; i < num_deps; ++i) {
+		r = amdgpu_syncobj_lookup_and_add(p, syncobj_deps[i].handle,
+						  syncobj_deps[i].point,
+						  syncobj_deps[i].flags);
+		if (r)
+			return r;
+	}
+
+	return 0;
+}
+
+static int amdgpu_cs_p2_syncobj_out(struct amdgpu_cs_parser *p,
+				    struct amdgpu_cs_chunk *chunk)
+{
+	struct drm_amdgpu_cs_chunk_sem *deps = chunk->kdata;
+	unsigned num_deps;
+	int i;
+
+	num_deps = chunk->length_dw * 4 /
+		sizeof(struct drm_amdgpu_cs_chunk_sem);
+
+	if (p->post_deps)
+		return -EINVAL;
+
+	p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps),
+				     GFP_KERNEL);
+	p->num_post_deps = 0;
+
+	if (!p->post_deps)
+		return -ENOMEM;
+
+
+	for (i = 0; i < num_deps; ++i) {
+		p->post_deps[i].syncobj =
+			drm_syncobj_find(p->filp, deps[i].handle);
+		if (!p->post_deps[i].syncobj)
+			return -EINVAL;
+		p->post_deps[i].chain = NULL;
+		p->post_deps[i].point = 0;
+		p->num_post_deps++;
+	}
+
+	return 0;
+}
+
+static int amdgpu_cs_p2_syncobj_timeline_signal(struct amdgpu_cs_parser *p,
+						struct amdgpu_cs_chunk *chunk)
+{
+	struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps = chunk->kdata;
+	unsigned num_deps;
+	int i;
+
+	num_deps = chunk->length_dw * 4 /
+		sizeof(struct drm_amdgpu_cs_chunk_syncobj);
+
+	if (p->post_deps)
+		return -EINVAL;
+
+	p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps),
+				     GFP_KERNEL);
+	p->num_post_deps = 0;
+
+	if (!p->post_deps)
+		return -ENOMEM;
+
+	for (i = 0; i < num_deps; ++i) {
+		struct amdgpu_cs_post_dep *dep = &p->post_deps[i];
+
+		dep->chain = NULL;
+		if (syncobj_deps[i].point) {
+			dep->chain = dma_fence_chain_alloc();
+			if (!dep->chain)
+				return -ENOMEM;
+		}
+
+		dep->syncobj = drm_syncobj_find(p->filp,
+						syncobj_deps[i].handle);
+		if (!dep->syncobj) {
+			dma_fence_chain_free(dep->chain);
+			return -EINVAL;
+		}
+		dep->point = syncobj_deps[i].point;
+		p->num_post_deps++;
+	}
+
+	return 0;
+}
+
+static int amdgpu_cs_pass2(struct amdgpu_cs_parser *p)
+{
+	unsigned int num_ibs = 0, ce_preempt = 0, de_preempt = 0;
+	int i, r;
+
+	for (i = 0; i < p->nchunks; ++i) {
+		struct amdgpu_cs_chunk *chunk;
+
+		chunk = &p->chunks[i];
+
+		switch (chunk->chunk_id) {
+		case AMDGPU_CHUNK_ID_IB:
+			r = amdgpu_cs_p2_ib(p, chunk, &num_ibs,
+					    &ce_preempt, &de_preempt);
+			if (r)
+				return r;
+			break;
+		case AMDGPU_CHUNK_ID_DEPENDENCIES:
+		case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
+			r = amdgpu_cs_p2_dependencies(p, chunk);
+			if (r)
+				return r;
+			break;
+		case AMDGPU_CHUNK_ID_SYNCOBJ_IN:
+			r = amdgpu_cs_p2_syncobj_in(p, chunk);
+			if (r)
+				return r;
+			break;
+		case AMDGPU_CHUNK_ID_SYNCOBJ_OUT:
+			r = amdgpu_cs_p2_syncobj_out(p, chunk);
+			if (r)
+				return r;
+			break;
+		case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_WAIT:
+			r = amdgpu_cs_p2_syncobj_timeline_wait(p, chunk);
+			if (r)
+				return r;
+			break;
+		case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_SIGNAL:
+			r = amdgpu_cs_p2_syncobj_timeline_signal(p, chunk);
+			if (r)
+				return r;
+			break;
+		}
+	}
+
+	return 0;
+}
+
+/* Convert microseconds to bytes. */
+static u64 us_to_bytes(struct amdgpu_device *adev, s64 us)
+{
+	if (us <= 0 || !adev->mm_stats.log2_max_MBps)
+		return 0;
+
+	/* Since accum_us is incremented by a million per second, just
+	 * multiply it by the number of MB/s to get the number of bytes.
+	 */
+	return us << adev->mm_stats.log2_max_MBps;
+}
+
+static s64 bytes_to_us(struct amdgpu_device *adev, u64 bytes)
+{
+	if (!adev->mm_stats.log2_max_MBps)
+		return 0;
+
+	return bytes >> adev->mm_stats.log2_max_MBps;
+}
+
+/* Returns how many bytes TTM can move right now. If no bytes can be moved,
+ * it returns 0. If it returns non-zero, it's OK to move at least one buffer,
+ * which means it can go over the threshold once. If that happens, the driver
+ * will be in debt and no other buffer migrations can be done until that debt
+ * is repaid.
+ *
+ * This approach allows moving a buffer of any size (it's important to allow
+ * that).
+ *
+ * The currency is simply time in microseconds and it increases as the clock
+ * ticks. The accumulated microseconds (us) are converted to bytes and
+ * returned.
+ */
+static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
+					      u64 *max_bytes,
+					      u64 *max_vis_bytes)
+{
+	s64 time_us, increment_us;
+	u64 free_vram, total_vram, used_vram;
+	/* Allow a maximum of 200 accumulated ms. This is basically per-IB
+	 * throttling.
+	 *
+	 * It means that in order to get full max MBps, at least 5 IBs per
+	 * second must be submitted and not more than 200ms apart from each
+	 * other.
+	 */
+	const s64 us_upper_bound = 200000;
+
+	if (!adev->mm_stats.log2_max_MBps) {
+		*max_bytes = 0;
+		*max_vis_bytes = 0;
+		return;
+	}
+
+	total_vram = adev->gmc.real_vram_size -
+		atomic64_read(&adev->vram_pin_size);
+	used_vram = ttm_resource_manager_usage(&adev->mman.vram_mgr.manager);
+	free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram;
+
+	spin_lock(&adev->mm_stats.lock);
+
+	/* Increase the amount of accumulated us. */
+	time_us = ktime_to_us(ktime_get());
+	increment_us = time_us - adev->mm_stats.last_update_us;
+	adev->mm_stats.last_update_us = time_us;
+	adev->mm_stats.accum_us = min(adev->mm_stats.accum_us + increment_us,
+				      us_upper_bound);
+
+	/* This prevents the short period of low performance when the VRAM
+	 * usage is low and the driver is in debt or doesn't have enough
+	 * accumulated us to fill VRAM quickly.
+	 *
+	 * The situation can occur in these cases:
+	 * - a lot of VRAM is freed by userspace
+	 * - the presence of a big buffer causes a lot of evictions
+	 *   (solution: split buffers into smaller ones)
+	 *
+	 * If 128 MB or 1/8th of VRAM is free, start filling it now by setting
+	 * accum_us to a positive number.
+	 */
+	if (free_vram >= 128 * 1024 * 1024 || free_vram >= total_vram / 8) {
+		s64 min_us;
+
+		/* Be more aggresive on dGPUs. Try to fill a portion of free
+		 * VRAM now.
+		 */
+		if (!(adev->flags & AMD_IS_APU))
+			min_us = bytes_to_us(adev, free_vram / 4);
+		else
+			min_us = 0; /* Reset accum_us on APUs. */
+
+		adev->mm_stats.accum_us = max(min_us, adev->mm_stats.accum_us);
+	}
+
+	/* This is set to 0 if the driver is in debt to disallow (optional)
+	 * buffer moves.
+	 */
+	*max_bytes = us_to_bytes(adev, adev->mm_stats.accum_us);
+
+	/* Do the same for visible VRAM if half of it is free */
+	if (!amdgpu_gmc_vram_full_visible(&adev->gmc)) {
+		u64 total_vis_vram = adev->gmc.visible_vram_size;
+		u64 used_vis_vram =
+		  amdgpu_vram_mgr_vis_usage(&adev->mman.vram_mgr);
+
+		if (used_vis_vram < total_vis_vram) {
+			u64 free_vis_vram = total_vis_vram - used_vis_vram;
+			adev->mm_stats.accum_us_vis =
+				min(adev->mm_stats.accum_us_vis +
+				    increment_us, us_upper_bound);
+
+			if (free_vis_vram >= total_vis_vram / 2)
+				adev->mm_stats.accum_us_vis =
+					max(bytes_to_us(adev, free_vis_vram / 2),
+					    adev->mm_stats.accum_us_vis);
+		}
+
+		*max_vis_bytes = us_to_bytes(adev, adev->mm_stats.accum_us_vis);
+	} else {
+		*max_vis_bytes = 0;
+	}
+
+	spin_unlock(&adev->mm_stats.lock);
+}
+
+/* Report how many bytes have really been moved for the last command
+ * submission. This can result in a debt that can stop buffer migrations
+ * temporarily.
+ */
+void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes,
+				  u64 num_vis_bytes)
+{
+	spin_lock(&adev->mm_stats.lock);
+	adev->mm_stats.accum_us -= bytes_to_us(adev, num_bytes);
+	adev->mm_stats.accum_us_vis -= bytes_to_us(adev, num_vis_bytes);
+	spin_unlock(&adev->mm_stats.lock);
+}
+
+static int amdgpu_cs_bo_validate(void *param, struct amdgpu_bo *bo)
+{
+	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+	struct amdgpu_cs_parser *p = param;
+	struct ttm_operation_ctx ctx = {
+		.interruptible = true,
+		.no_wait_gpu = false,
+		.resv = bo->tbo.base.resv
+	};
+	uint32_t domain;
+	int r;
+
+	if (bo->tbo.pin_count)
+		return 0;
+
+	/* Don't move this buffer if we have depleted our allowance
+	 * to move it. Don't move anything if the threshold is zero.
+	 */
+	if (p->bytes_moved < p->bytes_moved_threshold &&
+	    (!bo->tbo.base.dma_buf ||
+	    list_empty(&bo->tbo.base.dma_buf->attachments))) {
+		if (!amdgpu_gmc_vram_full_visible(&adev->gmc) &&
+		    (bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)) {
+			/* And don't move a CPU_ACCESS_REQUIRED BO to limited
 			 * visible VRAM if we've depleted our allowance to do
 			 * that.
 			 */
@@ -640,537 +973,173 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	return r;
 }
 
-static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
-{
-	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct amdgpu_bo_list_entry *e;
-	int r;
-
-	list_for_each_entry(e, &p->validated, tv.head) {
-		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
-		struct dma_resv *resv = bo->tbo.base.resv;
-		enum amdgpu_sync_mode sync_mode;
-
-		sync_mode = amdgpu_bo_explicit_sync(bo) ?
-			AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
-		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode,
-				     &fpriv->vm);
-		if (r)
-			return r;
-	}
-	return 0;
-}
-
-/**
- * amdgpu_cs_parser_fini() - clean parser states
- * @parser:	parser structure holding parsing context.
- * @error:	error number
- * @backoff:	indicator to backoff the reservation
- *
- * If error is set then unvalidate buffer, otherwise just free memory
- * used by parsing context.
- **/
-static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
-				  bool backoff)
-{
-	unsigned i;
-
-	if (error && backoff)
-		ttm_eu_backoff_reservation(&parser->ticket,
-					   &parser->validated);
-
-	for (i = 0; i < parser->num_post_deps; i++) {
-		drm_syncobj_put(parser->post_deps[i].syncobj);
-		kfree(parser->post_deps[i].chain);
-	}
-	kfree(parser->post_deps);
-
-	dma_fence_put(parser->fence);
-
-	if (parser->ctx) {
-		amdgpu_ctx_put(parser->ctx);
-	}
-	if (parser->bo_list)
-		amdgpu_bo_list_put(parser->bo_list);
-
-	for (i = 0; i < parser->nchunks; i++)
-		kvfree(parser->chunks[i].kdata);
-	kvfree(parser->chunks);
-	if (parser->job)
-		amdgpu_job_free(parser->job);
-	if (parser->uf_entry.tv.bo) {
-		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo);
-
-		amdgpu_bo_unref(&uf);
-	}
-}
-
-static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
-{
-	struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
-	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct amdgpu_device *adev = p->adev;
-	struct amdgpu_vm *vm = &fpriv->vm;
-	struct amdgpu_bo_list_entry *e;
-	struct amdgpu_bo_va *bo_va;
-	struct amdgpu_bo *bo;
-	int r;
-
-	/* Only for UVD/VCE VM emulation */
-	if (ring->funcs->parse_cs || ring->funcs->patch_cs_in_place) {
-		unsigned i, j;
-
-		for (i = 0, j = 0; i < p->nchunks && j < p->job->num_ibs; i++) {
-			struct drm_amdgpu_cs_chunk_ib *chunk_ib;
-			struct amdgpu_bo_va_mapping *m;
-			struct amdgpu_bo *aobj = NULL;
-			struct amdgpu_cs_chunk *chunk;
-			uint64_t offset, va_start;
-			struct amdgpu_ib *ib;
-			uint8_t *kptr;
-
-			chunk = &p->chunks[i];
-			ib = &p->job->ibs[j];
-			chunk_ib = chunk->kdata;
-
-			if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
-				continue;
-
-			va_start = chunk_ib->va_start & AMDGPU_GMC_HOLE_MASK;
-			r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
-			if (r) {
-				DRM_ERROR("IB va_start is invalid\n");
-				return r;
-			}
-
-			if ((va_start + chunk_ib->ib_bytes) >
-			    (m->last + 1) * AMDGPU_GPU_PAGE_SIZE) {
-				DRM_ERROR("IB va_start+ib_bytes is invalid\n");
-				return -EINVAL;
-			}
-
-			/* the IB should be reserved at this point */
-			r = amdgpu_bo_kmap(aobj, (void **)&kptr);
-			if (r) {
-				return r;
-			}
-
-			offset = m->start * AMDGPU_GPU_PAGE_SIZE;
-			kptr += va_start - offset;
-
-			if (ring->funcs->parse_cs) {
-				memcpy(ib->ptr, kptr, chunk_ib->ib_bytes);
-				amdgpu_bo_kunmap(aobj);
-
-				r = amdgpu_ring_parse_cs(ring, p, j);
-				if (r)
-					return r;
-			} else {
-				ib->ptr = (uint32_t *)kptr;
-				r = amdgpu_ring_patch_cs_in_place(ring, p, j);
-				amdgpu_bo_kunmap(aobj);
-				if (r)
-					return r;
-			}
-
-			j++;
-		}
-	}
-
-	if (!p->job->vm)
-		return amdgpu_cs_sync_rings(p);
-
-
-	r = amdgpu_vm_clear_freed(adev, vm, NULL);
-	if (r)
-		return r;
-
-	r = amdgpu_vm_bo_update(adev, fpriv->prt_va, false, NULL);
-	if (r)
-		return r;
-
-	r = amdgpu_sync_vm_fence(&p->job->sync, fpriv->prt_va->last_pt_update);
-	if (r)
-		return r;
-
-	if (amdgpu_mcbp || amdgpu_sriov_vf(adev)) {
-		bo_va = fpriv->csa_va;
-		BUG_ON(!bo_va);
-		r = amdgpu_vm_bo_update(adev, bo_va, false, NULL);
-		if (r)
-			return r;
-
-		r = amdgpu_sync_vm_fence(&p->job->sync, bo_va->last_pt_update);
-		if (r)
-			return r;
-	}
-
-	amdgpu_bo_list_for_each_entry(e, p->bo_list) {
-		/* ignore duplicates */
-		bo = ttm_to_amdgpu_bo(e->tv.bo);
-		if (!bo)
-			continue;
-
-		bo_va = e->bo_va;
-		if (bo_va == NULL)
-			continue;
-
-		r = amdgpu_vm_bo_update(adev, bo_va, false, NULL);
-		if (r)
-			return r;
-
-		r = amdgpu_sync_vm_fence(&p->job->sync, bo_va->last_pt_update);
-		if (r)
-			return r;
-	}
-
-	r = amdgpu_vm_handle_moved(adev, vm);
-	if (r)
-		return r;
-
-	r = amdgpu_vm_update_pdes(adev, vm, false);
-	if (r)
-		return r;
-
-	r = amdgpu_sync_vm_fence(&p->job->sync, vm->last_update);
-	if (r)
-		return r;
-
-	p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
-
-	if (amdgpu_vm_debug) {
-		/* Invalidate all BOs to test for userspace bugs */
-		amdgpu_bo_list_for_each_entry(e, p->bo_list) {
-			struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
-
-			/* ignore duplicates */
-			if (!bo)
-				continue;
-
-			amdgpu_vm_bo_invalidate(adev, bo, false);
-		}
-	}
-
-	return amdgpu_cs_sync_rings(p);
-}
-
-static int amdgpu_cs_ib_fill(struct amdgpu_device *adev,
-			     struct amdgpu_cs_parser *parser)
-{
-	struct amdgpu_fpriv *fpriv = parser->filp->driver_priv;
-	struct amdgpu_vm *vm = &fpriv->vm;
-	int r, ce_preempt = 0, de_preempt = 0;
-	struct amdgpu_ring *ring;
-	int i, j;
-
-	for (i = 0, j = 0; i < parser->nchunks && j < parser->job->num_ibs; i++) {
-		struct amdgpu_cs_chunk *chunk;
-		struct amdgpu_ib *ib;
-		struct drm_amdgpu_cs_chunk_ib *chunk_ib;
-		struct drm_sched_entity *entity;
-
-		chunk = &parser->chunks[i];
-		ib = &parser->job->ibs[j];
-		chunk_ib = (struct drm_amdgpu_cs_chunk_ib *)chunk->kdata;
-
-		if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
-			continue;
-
-		if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
-		    (amdgpu_mcbp || amdgpu_sriov_vf(adev))) {
-			if (chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT) {
-				if (chunk_ib->flags & AMDGPU_IB_FLAG_CE)
-					ce_preempt++;
-				else
-					de_preempt++;
-			}
-
-			/* each GFX command submit allows 0 or 1 IB preemptible for CE & DE */
-			if (ce_preempt > 1 || de_preempt > 1)
-				return -EINVAL;
-		}
-
-		r = amdgpu_ctx_get_entity(parser->ctx, chunk_ib->ip_type,
-					  chunk_ib->ip_instance, chunk_ib->ring,
-					  &entity);
-		if (r)
-			return r;
-
-		if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
-			parser->job->preamble_status |=
-				AMDGPU_PREAMBLE_IB_PRESENT;
-
-		if (parser->entity && parser->entity != entity)
-			return -EINVAL;
-
-		/* Return if there is no run queue associated with this entity.
-		 * Possibly because of disabled HW IP*/
-		if (entity->rq == NULL)
-			return -EINVAL;
-
-		parser->entity = entity;
-
-		ring = to_amdgpu_ring(entity->rq->sched);
-		r =  amdgpu_ib_get(adev, vm, ring->funcs->parse_cs ?
-				   chunk_ib->ib_bytes : 0,
-				   AMDGPU_IB_POOL_DELAYED, ib);
-		if (r) {
-			DRM_ERROR("Failed to get ib !\n");
-			return r;
-		}
-
-		ib->gpu_addr = chunk_ib->va_start;
-		ib->length_dw = chunk_ib->ib_bytes / 4;
-		ib->flags = chunk_ib->flags;
-
-		j++;
-	}
-
-	/* MM engine doesn't support user fences */
-	ring = to_amdgpu_ring(parser->entity->rq->sched);
-	if (parser->job->uf_addr && ring->funcs->no_user_fence)
-		return -EINVAL;
-
-	return 0;
-}
-
-static int amdgpu_cs_process_fence_dep(struct amdgpu_cs_parser *p,
-				       struct amdgpu_cs_chunk *chunk)
-{
-	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	unsigned num_deps;
-	int i, r;
-	struct drm_amdgpu_cs_chunk_dep *deps;
-
-	deps = (struct drm_amdgpu_cs_chunk_dep *)chunk->kdata;
-	num_deps = chunk->length_dw * 4 /
-		sizeof(struct drm_amdgpu_cs_chunk_dep);
-
-	for (i = 0; i < num_deps; ++i) {
-		struct amdgpu_ctx *ctx;
-		struct drm_sched_entity *entity;
-		struct dma_fence *fence;
-
-		ctx = amdgpu_ctx_get(fpriv, deps[i].ctx_id);
-		if (ctx == NULL)
-			return -EINVAL;
-
-		r = amdgpu_ctx_get_entity(ctx, deps[i].ip_type,
-					  deps[i].ip_instance,
-					  deps[i].ring, &entity);
-		if (r) {
-			amdgpu_ctx_put(ctx);
-			return r;
-		}
-
-		fence = amdgpu_ctx_get_fence(ctx, entity, deps[i].handle);
-		amdgpu_ctx_put(ctx);
-
-		if (IS_ERR(fence))
-			return PTR_ERR(fence);
-		else if (!fence)
-			continue;
-
-		if (chunk->chunk_id == AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES) {
-			struct drm_sched_fence *s_fence;
-			struct dma_fence *old = fence;
-
-			s_fence = to_drm_sched_fence(fence);
-			fence = dma_fence_get(&s_fence->scheduled);
-			dma_fence_put(old);
-		}
-
-		r = amdgpu_sync_fence(&p->job->sync, fence);
-		dma_fence_put(fence);
-		if (r)
-			return r;
-	}
-	return 0;
-}
-
-static int amdgpu_syncobj_lookup_and_add_to_sync(struct amdgpu_cs_parser *p,
-						 uint32_t handle, u64 point,
-						 u64 flags)
+static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser)
 {
-	struct dma_fence *fence;
-	int r;
-
-	r = drm_syncobj_find_fence(p->filp, handle, point, flags, &fence);
-	if (r) {
-		DRM_ERROR("syncobj %u failed to find fence @ %llu (%d)!\n",
-			  handle, point, r);
-		return r;
-	}
+	int i;
 
-	r = amdgpu_sync_fence(&p->job->sync, fence);
-	dma_fence_put(fence);
+	if (!trace_amdgpu_cs_enabled())
+		return;
 
-	return r;
+	for (i = 0; i < parser->job->num_ibs; i++)
+		trace_amdgpu_cs(parser, i);
 }
 
-static int amdgpu_cs_process_syncobj_in_dep(struct amdgpu_cs_parser *p,
-					    struct amdgpu_cs_chunk *chunk)
+static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
 {
-	struct drm_amdgpu_cs_chunk_sem *deps;
-	unsigned num_deps;
-	int i, r;
+	struct amdgpu_job *job = p->job;
+	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
+	unsigned int i;
+	int r;
 
-	deps = (struct drm_amdgpu_cs_chunk_sem *)chunk->kdata;
-	num_deps = chunk->length_dw * 4 /
-		sizeof(struct drm_amdgpu_cs_chunk_sem);
-	for (i = 0; i < num_deps; ++i) {
-		r = amdgpu_syncobj_lookup_and_add_to_sync(p, deps[i].handle,
-							  0, 0);
-		if (r)
-			return r;
-	}
+	/* Only for UVD/VCE VM emulation */
+	if (!ring->funcs->parse_cs && !ring->funcs->patch_cs_in_place)
+		return 0;
 
-	return 0;
-}
+	for (i = 0; i < job->num_ibs; ++i) {
+		struct amdgpu_bo_va_mapping *m;
+		struct amdgpu_bo *aobj;
+		struct amdgpu_ib *ib;
+		uint64_t va_start;
+		uint8_t *kptr;
 
+		va_start = job->ibs[i].gpu_addr;
+		r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
+		if (r) {
+			DRM_ERROR("IB va_start is invalid\n");
+			return r;
+		}
 
-static int amdgpu_cs_process_syncobj_timeline_in_dep(struct amdgpu_cs_parser *p,
-						     struct amdgpu_cs_chunk *chunk)
-{
-	struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps;
-	unsigned num_deps;
-	int i, r;
+		if ((va_start + job->ibs[i].length_dw * 4) >
+		    (m->last + 1) * AMDGPU_GPU_PAGE_SIZE) {
+			DRM_ERROR("IB va_start+ib_bytes is invalid\n");
+			return -EINVAL;
+		}
 
-	syncobj_deps = (struct drm_amdgpu_cs_chunk_syncobj *)chunk->kdata;
-	num_deps = chunk->length_dw * 4 /
-		sizeof(struct drm_amdgpu_cs_chunk_syncobj);
-	for (i = 0; i < num_deps; ++i) {
-		r = amdgpu_syncobj_lookup_and_add_to_sync(p,
-							  syncobj_deps[i].handle,
-							  syncobj_deps[i].point,
-							  syncobj_deps[i].flags);
+		/* the IB should be reserved at this point */
+		r = amdgpu_bo_kmap(aobj, (void **)&kptr);
 		if (r)
 			return r;
+
+		kptr += va_start - (m->start * AMDGPU_GPU_PAGE_SIZE);
+		if (ring->funcs->parse_cs) {
+			memcpy(ib->ptr, kptr, job->ibs[i].length_dw * 4);
+			amdgpu_bo_kunmap(aobj);
+
+			r = amdgpu_ring_parse_cs(ring, p, i);
+			if (r)
+				return r;
+		} else {
+			ib->ptr = (uint32_t *)kptr;
+			r = amdgpu_ring_patch_cs_in_place(ring, p, i);
+			amdgpu_bo_kunmap(aobj);
+			if (r)
+				return r;
+		}
 	}
 
 	return 0;
 }
 
-static int amdgpu_cs_process_syncobj_out_dep(struct amdgpu_cs_parser *p,
-					     struct amdgpu_cs_chunk *chunk)
+static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 {
-	struct drm_amdgpu_cs_chunk_sem *deps;
-	unsigned num_deps;
-	int i;
-
-	deps = (struct drm_amdgpu_cs_chunk_sem *)chunk->kdata;
-	num_deps = chunk->length_dw * 4 /
-		sizeof(struct drm_amdgpu_cs_chunk_sem);
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_device *adev = p->adev;
+	struct amdgpu_vm *vm = &fpriv->vm;
+	struct amdgpu_bo_list_entry *e;
+	struct amdgpu_bo_va *bo_va;
+	struct amdgpu_bo *bo;
+	int r;
 
-	if (p->post_deps)
-		return -EINVAL;
+	r = amdgpu_vm_clear_freed(adev, vm, NULL);
+	if (r)
+		return r;
 
-	p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps),
-				     GFP_KERNEL);
-	p->num_post_deps = 0;
+	r = amdgpu_vm_bo_update(adev, fpriv->prt_va, false, NULL);
+	if (r)
+		return r;
 
-	if (!p->post_deps)
-		return -ENOMEM;
+	r = amdgpu_sync_vm_fence(&p->job->sync, fpriv->prt_va->last_pt_update);
+	if (r)
+		return r;
 
+	if (amdgpu_mcbp || amdgpu_sriov_vf(adev)) {
+		bo_va = fpriv->csa_va;
+		r = amdgpu_vm_bo_update(adev, bo_va, false, NULL);
+		if (r)
+			return r;
 
-	for (i = 0; i < num_deps; ++i) {
-		p->post_deps[i].syncobj =
-			drm_syncobj_find(p->filp, deps[i].handle);
-		if (!p->post_deps[i].syncobj)
-			return -EINVAL;
-		p->post_deps[i].chain = NULL;
-		p->post_deps[i].point = 0;
-		p->num_post_deps++;
+		r = amdgpu_sync_vm_fence(&p->job->sync, bo_va->last_pt_update);
+		if (r)
+			return r;
 	}
 
-	return 0;
-}
+	amdgpu_bo_list_for_each_entry(e, p->bo_list) {
+		/* ignore duplicates */
+		bo = ttm_to_amdgpu_bo(e->tv.bo);
+		if (!bo)
+			continue;
 
+		bo_va = e->bo_va;
+		if (bo_va == NULL)
+			continue;
 
-static int amdgpu_cs_process_syncobj_timeline_out_dep(struct amdgpu_cs_parser *p,
-						      struct amdgpu_cs_chunk *chunk)
-{
-	struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps;
-	unsigned num_deps;
-	int i;
+		r = amdgpu_vm_bo_update(adev, bo_va, false, NULL);
+		if (r)
+			return r;
 
-	syncobj_deps = (struct drm_amdgpu_cs_chunk_syncobj *)chunk->kdata;
-	num_deps = chunk->length_dw * 4 /
-		sizeof(struct drm_amdgpu_cs_chunk_syncobj);
+		r = amdgpu_sync_vm_fence(&p->job->sync, bo_va->last_pt_update);
+		if (r)
+			return r;
+	}
 
-	if (p->post_deps)
-		return -EINVAL;
+	r = amdgpu_vm_handle_moved(adev, vm);
+	if (r)
+		return r;
 
-	p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps),
-				     GFP_KERNEL);
-	p->num_post_deps = 0;
+	r = amdgpu_vm_update_pdes(adev, vm, false);
+	if (r)
+		return r;
 
-	if (!p->post_deps)
-		return -ENOMEM;
+	r = amdgpu_sync_vm_fence(&p->job->sync, vm->last_update);
+	if (r)
+		return r;
 
-	for (i = 0; i < num_deps; ++i) {
-		struct amdgpu_cs_post_dep *dep = &p->post_deps[i];
+	p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
 
-		dep->chain = NULL;
-		if (syncobj_deps[i].point) {
-			dep->chain = dma_fence_chain_alloc();
-			if (!dep->chain)
-				return -ENOMEM;
-		}
+	if (amdgpu_vm_debug) {
+		/* Invalidate all BOs to test for userspace bugs */
+		amdgpu_bo_list_for_each_entry(e, p->bo_list) {
+			struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
 
-		dep->syncobj = drm_syncobj_find(p->filp,
-						syncobj_deps[i].handle);
-		if (!dep->syncobj) {
-			dma_fence_chain_free(dep->chain);
-			return -EINVAL;
+			/* ignore duplicates */
+			if (!bo)
+				continue;
+
+			amdgpu_vm_bo_invalidate(adev, bo, false);
 		}
-		dep->point = syncobj_deps[i].point;
-		p->num_post_deps++;
 	}
 
 	return 0;
 }
 
-static int amdgpu_cs_dependencies(struct amdgpu_device *adev,
-				  struct amdgpu_cs_parser *p)
+static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 {
-	int i, r;
-
-	for (i = 0; i < p->nchunks; ++i) {
-		struct amdgpu_cs_chunk *chunk;
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_bo_list_entry *e;
+	int r;
 
-		chunk = &p->chunks[i];
+	list_for_each_entry(e, &p->validated, tv.head) {
+		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
+		struct dma_resv *resv = bo->tbo.base.resv;
+		enum amdgpu_sync_mode sync_mode;
 
-		switch (chunk->chunk_id) {
-		case AMDGPU_CHUNK_ID_DEPENDENCIES:
-		case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
-			r = amdgpu_cs_process_fence_dep(p, chunk);
-			if (r)
-				return r;
-			break;
-		case AMDGPU_CHUNK_ID_SYNCOBJ_IN:
-			r = amdgpu_cs_process_syncobj_in_dep(p, chunk);
-			if (r)
-				return r;
-			break;
-		case AMDGPU_CHUNK_ID_SYNCOBJ_OUT:
-			r = amdgpu_cs_process_syncobj_out_dep(p, chunk);
-			if (r)
-				return r;
-			break;
-		case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_WAIT:
-			r = amdgpu_cs_process_syncobj_timeline_in_dep(p, chunk);
-			if (r)
-				return r;
-			break;
-		case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_SIGNAL:
-			r = amdgpu_cs_process_syncobj_timeline_out_dep(p, chunk);
-			if (r)
-				return r;
-			break;
-		}
+		sync_mode = amdgpu_bo_explicit_sync(bo) ?
+			AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
+		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode,
+				     &fpriv->vm);
+		if (r)
+			return r;
 	}
-
 	return 0;
 }
 
@@ -1204,10 +1173,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 	job = p->job;
 	p->job = NULL;
 
-	r = drm_sched_job_init(&job->base, entity, &fpriv->vm);
-	if (r)
-		goto error_unlock;
-
 	drm_sched_job_arm(&job->base);
 
 	/* No memory allocation is allowed while holding the notifier lock.
@@ -1263,29 +1228,45 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 error_abort:
 	drm_sched_job_cleanup(&job->base);
 	mutex_unlock(&p->adev->notifier_lock);
-
-error_unlock:
 	amdgpu_job_free(job);
 	return r;
 }
 
-static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser)
+/* Cleanup the parser structure */
+static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser)
 {
-	int i;
+	unsigned i;
 
-	if (!trace_amdgpu_cs_enabled())
-		return;
+	for (i = 0; i < parser->num_post_deps; i++) {
+		drm_syncobj_put(parser->post_deps[i].syncobj);
+		kfree(parser->post_deps[i].chain);
+	}
+	kfree(parser->post_deps);
 
-	for (i = 0; i < parser->job->num_ibs; i++)
-		trace_amdgpu_cs(parser, i);
+	dma_fence_put(parser->fence);
+
+	if (parser->ctx) {
+		amdgpu_ctx_put(parser->ctx);
+	}
+	if (parser->bo_list)
+		amdgpu_bo_list_put(parser->bo_list);
+
+	for (i = 0; i < parser->nchunks; i++)
+		kvfree(parser->chunks[i].kdata);
+	kvfree(parser->chunks);
+	if (parser->job)
+		amdgpu_job_free(parser->job);
+	if (parser->uf_entry.tv.bo) {
+		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo);
+
+		amdgpu_bo_unref(&uf);
+	}
 }
 
 int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 {
 	struct amdgpu_device *adev = drm_to_adev(dev);
-	union drm_amdgpu_cs *cs = data;
-	struct amdgpu_cs_parser parser = {};
-	bool reserved_buffers = false;
+	struct amdgpu_cs_parser parser;
 	int r;
 
 	if (amdgpu_ras_intr_triggered())
@@ -1294,25 +1275,20 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 	if (!adev->accel_working)
 		return -EBUSY;
 
-	parser.adev = adev;
-	parser.filp = filp;
-
-	r = amdgpu_cs_parser_init(&parser, data);
+	r = amdgpu_cs_parser_init(&parser, adev, filp, data);
 	if (r) {
 		if (printk_ratelimit())
 			DRM_ERROR("Failed to initialize parser %d!\n", r);
-		goto out;
+		return r;
 	}
 
-	r = amdgpu_cs_ib_fill(adev, &parser);
+	r = amdgpu_cs_pass1(&parser, data);
 	if (r)
-		goto out;
+		goto error_fini;
 
-	r = amdgpu_cs_dependencies(adev, &parser);
-	if (r) {
-		DRM_ERROR("Failed in the dependencies handling %d!\n", r);
-		goto out;
-	}
+	r = amdgpu_cs_pass2(&parser);
+	if (r)
+		goto error_fini;
 
 	r = amdgpu_cs_parser_bos(&parser, data);
 	if (r) {
@@ -1320,21 +1296,35 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 			DRM_ERROR("Not enough memory for command submission!\n");
 		else if (r != -ERESTARTSYS && r != -EAGAIN)
 			DRM_ERROR("Failed to process the buffer list %d!\n", r);
-		goto out;
+		goto error_fini;
 	}
 
-	reserved_buffers = true;
+	r = amdgpu_cs_patch_ibs(&parser);
+	if (r)
+		goto error_backoff;
+
+	r = amdgpu_cs_vm_handling(&parser);
+	if (r)
+		goto error_backoff;
+
+	r = amdgpu_cs_sync_rings(&parser);
+	if (r)
+		goto error_backoff;
 
 	trace_amdgpu_cs_ibs(&parser);
 
-	r = amdgpu_cs_vm_handling(&parser);
+	r = amdgpu_cs_submit(&parser, data);
 	if (r)
-		goto out;
+		goto error_backoff;
 
-	r = amdgpu_cs_submit(&parser, cs);
-out:
-	amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
+	amdgpu_cs_parser_fini(&parser);
+	return 0;
+
+error_backoff:
+	ttm_eu_backoff_reservation(&parser.ticket, &parser.validated);
 
+error_fini:
+	amdgpu_cs_parser_fini(&parser);
 	return r;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
index 92d07816743e..30136eb50d2a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
@@ -51,8 +51,8 @@ struct amdgpu_cs_parser {
 	struct amdgpu_cs_chunk	*chunks;
 
 	/* scheduler job object */
-	struct amdgpu_job	*job;
 	struct drm_sched_entity	*entity;
+	struct amdgpu_job	*job;
 
 	/* buffer objects */
 	struct ww_acquire_ctx		ticket;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 04/10] drm/amdgpu: remove SRIOV and MCBP dependencies from the CS
  2022-03-03  8:22 Gang submit Christian König
                   ` (2 preceding siblings ...)
  2022-03-03  8:23 ` [PATCH 03/10] drm/amdgpu: cleanup and reorder amdgpu_cs.c Christian König
@ 2022-03-03  8:23 ` Christian König
  2022-03-03  8:23 ` [PATCH 05/10] drm/amdgpu: use job and ib structures directly in CS parsers Christian König
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 27+ messages in thread
From: Christian König @ 2022-03-03  8:23 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak; +Cc: Christian König

We should not have any different CS constraints based
on the execution environment.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 58ddc4241f04..20bf6134baca 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -320,8 +320,7 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
 		return -EINVAL;
 
 	if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
-	    chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT &&
-	    (amdgpu_mcbp || amdgpu_sriov_vf(p->adev))) {
+	    chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT) {
 		if (chunk_ib->flags & AMDGPU_IB_FLAG_CE)
 			(*ce_preempt)++;
 		else
@@ -1062,7 +1061,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 	if (r)
 		return r;
 
-	if (amdgpu_mcbp || amdgpu_sriov_vf(adev)) {
+	if (fpriv->csa_va) {
 		bo_va = fpriv->csa_va;
 		r = amdgpu_vm_bo_update(adev, bo_va, false, NULL);
 		if (r)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 05/10] drm/amdgpu: use job and ib structures directly in CS parsers
  2022-03-03  8:22 Gang submit Christian König
                   ` (3 preceding siblings ...)
  2022-03-03  8:23 ` [PATCH 04/10] drm/amdgpu: remove SRIOV and MCBP dependencies from the CS Christian König
@ 2022-03-03  8:23 ` Christian König
  2022-03-03 20:16   ` Andrey Grodzovsky
  2022-03-03  8:23 ` [PATCH 06/10] drm/amdgpu: properly imbed the IBs into the job Christian König
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 27+ messages in thread
From: Christian König @ 2022-03-03  8:23 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak; +Cc: Christian König

Instead of providing the IB index, provide the job and IB pointers directly to
the patch and parse functions for UVD and VCE.

Also move the set/get functions for IB values to the IB declarations.
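
To illustrate the effect at a call site, a before/after sketch distilled
from the hunks below (context abbreviated):

	/* before: every access indirects through the parser and an index */
	cmd = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->idx) >> 1;

	/* after: the accessor operates on the IB directly */
	cmd = amdgpu_ib_get_value(ctx->ib, ctx->idx) >> 1;

This also lets the UVD/VCE parsers drop their local
"&p->job->ibs[ib_idx]" lookups.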

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h   |  13 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  23 ++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c  |  36 ++++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h  |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c  | 116 ++++++++++++-----------
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h  |   7 +-
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c    |  13 +--
 drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c    |  25 ++---
 9 files changed, 129 insertions(+), 114 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 20bf6134baca..dd9e708fe97f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1024,12 +1024,14 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
 			memcpy(ib->ptr, kptr, job->ibs[i].length_dw * 4);
 			amdgpu_bo_kunmap(aobj);
 
-			r = amdgpu_ring_parse_cs(ring, p, i);
+			r = amdgpu_ring_parse_cs(ring, p, p->job,
+						 &p->job->ibs[i]);
 			if (r)
 				return r;
 		} else {
 			ib->ptr = (uint32_t *)kptr;
-			r = amdgpu_ring_patch_cs_in_place(ring, p, i);
+			r = amdgpu_ring_patch_cs_in_place(ring, p, p->job,
+							  &p->job->ibs[i]);
 			amdgpu_bo_kunmap(aobj);
 			if (r)
 				return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
index 30136eb50d2a..652b5593499f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
@@ -73,19 +73,6 @@ struct amdgpu_cs_parser {
 	struct amdgpu_cs_post_dep	*post_deps;
 };
 
-static inline u32 amdgpu_get_ib_value(struct amdgpu_cs_parser *p,
-				      uint32_t ib_idx, int idx)
-{
-	return p->job->ibs[ib_idx].ptr[idx];
-}
-
-static inline void amdgpu_set_ib_value(struct amdgpu_cs_parser *p,
-				       uint32_t ib_idx, int idx,
-				       uint32_t value)
-{
-	p->job->ibs[ib_idx].ptr[idx] = value;
-}
-
 int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
 			   uint64_t addr, struct amdgpu_bo **bo,
 			   struct amdgpu_bo_va_mapping **mapping);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 05e789fc7a9e..a8bed1b47899 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -163,8 +163,12 @@ struct amdgpu_ring_funcs {
 	u64 (*get_wptr)(struct amdgpu_ring *ring);
 	void (*set_wptr)(struct amdgpu_ring *ring);
 	/* validating and patching of IBs */
-	int (*parse_cs)(struct amdgpu_cs_parser *p, uint32_t ib_idx);
-	int (*patch_cs_in_place)(struct amdgpu_cs_parser *p, uint32_t ib_idx);
+	int (*parse_cs)(struct amdgpu_cs_parser *p,
+			struct amdgpu_job *job,
+			struct amdgpu_ib *ib);
+	int (*patch_cs_in_place)(struct amdgpu_cs_parser *p,
+				 struct amdgpu_job *job,
+				 struct amdgpu_ib *ib);
 	/* constants to calculate how many DW are needed for an emit */
 	unsigned emit_frame_size;
 	unsigned emit_ib_size;
@@ -264,8 +268,8 @@ struct amdgpu_ring {
 	atomic_t		*sched_score;
 };
 
-#define amdgpu_ring_parse_cs(r, p, ib) ((r)->funcs->parse_cs((p), (ib)))
-#define amdgpu_ring_patch_cs_in_place(r, p, ib) ((r)->funcs->patch_cs_in_place((p), (ib)))
+#define amdgpu_ring_parse_cs(r, p, job, ib) ((r)->funcs->parse_cs((p), (job), (ib)))
+#define amdgpu_ring_patch_cs_in_place(r, p, job, ib) ((r)->funcs->patch_cs_in_place((p), (job), (ib)))
 #define amdgpu_ring_test_ring(r) (r)->funcs->test_ring((r))
 #define amdgpu_ring_test_ib(r, t) (r)->funcs->test_ib((r), (t))
 #define amdgpu_ring_get_rptr(r) (r)->funcs->get_rptr((r))
@@ -364,6 +368,17 @@ int amdgpu_ring_test_helper(struct amdgpu_ring *ring);
 void amdgpu_debugfs_ring_init(struct amdgpu_device *adev,
 			      struct amdgpu_ring *ring);
 
+static inline u32 amdgpu_ib_get_value(struct amdgpu_ib *ib, int idx)
+{
+	return ib->ptr[idx];
+}
+
+static inline void amdgpu_ib_set_value(struct amdgpu_ib *ib, int idx,
+				       uint32_t value)
+{
+	ib->ptr[idx] = value;
+}
+
 int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 		  unsigned size,
 		  enum amdgpu_ib_pool_type pool,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index 4927c10bdc80..2ebd133a5222 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -99,7 +99,7 @@ struct amdgpu_uvd_cs_ctx {
 	unsigned reg, count;
 	unsigned data0, data1;
 	unsigned idx;
-	unsigned ib_idx;
+	struct amdgpu_ib *ib;
 
 	/* does the IB has a msg command */
 	bool has_msg_cmd;
@@ -558,8 +558,8 @@ static u64 amdgpu_uvd_get_addr_from_ctx(struct amdgpu_uvd_cs_ctx *ctx)
 	uint32_t lo, hi;
 	uint64_t addr;
 
-	lo = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->data0);
-	hi = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->data1);
+	lo = amdgpu_ib_get_value(ctx->ib, ctx->data0);
+	hi = amdgpu_ib_get_value(ctx->ib, ctx->data1);
 	addr = ((uint64_t)lo) | (((uint64_t)hi) << 32);
 
 	return addr;
@@ -590,7 +590,7 @@ static int amdgpu_uvd_cs_pass1(struct amdgpu_uvd_cs_ctx *ctx)
 
 	if (!ctx->parser->adev->uvd.address_64_bit) {
 		/* check if it's a message or feedback command */
-		cmd = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->idx) >> 1;
+		cmd = amdgpu_ib_get_value(ctx->ib, ctx->idx) >> 1;
 		if (cmd == 0x0 || cmd == 0x3) {
 			/* yes, force it into VRAM */
 			uint32_t domain = AMDGPU_GEM_DOMAIN_VRAM;
@@ -926,12 +926,10 @@ static int amdgpu_uvd_cs_pass2(struct amdgpu_uvd_cs_ctx *ctx)
 	addr -= mapping->start * AMDGPU_GPU_PAGE_SIZE;
 	start += addr;
 
-	amdgpu_set_ib_value(ctx->parser, ctx->ib_idx, ctx->data0,
-			    lower_32_bits(start));
-	amdgpu_set_ib_value(ctx->parser, ctx->ib_idx, ctx->data1,
-			    upper_32_bits(start));
+	amdgpu_ib_set_value(ctx->ib, ctx->data0, lower_32_bits(start));
+	amdgpu_ib_set_value(ctx->ib, ctx->data1, upper_32_bits(start));
 
-	cmd = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->idx) >> 1;
+	cmd = amdgpu_ib_get_value(ctx->ib, ctx->idx) >> 1;
 	if (cmd < 0x4) {
 		if ((end - start) < ctx->buf_sizes[cmd]) {
 			DRM_ERROR("buffer (%d) to small (%d / %d)!\n", cmd,
@@ -991,14 +989,13 @@ static int amdgpu_uvd_cs_pass2(struct amdgpu_uvd_cs_ctx *ctx)
 static int amdgpu_uvd_cs_reg(struct amdgpu_uvd_cs_ctx *ctx,
 			     int (*cb)(struct amdgpu_uvd_cs_ctx *ctx))
 {
-	struct amdgpu_ib *ib = &ctx->parser->job->ibs[ctx->ib_idx];
 	int i, r;
 
 	ctx->idx++;
 	for (i = 0; i <= ctx->count; ++i) {
 		unsigned reg = ctx->reg + i;
 
-		if (ctx->idx >= ib->length_dw) {
+		if (ctx->idx >= ctx->ib->length_dw) {
 			DRM_ERROR("Register command after end of CS!\n");
 			return -EINVAL;
 		}
@@ -1038,11 +1035,10 @@ static int amdgpu_uvd_cs_reg(struct amdgpu_uvd_cs_ctx *ctx,
 static int amdgpu_uvd_cs_packets(struct amdgpu_uvd_cs_ctx *ctx,
 				 int (*cb)(struct amdgpu_uvd_cs_ctx *ctx))
 {
-	struct amdgpu_ib *ib = &ctx->parser->job->ibs[ctx->ib_idx];
 	int r;
 
-	for (ctx->idx = 0 ; ctx->idx < ib->length_dw; ) {
-		uint32_t cmd = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->idx);
+	for (ctx->idx = 0 ; ctx->idx < ctx->ib->length_dw; ) {
+		uint32_t cmd = amdgpu_ib_get_value(ctx->ib, ctx->idx);
 		unsigned type = CP_PACKET_GET_TYPE(cmd);
 		switch (type) {
 		case PACKET_TYPE0:
@@ -1067,11 +1063,14 @@ static int amdgpu_uvd_cs_packets(struct amdgpu_uvd_cs_ctx *ctx,
  * amdgpu_uvd_ring_parse_cs - UVD command submission parser
  *
  * @parser: Command submission parser context
- * @ib_idx: Which indirect buffer to use
+ * @job: the job to parse
+ * @ib: the IB to patch
  *
  * Parse the command stream, patch in addresses as necessary.
  */
-int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser, uint32_t ib_idx)
+int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser,
+			     struct amdgpu_job *job,
+			     struct amdgpu_ib *ib)
 {
 	struct amdgpu_uvd_cs_ctx ctx = {};
 	unsigned buf_sizes[] = {
@@ -1081,10 +1080,9 @@ int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser, uint32_t ib_idx)
 		[0x00000003]	=	2048,
 		[0x00000004]	=	0xFFFFFFFF,
 	};
-	struct amdgpu_ib *ib = &parser->job->ibs[ib_idx];
 	int r;
 
-	parser->job->vm = NULL;
+	job->vm = NULL;
 	ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo);
 
 	if (ib->length_dw % 16) {
@@ -1095,7 +1093,7 @@ int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser, uint32_t ib_idx)
 
 	ctx.parser = parser;
 	ctx.buf_sizes = buf_sizes;
-	ctx.ib_idx = ib_idx;
+	ctx.ib = ib;
 
 	/* first round only required on chips without UVD 64 bit address support */
 	if (!parser->adev->uvd.address_64_bit) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
index 76ac9699885d..9f89bb7cd60b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
@@ -82,7 +82,9 @@ int amdgpu_uvd_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
 			       bool direct, struct dma_fence **fence);
 void amdgpu_uvd_free_handles(struct amdgpu_device *adev,
 			     struct drm_file *filp);
-int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser, uint32_t ib_idx);
+int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser,
+			     struct amdgpu_job *job,
+			     struct amdgpu_ib *ib);
 void amdgpu_uvd_ring_begin_use(struct amdgpu_ring *ring);
 void amdgpu_uvd_ring_end_use(struct amdgpu_ring *ring);
 int amdgpu_uvd_ring_test_ib(struct amdgpu_ring *ring, long timeout);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index 6179230b6c6e..02cb3a12dd76 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -588,8 +588,7 @@ static int amdgpu_vce_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
 /**
  * amdgpu_vce_validate_bo - make sure not to cross 4GB boundary
  *
- * @p: parser context
- * @ib_idx: indirect buffer to use
+ * @ib: indirect buffer to use
  * @lo: address of lower dword
  * @hi: address of higher dword
  * @size: minimum size
@@ -597,8 +596,9 @@ static int amdgpu_vce_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
  *
  * Make sure that no BO cross a 4GB boundary.
  */
-static int amdgpu_vce_validate_bo(struct amdgpu_cs_parser *p, uint32_t ib_idx,
-				  int lo, int hi, unsigned size, int32_t index)
+static int amdgpu_vce_validate_bo(struct amdgpu_cs_parser *p,
+				  struct amdgpu_ib *ib, int lo, int hi,
+				  unsigned size, int32_t index)
 {
 	int64_t offset = ((uint64_t)size) * ((int64_t)index);
 	struct ttm_operation_ctx ctx = { false, false };
@@ -608,8 +608,8 @@ static int amdgpu_vce_validate_bo(struct amdgpu_cs_parser *p, uint32_t ib_idx,
 	uint64_t addr;
 	int r;
 
-	addr = ((uint64_t)amdgpu_get_ib_value(p, ib_idx, lo)) |
-	       ((uint64_t)amdgpu_get_ib_value(p, ib_idx, hi)) << 32;
+	addr = ((uint64_t)amdgpu_ib_get_value(ib, lo)) |
+	       ((uint64_t)amdgpu_ib_get_value(ib, hi)) << 32;
 	if (index >= 0) {
 		addr += offset;
 		fpfn = PAGE_ALIGN(offset) >> PAGE_SHIFT;
@@ -639,7 +639,7 @@ static int amdgpu_vce_validate_bo(struct amdgpu_cs_parser *p, uint32_t ib_idx,
  * amdgpu_vce_cs_reloc - command submission relocation
  *
  * @p: parser context
- * @ib_idx: indirect buffer to use
+ * @ib: indirect buffer to use
  * @lo: address of lower dword
  * @hi: address of higher dword
  * @size: minimum size
@@ -647,7 +647,7 @@ static int amdgpu_vce_validate_bo(struct amdgpu_cs_parser *p, uint32_t ib_idx,
  *
  * Patch relocation inside command stream with real buffer address
  */
-static int amdgpu_vce_cs_reloc(struct amdgpu_cs_parser *p, uint32_t ib_idx,
+static int amdgpu_vce_cs_reloc(struct amdgpu_cs_parser *p, struct amdgpu_ib *ib,
 			       int lo, int hi, unsigned size, uint32_t index)
 {
 	struct amdgpu_bo_va_mapping *mapping;
@@ -658,8 +658,8 @@ static int amdgpu_vce_cs_reloc(struct amdgpu_cs_parser *p, uint32_t ib_idx,
 	if (index == 0xffffffff)
 		index = 0;
 
-	addr = ((uint64_t)amdgpu_get_ib_value(p, ib_idx, lo)) |
-	       ((uint64_t)amdgpu_get_ib_value(p, ib_idx, hi)) << 32;
+	addr = ((uint64_t)amdgpu_ib_get_value(ib, lo)) |
+	       ((uint64_t)amdgpu_ib_get_value(ib, hi)) << 32;
 	addr += ((uint64_t)size) * ((uint64_t)index);
 
 	r = amdgpu_cs_find_mapping(p, addr, &bo, &mapping);
@@ -680,8 +680,8 @@ static int amdgpu_vce_cs_reloc(struct amdgpu_cs_parser *p, uint32_t ib_idx,
 	addr += amdgpu_bo_gpu_offset(bo);
 	addr -= ((uint64_t)size) * ((uint64_t)index);
 
-	amdgpu_set_ib_value(p, ib_idx, lo, lower_32_bits(addr));
-	amdgpu_set_ib_value(p, ib_idx, hi, upper_32_bits(addr));
+	amdgpu_ib_set_value(ib, lo, lower_32_bits(addr));
+	amdgpu_ib_set_value(ib, hi, upper_32_bits(addr));
 
 	return 0;
 }
@@ -730,11 +730,13 @@ static int amdgpu_vce_validate_handle(struct amdgpu_cs_parser *p,
  * amdgpu_vce_ring_parse_cs - parse and validate the command stream
  *
  * @p: parser context
- * @ib_idx: indirect buffer to use
+ * @job: the job to parse
+ * @ib: the IB to patch
  */
-int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
+int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p,
+			     struct amdgpu_job *job,
+			     struct amdgpu_ib *ib)
 {
-	struct amdgpu_ib *ib = &p->job->ibs[ib_idx];
 	unsigned fb_idx = 0, bs_idx = 0;
 	int session_idx = -1;
 	uint32_t destroyed = 0;
@@ -745,12 +747,12 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
 	unsigned idx;
 	int i, r = 0;
 
-	p->job->vm = NULL;
+	job->vm = NULL;
 	ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo);
 
 	for (idx = 0; idx < ib->length_dw;) {
-		uint32_t len = amdgpu_get_ib_value(p, ib_idx, idx);
-		uint32_t cmd = amdgpu_get_ib_value(p, ib_idx, idx + 1);
+		uint32_t len = amdgpu_ib_get_value(ib, idx);
+		uint32_t cmd = amdgpu_ib_get_value(ib, idx + 1);
 
 		if ((len < 8) || (len & 3)) {
 			DRM_ERROR("invalid VCE command length (%d)!\n", len);
@@ -760,52 +762,52 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
 
 		switch (cmd) {
 		case 0x00000002: /* task info */
-			fb_idx = amdgpu_get_ib_value(p, ib_idx, idx + 6);
-			bs_idx = amdgpu_get_ib_value(p, ib_idx, idx + 7);
+			fb_idx = amdgpu_ib_get_value(ib, idx + 6);
+			bs_idx = amdgpu_ib_get_value(ib, idx + 7);
 			break;
 
 		case 0x03000001: /* encode */
-			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 10,
-						   idx + 9, 0, 0);
+			r = amdgpu_vce_validate_bo(p, ib, idx + 10, idx + 9,
+						   0, 0);
 			if (r)
 				goto out;
 
-			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 12,
-						   idx + 11, 0, 0);
+			r = amdgpu_vce_validate_bo(p, ib, idx + 12, idx + 11,
+						   0, 0);
 			if (r)
 				goto out;
 			break;
 
 		case 0x05000001: /* context buffer */
-			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 3,
-						   idx + 2, 0, 0);
+			r = amdgpu_vce_validate_bo(p, ib, idx + 3, idx + 2,
+						   0, 0);
 			if (r)
 				goto out;
 			break;
 
 		case 0x05000004: /* video bitstream buffer */
-			tmp = amdgpu_get_ib_value(p, ib_idx, idx + 4);
-			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 3, idx + 2,
+			tmp = amdgpu_ib_get_value(ib, idx + 4);
+			r = amdgpu_vce_validate_bo(p, ib, idx + 3, idx + 2,
 						   tmp, bs_idx);
 			if (r)
 				goto out;
 			break;
 
 		case 0x05000005: /* feedback buffer */
-			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 3, idx + 2,
+			r = amdgpu_vce_validate_bo(p, ib, idx + 3, idx + 2,
 						   4096, fb_idx);
 			if (r)
 				goto out;
 			break;
 
 		case 0x0500000d: /* MV buffer */
-			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 3,
-							idx + 2, 0, 0);
+			r = amdgpu_vce_validate_bo(p, ib, idx + 3, idx + 2,
+						   0, 0);
 			if (r)
 				goto out;
 
-			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 8,
-							idx + 7, 0, 0);
+			r = amdgpu_vce_validate_bo(p, ib, idx + 8, idx + 7,
+						   0, 0);
 			if (r)
 				goto out;
 			break;
@@ -815,12 +817,12 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
 	}
 
 	for (idx = 0; idx < ib->length_dw;) {
-		uint32_t len = amdgpu_get_ib_value(p, ib_idx, idx);
-		uint32_t cmd = amdgpu_get_ib_value(p, ib_idx, idx + 1);
+		uint32_t len = amdgpu_ib_get_value(ib, idx);
+		uint32_t cmd = amdgpu_ib_get_value(ib, idx + 1);
 
 		switch (cmd) {
 		case 0x00000001: /* session */
-			handle = amdgpu_get_ib_value(p, ib_idx, idx + 2);
+			handle = amdgpu_ib_get_value(ib, idx + 2);
 			session_idx = amdgpu_vce_validate_handle(p, handle,
 								 &allocated);
 			if (session_idx < 0) {
@@ -831,8 +833,8 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
 			break;
 
 		case 0x00000002: /* task info */
-			fb_idx = amdgpu_get_ib_value(p, ib_idx, idx + 6);
-			bs_idx = amdgpu_get_ib_value(p, ib_idx, idx + 7);
+			fb_idx = amdgpu_ib_get_value(ib, idx + 6);
+			bs_idx = amdgpu_ib_get_value(ib, idx + 7);
 			break;
 
 		case 0x01000001: /* create */
@@ -847,8 +849,8 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
 				goto out;
 			}
 
-			*size = amdgpu_get_ib_value(p, ib_idx, idx + 8) *
-				amdgpu_get_ib_value(p, ib_idx, idx + 10) *
+			*size = amdgpu_ib_get_value(ib, idx + 8) *
+				amdgpu_ib_get_value(ib, idx + 10) *
 				8 * 3 / 2;
 			break;
 
@@ -877,12 +879,12 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
 			break;
 
 		case 0x03000001: /* encode */
-			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 10, idx + 9,
+			r = amdgpu_vce_cs_reloc(p, ib, idx + 10, idx + 9,
 						*size, 0);
 			if (r)
 				goto out;
 
-			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 12, idx + 11,
+			r = amdgpu_vce_cs_reloc(p, ib, idx + 12, idx + 11,
 						*size / 3, 0);
 			if (r)
 				goto out;
@@ -893,35 +895,35 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
 			break;
 
 		case 0x05000001: /* context buffer */
-			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 3, idx + 2,
+			r = amdgpu_vce_cs_reloc(p, ib, idx + 3, idx + 2,
 						*size * 2, 0);
 			if (r)
 				goto out;
 			break;
 
 		case 0x05000004: /* video bitstream buffer */
-			tmp = amdgpu_get_ib_value(p, ib_idx, idx + 4);
-			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 3, idx + 2,
+			tmp = amdgpu_ib_get_value(ib, idx + 4);
+			r = amdgpu_vce_cs_reloc(p, ib, idx + 3, idx + 2,
 						tmp, bs_idx);
 			if (r)
 				goto out;
 			break;
 
 		case 0x05000005: /* feedback buffer */
-			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 3, idx + 2,
+			r = amdgpu_vce_cs_reloc(p, ib, idx + 3, idx + 2,
 						4096, fb_idx);
 			if (r)
 				goto out;
 			break;
 
 		case 0x0500000d: /* MV buffer */
-			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 3,
-							idx + 2, *size, 0);
+			r = amdgpu_vce_cs_reloc(p, ib, idx + 3,
+						idx + 2, *size, 0);
 			if (r)
 				goto out;
 
-			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 8,
-							idx + 7, *size / 12, 0);
+			r = amdgpu_vce_cs_reloc(p, ib, idx + 8,
+						idx + 7, *size / 12, 0);
 			if (r)
 				goto out;
 			break;
@@ -966,11 +968,13 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
  * amdgpu_vce_ring_parse_cs_vm - parse the command stream in VM mode
  *
  * @p: parser context
- * @ib_idx: indirect buffer to use
+ * @job: the job to parse
+ * @ib: the IB to patch
  */
-int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p, uint32_t ib_idx)
+int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p,
+				struct amdgpu_job *job,
+				struct amdgpu_ib *ib)
 {
-	struct amdgpu_ib *ib = &p->job->ibs[ib_idx];
 	int session_idx = -1;
 	uint32_t destroyed = 0;
 	uint32_t created = 0;
@@ -979,8 +983,8 @@ int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p, uint32_t ib_idx)
 	int i, r = 0, idx = 0;
 
 	while (idx < ib->length_dw) {
-		uint32_t len = amdgpu_get_ib_value(p, ib_idx, idx);
-		uint32_t cmd = amdgpu_get_ib_value(p, ib_idx, idx + 1);
+		uint32_t len = amdgpu_ib_get_value(ib, idx);
+		uint32_t cmd = amdgpu_ib_get_value(ib, idx + 1);
 
 		if ((len < 8) || (len & 3)) {
 			DRM_ERROR("invalid VCE command length (%d)!\n", len);
@@ -990,7 +994,7 @@ int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p, uint32_t ib_idx)
 
 		switch (cmd) {
 		case 0x00000001: /* session */
-			handle = amdgpu_get_ib_value(p, ib_idx, idx + 2);
+			handle = amdgpu_ib_get_value(ib, idx + 2);
 			session_idx = amdgpu_vce_validate_handle(p, handle,
 								 &allocated);
 			if (session_idx < 0) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
index be4a6e773c5b..ea680fc9a6c3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
@@ -59,8 +59,11 @@ int amdgpu_vce_entity_init(struct amdgpu_device *adev);
 int amdgpu_vce_suspend(struct amdgpu_device *adev);
 int amdgpu_vce_resume(struct amdgpu_device *adev);
 void amdgpu_vce_free_handles(struct amdgpu_device *adev, struct drm_file *filp);
-int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx);
-int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p, uint32_t ib_idx);
+int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+			     struct amdgpu_ib *ib);
+int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p,
+				struct amdgpu_job *job,
+				struct amdgpu_ib *ib);
 void amdgpu_vce_ring_emit_ib(struct amdgpu_ring *ring, struct amdgpu_job *job,
 				struct amdgpu_ib *ib, uint32_t flags);
 void amdgpu_vce_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
index 7afa660e341c..2f15b8e0f7d7 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
@@ -1276,14 +1276,15 @@ static int uvd_v7_0_ring_test_ring(struct amdgpu_ring *ring)
  * uvd_v7_0_ring_patch_cs_in_place - Patch the IB for command submission.
  *
  * @p: the CS parser with the IBs
- * @ib_idx: which IB to patch
+ * @job: which job this ib is in
+ * @ib: which IB to patch
  *
  */
 static int uvd_v7_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
-					   uint32_t ib_idx)
+					   struct amdgpu_job *job,
+					   struct amdgpu_ib *ib)
 {
-	struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
-	struct amdgpu_ib *ib = &p->job->ibs[ib_idx];
+	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
 	unsigned i;
 
 	/* No patching necessary for the first instance */
@@ -1291,12 +1292,12 @@ static int uvd_v7_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
 		return 0;
 
 	for (i = 0; i < ib->length_dw; i += 2) {
-		uint32_t reg = amdgpu_get_ib_value(p, ib_idx, i);
+		uint32_t reg = amdgpu_ib_get_value(ib, i);
 
 		reg -= p->adev->reg_offset[UVD_HWIP][0][1];
 		reg += p->adev->reg_offset[UVD_HWIP][1][1];
 
-		amdgpu_set_ib_value(p, ib_idx, i, reg);
+		amdgpu_ib_set_value(ib, i, reg);
 	}
 	return 0;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
index 2bb75fdb9571..5f9ad129464f 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
@@ -1807,21 +1807,23 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_sw_ring_vm_funcs = {
 	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
 };
 
-static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p)
+static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
+				struct amdgpu_job *job)
 {
 	struct drm_gpu_scheduler **scheds;
 
 	/* The create msg must be in the first IB submitted */
-	if (atomic_read(&p->entity->fence_seq))
+	if (atomic_read(&job->base.entity->fence_seq))
 		return -EINVAL;
 
 	scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_DEC]
 		[AMDGPU_RING_PRIO_DEFAULT].sched;
-	drm_sched_entity_modify_sched(p->entity, scheds, 1);
+	drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
 	return 0;
 }
 
-static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr)
+static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+			    uint64_t addr)
 {
 	struct ttm_operation_ctx ctx = { false, false };
 	struct amdgpu_bo_va_mapping *map;
@@ -1892,7 +1894,7 @@ static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr)
 		if (create[0] == 0x7 || create[0] == 0x10 || create[0] == 0x11)
 			continue;
 
-		r = vcn_v3_0_limit_sched(p);
+		r = vcn_v3_0_limit_sched(p, job);
 		if (r)
 			goto out;
 	}
@@ -1903,10 +1905,10 @@ static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr)
 }
 
 static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
-					   uint32_t ib_idx)
+					   struct amdgpu_job *job,
+					   struct amdgpu_ib *ib)
 {
-	struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
-	struct amdgpu_ib *ib = &p->job->ibs[ib_idx];
+	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
 	uint32_t msg_lo = 0, msg_hi = 0;
 	unsigned i;
 	int r;
@@ -1916,8 +1918,8 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
 		return 0;
 
 	for (i = 0; i < ib->length_dw; i += 2) {
-		uint32_t reg = amdgpu_get_ib_value(p, ib_idx, i);
-		uint32_t val = amdgpu_get_ib_value(p, ib_idx, i + 1);
+		uint32_t reg = amdgpu_ib_get_value(ib, i);
+		uint32_t val = amdgpu_ib_get_value(ib, i + 1);
 
 		if (reg == PACKET0(p->adev->vcn.internal.data0, 0)) {
 			msg_lo = val;
@@ -1925,7 +1927,8 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
 			msg_hi = val;
 		} else if (reg == PACKET0(p->adev->vcn.internal.cmd, 0) &&
 			   val == 0) {
-			r = vcn_v3_0_dec_msg(p, ((u64)msg_hi) << 32 | msg_lo);
+			r = vcn_v3_0_dec_msg(p, job,
+					     ((u64)msg_hi) << 32 | msg_lo);
 			if (r)
 				return r;
 		}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 06/10] drm/amdgpu: properly imbed the IBs into the job
  2022-03-03  8:22 Gang submit Christian König
                   ` (4 preceding siblings ...)
  2022-03-03  8:23 ` [PATCH 05/10] drm/amdgpu: use job and ib structures directly in CS parsers Christian König
@ 2022-03-03  8:23 ` Christian König
  2022-03-03 20:25   ` Andrey Grodzovsky
  2022-03-03  8:23 ` [PATCH 07/10] drm/amdgpu: move setting the job resources Christian König
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 27+ messages in thread
From: Christian König @ 2022-03-03  8:23 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak; +Cc: Christian König

We now have standard macros for that.
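
For readers unfamiliar with the pattern: struct_size() from
<linux/overflow.h> computes the allocation size of a struct that ends in
a flexible array member, with overflow checking. A minimal sketch (the
example_* names are illustrative, not the driver's):

	/* needs <linux/overflow.h> and <linux/slab.h> */
	struct example_ib {
		unsigned int	length_dw;
	};

	struct example_job {
		unsigned int	num_ibs;
		struct example_ib ibs[];	/* must be the last member */
	};

	static struct example_job *example_job_alloc(unsigned int num_ibs)
	{
		struct example_job *job;

		/* sizeof(*job) + num_ibs * sizeof(job->ibs[0]), with
		 * overflow checking, replacing the open-coded addition
		 */
		job = kzalloc(struct_size(job, ibs, num_ibs), GFP_KERNEL);
		if (!job)
			return NULL;
		job->num_ibs = num_ibs;
		return job;
	}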

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 7 +------
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 6 ++++--
 2 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 38c9fd7b7ad4..e4ca62225996 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -78,14 +78,10 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
 		     struct amdgpu_job **job, struct amdgpu_vm *vm)
 {
-	size_t size = sizeof(struct amdgpu_job);
-
 	if (num_ibs == 0)
 		return -EINVAL;
 
-	size += sizeof(struct amdgpu_ib) * num_ibs;
-
-	*job = kzalloc(size, GFP_KERNEL);
+	*job = kzalloc(struct_size(*job, ibs, num_ibs), GFP_KERNEL);
 	if (!*job)
 		return -ENOMEM;
 
@@ -95,7 +91,6 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
 	 */
 	(*job)->base.sched = &adev->rings[0]->sched;
 	(*job)->vm = vm;
-	(*job)->ibs = (void *)&(*job)[1];
 	(*job)->num_ibs = num_ibs;
 
 	amdgpu_sync_create(&(*job)->sync);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index 6d704772ff42..d599c0540b46 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -25,6 +25,7 @@
 
 #include <drm/gpu_scheduler.h>
 #include "amdgpu_sync.h"
+#include "amdgpu_ring.h"
 
 /* bit set means command submit involves a preamble IB */
 #define AMDGPU_PREAMBLE_IB_PRESENT          (1 << 0)
@@ -48,12 +49,10 @@ struct amdgpu_job {
 	struct amdgpu_vm	*vm;
 	struct amdgpu_sync	sync;
 	struct amdgpu_sync	sched_sync;
-	struct amdgpu_ib	*ibs;
 	struct dma_fence	hw_fence;
 	struct dma_fence	*external_hw_fence;
 	uint32_t		preamble_status;
 	uint32_t                preemption_status;
-	uint32_t		num_ibs;
 	bool                    vm_needs_flush;
 	uint64_t		vm_pd_addr;
 	unsigned		vmid;
@@ -69,6 +68,9 @@ struct amdgpu_job {
 
 	/* job_run_counter >= 1 means a resubmit job */
 	uint32_t		job_run_counter;
+
+	uint32_t		num_ibs;
+	struct amdgpu_ib	ibs[];
 };
 
 int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 07/10] drm/amdgpu: move setting the job resources
  2022-03-03  8:22 Gang submit Christian König
                   ` (5 preceding siblings ...)
  2022-03-03  8:23 ` [PATCH 06/10] drm/amdgpu: properly imbed the IBs into the job Christian König
@ 2022-03-03  8:23 ` Christian König
  2022-03-03  8:23 ` [PATCH 08/10] drm/amdgpu: initialize the vmid_wait with the stub fence Christian König
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 27+ messages in thread
From: Christian König @ 2022-03-03  8:23 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak; +Cc: Christian König

Move setting the job resources into amdgpu_job.c

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 21 ++-------------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 17 +++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  2 ++
 3 files changed, 21 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index dd9e708fe97f..c6541f7b8f54 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -825,9 +825,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	struct amdgpu_vm *vm = &fpriv->vm;
 	struct amdgpu_bo_list_entry *e;
 	struct list_head duplicates;
-	struct amdgpu_bo *gds;
-	struct amdgpu_bo *gws;
-	struct amdgpu_bo *oa;
 	int r;
 
 	INIT_LIST_HEAD(&p->validated);
@@ -941,22 +938,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
 				     p->bytes_moved_vis);
 
-	gds = p->bo_list->gds_obj;
-	gws = p->bo_list->gws_obj;
-	oa = p->bo_list->oa_obj;
-
-	if (gds) {
-		p->job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
-		p->job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
-	}
-	if (gws) {
-		p->job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT;
-		p->job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT;
-	}
-	if (oa) {
-		p->job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT;
-		p->job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT;
-	}
+	amdgpu_job_set_resources(p->job, p->bo_list->gds_obj,
+				 p->bo_list->gws_obj, p->bo_list->oa_obj);
 
 	if (!r && p->uf_entry.tv.bo) {
 		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index e4ca62225996..e07ceae36a5c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -118,6 +118,23 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
 	return r;
 }
 
+void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
+			      struct amdgpu_bo *gws, struct amdgpu_bo *oa)
+{
+	if (gds) {
+		job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
+		job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
+	}
+	if (gws) {
+		job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT;
+		job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT;
+	}
+	if (oa) {
+		job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT;
+		job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT;
+	}
+}
+
 void amdgpu_job_free_resources(struct amdgpu_job *job)
 {
 	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index d599c0540b46..0bab8fe0d419 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -77,6 +77,8 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
 		     struct amdgpu_job **job, struct amdgpu_vm *vm);
 int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
 		enum amdgpu_ib_pool_type pool, struct amdgpu_job **job);
+void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
+			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
 void amdgpu_job_free_resources(struct amdgpu_job *job);
 void amdgpu_job_free(struct amdgpu_job *job);
 int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 08/10] drm/amdgpu: initialize the vmid_wait with the stub fence
  2022-03-03  8:22 Gang submit Christian König
                   ` (6 preceding siblings ...)
  2022-03-03  8:23 ` [PATCH 07/10] drm/amdgpu: move setting the job resources Christian König
@ 2022-03-03  8:23 ` Christian König
  2022-03-03 20:31   ` Andrey Grodzovsky
  2022-03-03  8:23 ` [PATCH 09/10] drm/amdgpu: add gang submit backend Christian König
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 27+ messages in thread
From: Christian König @ 2022-03-03  8:23 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak; +Cc: Christian König

This way we don't need to check for NULL any more.
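
dma_fence_get_stub() returns a reference to a global, already signaled
fence, so dma_fence_is_signaled() on it is trivially true. A short
sketch of the pattern, distilled from the hunks below:

	/* at init time: never leave the pointer NULL */
	ring->vmid_wait = dma_fence_get_stub();	/* always signaled */

	/* at use time: the NULL check is gone */
	if (!dma_fence_is_signaled(ring->vmid_wait))
		return amdgpu_sync_fence(sync, ring->vmid_wait);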

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c  | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index ddf46802b1ff..4ba4b54092f1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -188,7 +188,7 @@ static int amdgpu_vmid_grab_idle(struct amdgpu_vm *vm,
 	unsigned i;
 	int r;
 
-	if (ring->vmid_wait && !dma_fence_is_signaled(ring->vmid_wait))
+	if (!dma_fence_is_signaled(ring->vmid_wait))
 		return amdgpu_sync_fence(sync, ring->vmid_wait);
 
 	fences = kmalloc_array(id_mgr->num_ids, sizeof(void *), GFP_KERNEL);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
index 35bcb6dc1816..7f33ae87cb41 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
@@ -193,6 +193,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
 		adev->rings[ring->idx] = ring;
 		ring->num_hw_submission = sched_hw_submission;
 		ring->sched_score = sched_score;
+		ring->vmid_wait = dma_fence_get_stub();
 		r = amdgpu_fence_driver_init_ring(ring);
 		if (r)
 			return r;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 09/10] drm/amdgpu: add gang submit backend
  2022-03-03  8:22 Gang submit Christian König
                   ` (7 preceding siblings ...)
  2022-03-03  8:23 ` [PATCH 08/10] drm/amdgpu: initialize the vmid_wait with the stub fence Christian König
@ 2022-03-03  8:23 ` Christian König
  2022-03-04 17:10   ` Andrey Grodzovsky
  2022-03-03  8:23 ` [PATCH 10/10] drm/amdgpu: add gang submit frontend Christian König
  2022-09-06  1:43 ` Gang submit Liu, Monk
  10 siblings, 1 reply; 27+ messages in thread
From: Christian König @ 2022-03-03  8:23 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak; +Cc: Christian König

Allows submitting jobs as a gang that needs to run on multiple
engines at the same time.

The basic idea is that we have a global gang submit fence representing when the
gang leader is finally pushed to run on the hardware.

Jobs submitted as a gang are never re-submitted in case of a GPU reset, since
that won't work and would just deadlock the hardware again immediately.
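
For readers unfamiliar with the lockless scheme, a simplified sketch of
the gang fence update (this mirrors amdgpu_device_switch_gang() below
with the driver specifics stripped; slot stands in for
&adev->gang_submit):

	static struct dma_fence *switch_gang(struct dma_fence __rcu **slot,
					     struct dma_fence *gang)
	{
		struct dma_fence *old = NULL;

		do {
			dma_fence_put(old);
			/* take a reference to the installed gang fence */
			old = dma_fence_get_rcu_safe(slot);

			if (old == gang)
				break;

			/* previous gang still pending: caller must wait */
			if (!dma_fence_is_signaled(old))
				return old;

			/* retry if someone raced us with a new fence */
		} while (cmpxchg((struct dma_fence __force **)slot,
				 old, gang) != old);

		dma_fence_put(old);
		return NULL;
	}

Returning the still-pending old fence instead of installing the new one
is what lets the scheduler dependency callback stall a new gang until
the previous one is completely pushed to the hardware.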

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h        |  3 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 34 ++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c    | 28 ++++++++++++++++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h    |  3 ++
 4 files changed, 66 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 7f447ed7a67f..a664d43d7502 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -852,6 +852,7 @@ struct amdgpu_device {
 	u64				fence_context;
 	unsigned			num_rings;
 	struct amdgpu_ring		*rings[AMDGPU_MAX_RINGS];
+	struct dma_fence __rcu		*gang_submit;
 	bool				ib_pool_ready;
 	struct amdgpu_sa_manager	ib_pools[AMDGPU_IB_POOL_MAX];
 	struct amdgpu_sched		gpu_sched[AMDGPU_HW_IP_NUM][AMDGPU_RING_PRIO_MAX];
@@ -1233,6 +1234,8 @@ void amdgpu_device_invalidate_hdp(struct amdgpu_device *adev,
 		struct amdgpu_ring *ring);
 
 void amdgpu_device_halt(struct amdgpu_device *adev);
+struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
+					    struct dma_fence *gang);
 
 /* atpx handler */
 #if defined(CONFIG_VGA_SWITCHEROO)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index d78141e2c509..a116b8c08827 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3512,6 +3512,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	adev->gmc.gart_size = 512 * 1024 * 1024;
 	adev->accel_working = false;
 	adev->num_rings = 0;
+	RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
 	adev->mman.buffer_funcs = NULL;
 	adev->mman.buffer_funcs_ring = NULL;
 	adev->vm_manager.vm_pte_funcs = NULL;
@@ -3989,6 +3990,7 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
 	release_firmware(adev->firmware.gpu_info_fw);
 	adev->firmware.gpu_info_fw = NULL;
 	adev->accel_working = false;
+	dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
 
 	amdgpu_reset_fini(adev);
 
@@ -5744,3 +5746,35 @@ void amdgpu_device_halt(struct amdgpu_device *adev)
 	pci_disable_device(pdev);
 	pci_wait_for_pending_transaction(pdev);
 }
+
+/**
+ * amdgpu_device_switch_gang - switch to a new gang
+ * @adev: amdgpu_device pointer
+ * @gang: the gang to switch to
+ *
+ * Try to switch to a new gang or return a reference to the current gang if that
+ * isn't possible.
+ * Returns: Either NULL if we switched correctly or a reference to the existing
+ * gang.
+ */
+struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
+					    struct dma_fence *gang)
+{
+	struct dma_fence *old = NULL;
+
+	do {
+		dma_fence_put(old);
+		old = dma_fence_get_rcu_safe(&adev->gang_submit);
+
+		if (old == gang)
+			break;
+
+		if (!dma_fence_is_signaled(old))
+			return old;
+
+	} while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
+			 old, gang) != old);
+
+	dma_fence_put(old);
+	return NULL;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index e07ceae36a5c..059e11c7898c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -169,11 +169,29 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
 		kfree(job);
 }
 
+void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
+				struct amdgpu_job *leader)
+{
+	struct dma_fence *fence = &leader->base.s_fence->scheduled;
+
+	WARN_ON(job->gang_submit);
+
+	/*
+	 * Don't add a reference when we are the gang leader to avoid a
+	 * circular dependency.
+	 */
+	if (job != leader)
+		dma_fence_get(fence);
+	job->gang_submit = fence;
+}
+
 void amdgpu_job_free(struct amdgpu_job *job)
 {
 	amdgpu_job_free_resources(job);
 	amdgpu_sync_free(&job->sync);
 	amdgpu_sync_free(&job->sched_sync);
+	if (job->gang_submit != &job->base.s_fence->scheduled)
+		dma_fence_put(job->gang_submit);
 
 	/* only put the hw fence if has embedded fence */
 	if (job->hw_fence.ops != NULL)
@@ -247,12 +265,16 @@ static struct dma_fence *amdgpu_job_dependency(struct drm_sched_job *sched_job,
 		fence = amdgpu_sync_get_fence(&job->sync);
 	}
 
+	if (!fence && job->gang_submit)
+		fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);
+
 	return fence;
 }
 
 static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
 {
 	struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched);
+	struct amdgpu_device *adev = ring->adev;
 	struct dma_fence *fence = NULL, *finished;
 	struct amdgpu_job *job;
 	int r = 0;
@@ -264,8 +286,10 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
 
 	trace_amdgpu_sched_run_job(job);
 
-	if (job->vram_lost_counter != atomic_read(&ring->adev->vram_lost_counter))
-		dma_fence_set_error(finished, -ECANCELED);/* skip IB as well if VRAM lost */
+	/* Skip job if VRAM is lost and never resubmit gangs */
+	if (job->vram_lost_counter != atomic_read(&adev->vram_lost_counter) ||
+	    (job->job_run_counter && job->gang_submit))
+		dma_fence_set_error(finished, -ECANCELED);
 
 	if (finished->error < 0) {
 		DRM_INFO("Skip scheduling IBs!\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index 0bab8fe0d419..615328130615 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -51,6 +51,7 @@ struct amdgpu_job {
 	struct amdgpu_sync	sched_sync;
 	struct dma_fence	hw_fence;
 	struct dma_fence	*external_hw_fence;
+	struct dma_fence	*gang_submit;
 	uint32_t		preamble_status;
 	uint32_t                preemption_status;
 	bool                    vm_needs_flush;
@@ -80,6 +81,8 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
 void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
 			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
 void amdgpu_job_free_resources(struct amdgpu_job *job);
+void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
+				struct amdgpu_job *leader);
 void amdgpu_job_free(struct amdgpu_job *job);
 int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
 		      void *owner, struct dma_fence **f);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 10/10] drm/amdgpu: add gang submit frontend
  2022-03-03  8:22 Gang submit Christian König
                   ` (8 preceding siblings ...)
  2022-03-03  8:23 ` [PATCH 09/10] drm/amdgpu: add gang submit backend Christian König
@ 2022-03-03  8:23 ` Christian König
  2022-03-07 17:02   ` Andrey Grodzovsky
  2022-06-01 12:09   ` Mohan Marimuthu, Yogesh
  2022-09-06  1:43 ` Gang submit Liu, Monk
  10 siblings, 2 replies; 27+ messages in thread
From: Christian König @ 2022-03-03  8:23 UTC (permalink / raw)
  To: amd-gfx, Marek.Olsak; +Cc: Christian König

Allows submitting jobs as a gang that needs to run on multiple engines at the
same time.

All members of the gang get the same implicit, explicit and VM dependencies, so
no gang member will start running until everything else is ready.

The last job is considered the gang leader (usually a submission to the GFX
ring) and is used for signaling output dependencies.

Each job is remembered individually as a user of a buffer object, so there is
no joining of work at the end.
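
At the UAPI level no new chunk type is needed: as implemented here, a
gang is simply one CS ioctl carrying IB chunks for more than one entity.
A hedged userspace sketch (the *_va and *_bytes variables are
illustrative placeholders):

	/* two IBs on different engines submitted as one gang; the last
	 * job (GFX here) becomes the gang leader
	 */
	struct drm_amdgpu_cs_chunk_ib ibs[2] = {
		{
			.ip_type  = AMDGPU_HW_IP_COMPUTE,
			.va_start = compute_ib_va,
			.ib_bytes = compute_ib_bytes,
		},
		{
			.ip_type  = AMDGPU_HW_IP_GFX,
			.va_start = gfx_ib_va,
			.ib_bytes = gfx_ib_bytes,
		},
	};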

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c    | 244 ++++++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h    |   9 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h |  12 +-
 3 files changed, 173 insertions(+), 92 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index c6541f7b8f54..7429e64919fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -69,6 +69,7 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
 			   unsigned int *num_ibs)
 {
 	struct drm_sched_entity *entity;
+	unsigned int i;
 	int r;
 
 	r = amdgpu_ctx_get_entity(p->ctx, chunk_ib->ip_type,
@@ -83,11 +84,19 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
 		return -EINVAL;
 
 	/* Currently we don't support submitting to multiple entities */
-	if (p->entity && p->entity != entity)
+	for (i = 0; i < p->gang_size; ++i) {
+		if (p->entities[i] == entity)
+			goto found;
+	}
+
+	if (i == AMDGPU_CS_GANG_SIZE)
 		return -EINVAL;
 
-	p->entity = entity;
-	++(*num_ibs);
+	p->entities[i] = entity;
+	p->gang_size = i + 1;
+
+found:
+	++(num_ibs[i]);
 	return 0;
 }
 
@@ -161,11 +170,12 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 			   union drm_amdgpu_cs *cs)
 {
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	unsigned int num_ibs[AMDGPU_CS_GANG_SIZE] = { };
 	struct amdgpu_vm *vm = &fpriv->vm;
 	uint64_t *chunk_array_user;
 	uint64_t *chunk_array;
-	unsigned size, num_ibs = 0;
 	uint32_t uf_offset = 0;
+	unsigned int size;
 	int ret;
 	int i;
 
@@ -228,7 +238,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 			if (size < sizeof(struct drm_amdgpu_cs_chunk_ib))
 				goto free_partial_kdata;
 
-			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, &num_ibs);
+			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, num_ibs);
 			if (ret)
 				goto free_partial_kdata;
 			break;
@@ -265,21 +275,27 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 		}
 	}
 
-	ret = amdgpu_job_alloc(p->adev, num_ibs, &p->job, vm);
-	if (ret)
-		goto free_all_kdata;
+	if (!p->gang_size)
+		return -EINVAL;
 
-	ret = drm_sched_job_init(&p->job->base, p->entity, &fpriv->vm);
-	if (ret)
-		goto free_all_kdata;
+	for (i = 0; i < p->gang_size; ++i) {
+		ret = amdgpu_job_alloc(p->adev, num_ibs[i], &p->jobs[i], vm);
+		if (ret)
+			goto free_all_kdata;
+
+		ret = drm_sched_job_init(&p->jobs[i]->base, p->entities[i],
+					 &fpriv->vm);
+		if (ret)
+			goto free_all_kdata;
+	}
 
-	if (p->ctx->vram_lost_counter != p->job->vram_lost_counter) {
+	if (p->ctx->vram_lost_counter != p->jobs[0]->vram_lost_counter) {
 		ret = -ECANCELED;
 		goto free_all_kdata;
 	}
 
 	if (p->uf_entry.tv.bo)
-		p->job->uf_addr = uf_offset;
+		p->jobs[p->gang_size - 1]->uf_addr = uf_offset;
 	kvfree(chunk_array);
 
 	/* Use this opportunity to fill in task info for the vm */
@@ -301,22 +317,18 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 	return ret;
 }
 
-static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
-			   struct amdgpu_cs_chunk *chunk,
-			   unsigned int *num_ibs,
-			   unsigned int *ce_preempt,
-			   unsigned int *de_preempt)
+static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+			   struct amdgpu_ib *ib, struct amdgpu_cs_chunk *chunk,
+			   unsigned int *ce_preempt, unsigned int *de_preempt)
 {
-	struct amdgpu_ring *ring = to_amdgpu_ring(p->job->base.sched);
+	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
 	struct drm_amdgpu_cs_chunk_ib *chunk_ib = chunk->kdata;
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct amdgpu_ib *ib = &p->job->ibs[*num_ibs];
 	struct amdgpu_vm *vm = &fpriv->vm;
 	int r;
 
-
 	/* MM engine doesn't support user fences */
-	if (p->job->uf_addr && ring->funcs->no_user_fence)
+	if (job->uf_addr && ring->funcs->no_user_fence)
 		return -EINVAL;
 
 	if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
@@ -333,7 +345,7 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
 	}
 
 	if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
-		p->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
+		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
 
 	r =  amdgpu_ib_get(p->adev, vm, ring->funcs->parse_cs ?
 			   chunk_ib->ib_bytes : 0,
@@ -346,8 +358,6 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
 	ib->gpu_addr = chunk_ib->va_start;
 	ib->length_dw = chunk_ib->ib_bytes / 4;
 	ib->flags = chunk_ib->flags;
-
-	(*num_ibs)++;
 	return 0;
 }
 
@@ -396,7 +406,7 @@ static int amdgpu_cs_p2_dependencies(struct amdgpu_cs_parser *p,
 			dma_fence_put(old);
 		}
 
-		r = amdgpu_sync_fence(&p->job->sync, fence);
+		r = amdgpu_sync_fence(&p->jobs[0]->sync, fence);
 		dma_fence_put(fence);
 		if (r)
 			return r;
@@ -418,7 +428,7 @@ static int amdgpu_syncobj_lookup_and_add(struct amdgpu_cs_parser *p,
 		return r;
 	}
 
-	r = amdgpu_sync_fence(&p->job->sync, fence);
+	r = amdgpu_sync_fence(&p->jobs[0]->sync, fence);
 	dma_fence_put(fence);
 
 	return r;
@@ -541,20 +551,30 @@ static int amdgpu_cs_p2_syncobj_timeline_signal(struct amdgpu_cs_parser *p,
 
 static int amdgpu_cs_pass2(struct amdgpu_cs_parser *p)
 {
-	unsigned int num_ibs = 0, ce_preempt = 0, de_preempt = 0;
+	unsigned int ce_preempt = 0, de_preempt = 0;
+	unsigned int job_idx = 0, ib_idx = 0;
 	int i, r;
 
 	for (i = 0; i < p->nchunks; ++i) {
 		struct amdgpu_cs_chunk *chunk;
+		struct amdgpu_job *job;
 
 		chunk = &p->chunks[i];
 
 		switch (chunk->chunk_id) {
 		case AMDGPU_CHUNK_ID_IB:
-			r = amdgpu_cs_p2_ib(p, chunk, &num_ibs,
+			job = p->jobs[job_idx];
+			r = amdgpu_cs_p2_ib(p, job, &job->ibs[ib_idx], chunk,
 					    &ce_preempt, &de_preempt);
 			if (r)
 				return r;
+
+			if (++ib_idx == job->num_ibs) {
+				++job_idx;
+				ib_idx = 0;
+				ce_preempt = 0;
+				de_preempt = 0;
+			}
 			break;
 		case AMDGPU_CHUNK_ID_DEPENDENCIES:
 		case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
@@ -825,6 +845,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	struct amdgpu_vm *vm = &fpriv->vm;
 	struct amdgpu_bo_list_entry *e;
 	struct list_head duplicates;
+	unsigned int i;
 	int r;
 
 	INIT_LIST_HEAD(&p->validated);
@@ -905,16 +926,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 		e->bo_va = amdgpu_vm_bo_find(vm, bo);
 	}
 
-	/* Move fence waiting after getting reservation lock of
-	 * PD root. Then there is no need on a ctx mutex lock.
-	 */
-	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity);
-	if (unlikely(r != 0)) {
-		if (r != -ERESTARTSYS)
-			DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
-		goto error_validate;
-	}
-
 	amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
 					  &p->bytes_moved_vis_threshold);
 	p->bytes_moved = 0;
@@ -938,14 +949,16 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
 				     p->bytes_moved_vis);
 
-	amdgpu_job_set_resources(p->job, p->bo_list->gds_obj,
-				 p->bo_list->gws_obj, p->bo_list->oa_obj);
+	for (i = 0; i < p->gang_size; ++i)
+		amdgpu_job_set_resources(p->jobs[i], p->bo_list->gds_obj,
+					 p->bo_list->gws_obj,
+					 p->bo_list->oa_obj);
 
 	if (!r && p->uf_entry.tv.bo) {
 		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
 
 		r = amdgpu_ttm_alloc_gart(&uf->tbo);
-		p->job->uf_addr += amdgpu_bo_gpu_offset(uf);
+		p->jobs[p->gang_size - 1]->uf_addr += amdgpu_bo_gpu_offset(uf);
 	}
 
 error_validate:
@@ -955,20 +968,24 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	return r;
 }
 
-static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser)
+static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *p)
 {
-	int i;
+	int i, j;
 
 	if (!trace_amdgpu_cs_enabled())
 		return;
 
-	for (i = 0; i < parser->job->num_ibs; i++)
-		trace_amdgpu_cs(parser, i);
+	for (i = 0; i < p->gang_size; ++i) {
+		struct amdgpu_job *job = p->jobs[i];
+
+		for (j = 0; j < job->num_ibs; ++j)
+			trace_amdgpu_cs(p, job, &job->ibs[j]);
+	}
 }
 
-static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
+static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p,
+			       struct amdgpu_job *job)
 {
-	struct amdgpu_job *job = p->job;
 	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
 	unsigned int i;
 	int r;
@@ -1007,14 +1024,13 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
 			memcpy(ib->ptr, kptr, job->ibs[i].length_dw * 4);
 			amdgpu_bo_kunmap(aobj);
 
-			r = amdgpu_ring_parse_cs(ring, p, p->job,
-						 &p->job->ibs[i]);
+			r = amdgpu_ring_parse_cs(ring, p, job, &job->ibs[i]);
 			if (r)
 				return r;
 		} else {
 			ib->ptr = (uint32_t *)kptr;
-			r = amdgpu_ring_patch_cs_in_place(ring, p, p->job,
-							  &p->job->ibs[i]);
+			r = amdgpu_ring_patch_cs_in_place(ring, p, job,
+							  &job->ibs[i]);
 			amdgpu_bo_kunmap(aobj);
 			if (r)
 				return r;
@@ -1024,14 +1040,29 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
 	return 0;
 }
 
+static int amdgpu_cs_patch_jobs(struct amdgpu_cs_parser *p)
+{
+	unsigned int i;
+	int r;
+
+	for (i = 0; i < p->gang_size; ++i) {
+		r = amdgpu_cs_patch_ibs(p, p->jobs[i]);
+		if (r)
+			return r;
+	}
+	return 0;
+}
+
 static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 {
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
 	struct amdgpu_device *adev = p->adev;
+	struct amdgpu_job *job = p->jobs[0];
 	struct amdgpu_vm *vm = &fpriv->vm;
 	struct amdgpu_bo_list_entry *e;
 	struct amdgpu_bo_va *bo_va;
 	struct amdgpu_bo *bo;
+	unsigned int i;
 	int r;
 
 	r = amdgpu_vm_clear_freed(adev, vm, NULL);
@@ -1042,7 +1073,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 	if (r)
 		return r;
 
-	r = amdgpu_sync_vm_fence(&p->job->sync, fpriv->prt_va->last_pt_update);
+	r = amdgpu_sync_vm_fence(&job->sync, fpriv->prt_va->last_pt_update);
 	if (r)
 		return r;
 
@@ -1052,7 +1083,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 		if (r)
 			return r;
 
-		r = amdgpu_sync_vm_fence(&p->job->sync, bo_va->last_pt_update);
+		r = amdgpu_sync_vm_fence(&job->sync, bo_va->last_pt_update);
 		if (r)
 			return r;
 	}
@@ -1071,7 +1102,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 		if (r)
 			return r;
 
-		r = amdgpu_sync_vm_fence(&p->job->sync, bo_va->last_pt_update);
+		r = amdgpu_sync_vm_fence(&job->sync, bo_va->last_pt_update);
 		if (r)
 			return r;
 	}
@@ -1084,11 +1115,18 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 	if (r)
 		return r;
 
-	r = amdgpu_sync_vm_fence(&p->job->sync, vm->last_update);
+	r = amdgpu_sync_vm_fence(&job->sync, vm->last_update);
 	if (r)
 		return r;
 
-	p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
+	for (i = 0; i < p->gang_size; ++i) {
+		job = p->jobs[i];
+
+		if (!job->vm)
+			continue;
+
+		job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
+	}
 
 	if (amdgpu_vm_debug) {
 		/* Invalidate all BOs to test for userspace bugs */
@@ -1109,7 +1147,9 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 {
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_job *job = p->jobs[0];
 	struct amdgpu_bo_list_entry *e;
+	unsigned int i;
 	int r;
 
 	list_for_each_entry(e, &p->validated, tv.head) {
@@ -1119,12 +1159,23 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 
 		sync_mode = amdgpu_bo_explicit_sync(bo) ?
 			AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
-		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode,
+		r = amdgpu_sync_resv(p->adev, &job->sync, resv, sync_mode,
 				     &fpriv->vm);
 		if (r)
 			return r;
 	}
-	return 0;
+
+	for (i = 1; i < p->gang_size; ++i) {
+		r = amdgpu_sync_clone(&job->sync, &p->jobs[i]->sync);
+		if (r)
+			return r;
+	}
+
+	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entities[p->gang_size - 1]);
+	if (r && r != -ERESTARTSYS)
+		DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
+
+	return r;
 }
 
 static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p)
@@ -1147,17 +1198,27 @@ static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p)
 static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 			    union drm_amdgpu_cs *cs)
 {
+	struct amdgpu_job *last = p->jobs[p->gang_size - 1];
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct drm_sched_entity *entity = p->entity;
 	struct amdgpu_bo_list_entry *e;
-	struct amdgpu_job *job;
+	unsigned int i;
 	uint64_t seq;
 	int r;
 
-	job = p->job;
-	p->job = NULL;
+	for (i = 0; i < p->gang_size; ++i)
+		drm_sched_job_arm(&p->jobs[i]->base);
 
-	drm_sched_job_arm(&job->base);
+	for (i = 0; i < (p->gang_size - 1); ++i) {
+		struct dma_fence *fence;
+
+		fence = &p->jobs[i]->base.s_fence->scheduled;
+		r = amdgpu_sync_fence(&last->sync, fence);
+		if (r)
+			goto error_cleanup;
+	}
+
+	for (i = 0; i < p->gang_size; ++i)
+		amdgpu_job_set_gang_leader(p->jobs[i], last);
 
 	/* No memory allocation is allowed while holding the notifier lock.
 	 * The lock is held until amdgpu_cs_submit is finished and fence is
@@ -1175,44 +1236,58 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 	}
 	if (r) {
 		r = -EAGAIN;
-		goto error_abort;
+		goto error_unlock;
 	}
 
-	p->fence = dma_fence_get(&job->base.s_fence->finished);
+	p->fence = dma_fence_get(&last->base.s_fence->finished);
 
-	amdgpu_ctx_add_fence(p->ctx, entity, p->fence, &seq);
+	amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_size - 1], p->fence,
+			     &seq);
 	amdgpu_cs_post_dependencies(p);
 
-	if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
+	if ((last->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
 	    !p->ctx->preamble_presented) {
-		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
+		last->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
 		p->ctx->preamble_presented = true;
 	}
 
 	cs->out.handle = seq;
-	job->uf_sequence = seq;
-
-	amdgpu_job_free_resources(job);
+	last->uf_sequence = seq;
 
-	trace_amdgpu_cs_ioctl(job);
 	amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket);
-	drm_sched_entity_push_job(&job->base);
+	for (i = 0; i < p->gang_size; ++i) {
+		amdgpu_job_free_resources(p->jobs[i]);
+		trace_amdgpu_cs_ioctl(p->jobs[i]);
+		drm_sched_entity_push_job(&p->jobs[i]->base);
+		p->jobs[i] = NULL;
+	}
 
 	amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
 
-	/* Make sure all BOs are remembered as writers */
-	amdgpu_bo_list_for_each_entry(e, p->bo_list)
+	list_for_each_entry(e, &p->validated, tv.head) {
+
+		/* Everybody except for the gang leader uses BOOKKEEP */
+		for (i = 0; i < (p->gang_size - 1); ++i) {
+			dma_resv_add_fence(e->tv.bo->base.resv,
+					   &p->jobs[i]->base.s_fence->finished,
+					   DMA_RESV_USAGE_BOOKKEEP);
+		}
+
+		/* The gang leader is remembered as writer */
 		e->tv.num_shared = 0;
+	}
 
 	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
 	mutex_unlock(&p->adev->notifier_lock);
 
 	return 0;
 
-error_abort:
-	drm_sched_job_cleanup(&job->base);
+error_unlock:
 	mutex_unlock(&p->adev->notifier_lock);
-	amdgpu_job_free(job);
+
+error_cleanup:
+	for (i = 0; i < p->gang_size; ++i)
+		drm_sched_job_cleanup(&p->jobs[i]->base);
 	return r;
 }
 
@@ -1229,17 +1304,18 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser)
 
 	dma_fence_put(parser->fence);
 
-	if (parser->ctx) {
+	if (parser->ctx)
 		amdgpu_ctx_put(parser->ctx);
-	}
 	if (parser->bo_list)
 		amdgpu_bo_list_put(parser->bo_list);
 
 	for (i = 0; i < parser->nchunks; i++)
 		kvfree(parser->chunks[i].kdata);
 	kvfree(parser->chunks);
-	if (parser->job)
-		amdgpu_job_free(parser->job);
+	for (i = 0; i < parser->gang_size; ++i) {
+		if (parser->jobs[i])
+			amdgpu_job_free(parser->jobs[i]);
+	}
 	if (parser->uf_entry.tv.bo) {
 		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo);
 
@@ -1283,7 +1359,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 		goto error_fini;
 	}
 
-	r = amdgpu_cs_patch_ibs(&parser);
+	r = amdgpu_cs_patch_jobs(&parser);
 	if (r)
 		goto error_backoff;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
index 652b5593499f..ba5860c08270 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
@@ -27,6 +27,8 @@
 #include "amdgpu_bo_list.h"
 #include "amdgpu_ring.h"
 
+#define AMDGPU_CS_GANG_SIZE	4
+
 struct amdgpu_bo_va_mapping;
 
 struct amdgpu_cs_chunk {
@@ -50,9 +52,10 @@ struct amdgpu_cs_parser {
 	unsigned		nchunks;
 	struct amdgpu_cs_chunk	*chunks;
 
-	/* scheduler job object */
-	struct drm_sched_entity	*entity;
-	struct amdgpu_job	*job;
+	/* scheduler job objects */
+	unsigned int		gang_size;
+	struct drm_sched_entity	*entities[AMDGPU_CS_GANG_SIZE];
+	struct amdgpu_job	*jobs[AMDGPU_CS_GANG_SIZE];
 
 	/* buffer objects */
 	struct ww_acquire_ctx		ticket;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
index d855cb53c7e0..a5167cb91ba5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
@@ -140,8 +140,10 @@ TRACE_EVENT(amdgpu_bo_create,
 );
 
 TRACE_EVENT(amdgpu_cs,
-	    TP_PROTO(struct amdgpu_cs_parser *p, int i),
-	    TP_ARGS(p, i),
+	    TP_PROTO(struct amdgpu_cs_parser *p,
+		     struct amdgpu_job *job,
+		     struct amdgpu_ib *ib),
+	    TP_ARGS(p, job, ib),
 	    TP_STRUCT__entry(
 			     __field(struct amdgpu_bo_list *, bo_list)
 			     __field(u32, ring)
@@ -151,10 +153,10 @@ TRACE_EVENT(amdgpu_cs,
 
 	    TP_fast_assign(
 			   __entry->bo_list = p->bo_list;
-			   __entry->ring = to_amdgpu_ring(p->entity->rq->sched)->idx;
-			   __entry->dw = p->job->ibs[i].length_dw;
+			   __entry->ring = to_amdgpu_ring(job->base.sched)->idx;
+			   __entry->dw = ib->length_dw;
 			   __entry->fences = amdgpu_fence_count_emitted(
-				to_amdgpu_ring(p->entity->rq->sched));
+				to_amdgpu_ring(job->base.sched));
 			   ),
 	    TP_printk("bo_list=%p, ring=%u, dw=%u, fences=%u",
 		      __entry->bo_list, __entry->ring, __entry->dw,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread
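
The heart of the submit path in this patch is the dependency shape: every
gang member's scheduled fence is collected into the sync object of the last
job, the gang leader, so the leader cannot reach the hardware before all
members sit on their queues, and only the leader's finished fence is
published to userspace and attached as writer to the BOs (the other members
only add BOOKKEEP fences). The toy model below reduces that gating rule to
plain, compilable C with made-up names; it is only a sketch of the idea,
not amdgpu code.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for a scheduler job; "scheduled" models the drm_sched
 * scheduled fence. */
struct toy_job {
	bool scheduled;
};

/* The leader is gang[gang_size - 1]; it may only start once every
 * other member has been pushed to its hardware queue. */
static bool leader_may_run(const struct toy_job *gang,
			   unsigned int gang_size)
{
	unsigned int i;

	for (i = 0; i + 1 < gang_size; ++i)
		if (!gang[i].scheduled)
			return false;
	return true;
}

int main(void)
{
	struct toy_job gang[3] = { {false}, {false}, {false} };

	gang[0].scheduled = true;
	printf("leader may run: %d\n", leader_may_run(gang, 3)); /* 0 */
	gang[1].scheduled = true;
	printf("leader may run: %d\n", leader_may_run(gang, 3)); /* 1 */
	return 0;
}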

* Re: [PATCH 01/10] drm/amdgpu: install ctx entities with cmpxchg
  2022-03-03  8:22 ` [PATCH 01/10] drm/amdgpu: install ctx entities with cmpxchg Christian König
@ 2022-03-03 19:52   ` Andrey Grodzovsky
  0 siblings, 0 replies; 27+ messages in thread
From: Andrey Grodzovsky @ 2022-03-03 19:52 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak; +Cc: Christian König

Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Andrey

On 2022-03-03 03:22, Christian König wrote:
> Since we removed the context lock, we need to make sure that no two threads
> try to install an entity at the same time.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> Fixes: e68efb27647f ("drm/amdgpu: remove ctx->lock")
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 8 +++++++-
>   1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index c1f8b0e37b93..72c5f1c53d6b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -204,9 +204,15 @@ static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, u32 hw_ip,
>   	if (r)
>   		goto error_free_entity;
>   
> -	ctx->entities[hw_ip][ring] = entity;
> +	/* It's not an error if we fail to install the new entity */
> +	if (cmpxchg(&ctx->entities[hw_ip][ring], NULL, entity))
> +		goto cleanup_entity;
> +
>   	return 0;
>   
> +cleanup_entity:
> +	drm_sched_entity_fini(&entity->entity);
> +
>   error_free_entity:
>   	kfree(entity);
>   

^ permalink raw reply	[flat|nested] 27+ messages in thread
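
The pattern being reviewed here is a classic lockless one-time install:
every racing thread builds its own object, a single cmpxchg decides the
winner, and losers tear their copy down again, which is why the failure
path is deliberately not treated as an error. A self-contained userspace
sketch of the same idea using C11 atomics (illustrative names only, not
the amdgpu code):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct entity {
	int id;
};

/* Stand-in for the ctx->entities[hw_ip][ring] slot. */
static _Atomic(struct entity *) slot;

static struct entity *install_entity(int id)
{
	struct entity *e = malloc(sizeof(*e));
	struct entity *expected = NULL;

	if (!e)
		return NULL;
	e->id = id;
	/* Equivalent of cmpxchg(&slot, NULL, e): succeeds only if the
	 * slot is still empty. */
	if (!atomic_compare_exchange_strong(&slot, &expected, e)) {
		free(e);	 /* somebody else won the race */
		return expected; /* use the entity that was installed */
	}
	return e;
}

int main(void)
{
	struct entity *a = install_entity(1);
	struct entity *b = install_entity(2);

	printf("%d %d\n", a->id, b->id); /* both print 1 */
	return 0;
}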

* Re: [PATCH 02/10] drm/amdgpu: header cleanup
  2022-03-03  8:23 ` [PATCH 02/10] drm/amdgpu: header cleanup Christian König
@ 2022-03-03 19:56   ` Andrey Grodzovsky
  0 siblings, 0 replies; 27+ messages in thread
From: Andrey Grodzovsky @ 2022-03-03 19:56 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak; +Cc: Christian König

Acked-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Andrey

On 2022-03-03 03:23, Christian König wrote:
> No function change, just move a bunch of definitions from amdgpu.h into
> separate header files.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h           | 95 -------------------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h        | 93 ++++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h       |  3 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h      | 35 ++++++-
>   .../gpu/drm/amd/amdgpu/amdgpu_trace_points.c  |  1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c       |  1 +
>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c         |  1 +
>   drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c         |  1 +
>   10 files changed, 132 insertions(+), 100 deletions(-)
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index b89406b01694..7f447ed7a67f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -60,7 +60,6 @@
>   #include <drm/amdgpu_drm.h>
>   #include <drm/drm_gem.h>
>   #include <drm/drm_ioctl.h>
> -#include <drm/gpu_scheduler.h>
>   
>   #include <kgd_kfd_interface.h>
>   #include "dm_pp_interface.h"
> @@ -276,9 +275,6 @@ extern int amdgpu_num_kcq;
>   #define AMDGPU_SMARTSHIFT_MIN_BIAS (-100)
>   
>   struct amdgpu_device;
> -struct amdgpu_ib;
> -struct amdgpu_cs_parser;
> -struct amdgpu_job;
>   struct amdgpu_irq_src;
>   struct amdgpu_fpriv;
>   struct amdgpu_bo_va_mapping;
> @@ -465,20 +461,6 @@ struct amdgpu_flip_work {
>   };
>   
>   
> -/*
> - * CP & rings.
> - */
> -
> -struct amdgpu_ib {
> -	struct amdgpu_sa_bo		*sa_bo;
> -	uint32_t			length_dw;
> -	uint64_t			gpu_addr;
> -	uint32_t			*ptr;
> -	uint32_t			flags;
> -};
> -
> -extern const struct drm_sched_backend_ops amdgpu_sched_ops;
> -
>   /*
>    * file private structure
>    */
> @@ -494,79 +476,6 @@ struct amdgpu_fpriv {
>   
>   int amdgpu_file_to_fpriv(struct file *filp, struct amdgpu_fpriv **fpriv);
>   
> -int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> -		  unsigned size,
> -		  enum amdgpu_ib_pool_type pool,
> -		  struct amdgpu_ib *ib);
> -void amdgpu_ib_free(struct amdgpu_device *adev, struct amdgpu_ib *ib,
> -		    struct dma_fence *f);
> -int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
> -		       struct amdgpu_ib *ibs, struct amdgpu_job *job,
> -		       struct dma_fence **f);
> -int amdgpu_ib_pool_init(struct amdgpu_device *adev);
> -void amdgpu_ib_pool_fini(struct amdgpu_device *adev);
> -int amdgpu_ib_ring_tests(struct amdgpu_device *adev);
> -
> -/*
> - * CS.
> - */
> -struct amdgpu_cs_chunk {
> -	uint32_t		chunk_id;
> -	uint32_t		length_dw;
> -	void			*kdata;
> -};
> -
> -struct amdgpu_cs_post_dep {
> -	struct drm_syncobj *syncobj;
> -	struct dma_fence_chain *chain;
> -	u64 point;
> -};
> -
> -struct amdgpu_cs_parser {
> -	struct amdgpu_device	*adev;
> -	struct drm_file		*filp;
> -	struct amdgpu_ctx	*ctx;
> -
> -	/* chunks */
> -	unsigned		nchunks;
> -	struct amdgpu_cs_chunk	*chunks;
> -
> -	/* scheduler job object */
> -	struct amdgpu_job	*job;
> -	struct drm_sched_entity	*entity;
> -
> -	/* buffer objects */
> -	struct ww_acquire_ctx		ticket;
> -	struct amdgpu_bo_list		*bo_list;
> -	struct amdgpu_mn		*mn;
> -	struct amdgpu_bo_list_entry	vm_pd;
> -	struct list_head		validated;
> -	struct dma_fence		*fence;
> -	uint64_t			bytes_moved_threshold;
> -	uint64_t			bytes_moved_vis_threshold;
> -	uint64_t			bytes_moved;
> -	uint64_t			bytes_moved_vis;
> -
> -	/* user fence */
> -	struct amdgpu_bo_list_entry	uf_entry;
> -
> -	unsigned			num_post_deps;
> -	struct amdgpu_cs_post_dep	*post_deps;
> -};
> -
> -static inline u32 amdgpu_get_ib_value(struct amdgpu_cs_parser *p,
> -				      uint32_t ib_idx, int idx)
> -{
> -	return p->job->ibs[ib_idx].ptr[idx];
> -}
> -
> -static inline void amdgpu_set_ib_value(struct amdgpu_cs_parser *p,
> -				       uint32_t ib_idx, int idx,
> -				       uint32_t value)
> -{
> -	p->job->ibs[ib_idx].ptr[idx] = value;
> -}
> -
>   /*
>    * Writeback
>    */
> @@ -1425,10 +1334,6 @@ static inline int amdgpu_acpi_smart_shift_update(struct drm_device *dev,
>   						 enum amdgpu_ss ss_state) { return 0; }
>   #endif
>   
> -int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
> -			   uint64_t addr, struct amdgpu_bo **bo,
> -			   struct amdgpu_bo_va_mapping **mapping);
> -
>   #if defined(CONFIG_DRM_AMD_DC)
>   int amdgpu_dm_display_resume(struct amdgpu_device *adev );
>   #else
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index aff77a466f59..6b6a9d925994 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -32,6 +32,7 @@
>   
>   #include <drm/amdgpu_drm.h>
>   #include <drm/drm_syncobj.h>
> +#include "amdgpu_cs.h"
>   #include "amdgpu.h"
>   #include "amdgpu_trace.h"
>   #include "amdgpu_gmc.h"
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> new file mode 100644
> index 000000000000..92d07816743e
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> @@ -0,0 +1,93 @@
> +/*
> + * Copyright 2022 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +#ifndef __AMDGPU_CS_H__
> +#define __AMDGPU_CS_H__
> +
> +#include "amdgpu_job.h"
> +#include "amdgpu_bo_list.h"
> +#include "amdgpu_ring.h"
> +
> +struct amdgpu_bo_va_mapping;
> +
> +struct amdgpu_cs_chunk {
> +	uint32_t		chunk_id;
> +	uint32_t		length_dw;
> +	void			*kdata;
> +};
> +
> +struct amdgpu_cs_post_dep {
> +	struct drm_syncobj *syncobj;
> +	struct dma_fence_chain *chain;
> +	u64 point;
> +};
> +
> +struct amdgpu_cs_parser {
> +	struct amdgpu_device	*adev;
> +	struct drm_file		*filp;
> +	struct amdgpu_ctx	*ctx;
> +
> +	/* chunks */
> +	unsigned		nchunks;
> +	struct amdgpu_cs_chunk	*chunks;
> +
> +	/* scheduler job object */
> +	struct amdgpu_job	*job;
> +	struct drm_sched_entity	*entity;
> +
> +	/* buffer objects */
> +	struct ww_acquire_ctx		ticket;
> +	struct amdgpu_bo_list		*bo_list;
> +	struct amdgpu_mn		*mn;
> +	struct amdgpu_bo_list_entry	vm_pd;
> +	struct list_head		validated;
> +	struct dma_fence		*fence;
> +	uint64_t			bytes_moved_threshold;
> +	uint64_t			bytes_moved_vis_threshold;
> +	uint64_t			bytes_moved;
> +	uint64_t			bytes_moved_vis;
> +
> +	/* user fence */
> +	struct amdgpu_bo_list_entry	uf_entry;
> +
> +	unsigned			num_post_deps;
> +	struct amdgpu_cs_post_dep	*post_deps;
> +};
> +
> +static inline u32 amdgpu_get_ib_value(struct amdgpu_cs_parser *p,
> +				      uint32_t ib_idx, int idx)
> +{
> +	return p->job->ibs[ib_idx].ptr[idx];
> +}
> +
> +static inline void amdgpu_set_ib_value(struct amdgpu_cs_parser *p,
> +				       uint32_t ib_idx, int idx,
> +				       uint32_t value)
> +{
> +	p->job->ibs[ib_idx].ptr[idx] = value;
> +}
> +
> +int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
> +			   uint64_t addr, struct amdgpu_bo **bo,
> +			   struct amdgpu_bo_va_mapping **mapping);
> +
> +#endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index 9e65730193b8..6d704772ff42 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -23,6 +23,9 @@
>   #ifndef __AMDGPU_JOB_H__
>   #define __AMDGPU_JOB_H__
>   
> +#include <drm/gpu_scheduler.h>
> +#include "amdgpu_sync.h"
> +
>   /* bit set means command submit involves a preamble IB */
>   #define AMDGPU_PREAMBLE_IB_PRESENT          (1 << 0)
>   /* bit set means preamble IB is first presented in belonging context */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index 48365da213dc..05e789fc7a9e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -28,6 +28,13 @@
>   #include <drm/gpu_scheduler.h>
>   #include <drm/drm_print.h>
>   
> +struct amdgpu_device;
> +struct amdgpu_ring;
> +struct amdgpu_ib;
> +struct amdgpu_cs_parser;
> +struct amdgpu_job;
> +struct amdgpu_vm;
> +
>   /* max number of rings */
>   #define AMDGPU_MAX_RINGS		28
>   #define AMDGPU_MAX_HWIP_RINGS		8
> @@ -82,11 +89,13 @@ enum amdgpu_ib_pool_type {
>   	AMDGPU_IB_POOL_MAX
>   };
>   
> -struct amdgpu_device;
> -struct amdgpu_ring;
> -struct amdgpu_ib;
> -struct amdgpu_cs_parser;
> -struct amdgpu_job;
> +struct amdgpu_ib {
> +	struct amdgpu_sa_bo		*sa_bo;
> +	uint32_t			length_dw;
> +	uint64_t			gpu_addr;
> +	uint32_t			*ptr;
> +	uint32_t			flags;
> +};
>   
>   struct amdgpu_sched {
>   	u32				num_scheds;
> @@ -111,6 +120,8 @@ struct amdgpu_fence_driver {
>   	struct dma_fence		**fences;
>   };
>   
> +extern const struct drm_sched_backend_ops amdgpu_sched_ops;
> +
>   void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
>   void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);
>   
> @@ -352,4 +363,18 @@ int amdgpu_ring_test_helper(struct amdgpu_ring *ring);
>   
>   void amdgpu_debugfs_ring_init(struct amdgpu_device *adev,
>   			      struct amdgpu_ring *ring);
> +
> +int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> +		  unsigned size,
> +		  enum amdgpu_ib_pool_type pool,
> +		  struct amdgpu_ib *ib);
> +void amdgpu_ib_free(struct amdgpu_device *adev, struct amdgpu_ib *ib,
> +		    struct dma_fence *f);
> +int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
> +		       struct amdgpu_ib *ibs, struct amdgpu_job *job,
> +		       struct dma_fence **f);
> +int amdgpu_ib_pool_init(struct amdgpu_device *adev);
> +void amdgpu_ib_pool_fini(struct amdgpu_device *adev);
> +int amdgpu_ib_ring_tests(struct amdgpu_device *adev);
> +
>   #endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace_points.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace_points.c
> index 57c6c39ba064..b96d885f6e33 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace_points.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace_points.c
> @@ -23,6 +23,7 @@
>    */
>   
>   #include <drm/amdgpu_drm.h>
> +#include "amdgpu_cs.h"
>   #include "amdgpu.h"
>   
>   #define CREATE_TRACE_POINTS
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> index 9e102080dad9..4927c10bdc80 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> @@ -37,6 +37,7 @@
>   #include "amdgpu.h"
>   #include "amdgpu_pm.h"
>   #include "amdgpu_uvd.h"
> +#include "amdgpu_cs.h"
>   #include "cikd.h"
>   #include "uvd/uvd_4_2_d.h"
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> index 344f711ad144..6179230b6c6e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> @@ -34,6 +34,7 @@
>   #include "amdgpu.h"
>   #include "amdgpu_pm.h"
>   #include "amdgpu_vce.h"
> +#include "amdgpu_cs.h"
>   #include "cikd.h"
>   
>   /* 1 second timeout */
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> index b483f03b4591..7afa660e341c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> @@ -25,6 +25,7 @@
>   
>   #include "amdgpu.h"
>   #include "amdgpu_uvd.h"
> +#include "amdgpu_cs.h"
>   #include "soc15.h"
>   #include "soc15d.h"
>   #include "soc15_common.h"
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> index da11ceba0698..2bb75fdb9571 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> @@ -25,6 +25,7 @@
>   #include "amdgpu.h"
>   #include "amdgpu_vcn.h"
>   #include "amdgpu_pm.h"
> +#include "amdgpu_cs.h"
>   #include "soc15.h"
>   #include "soc15d.h"
>   #include "vcn_v2_0.h"

^ permalink raw reply	[flat|nested] 27+ messages in thread
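
The mechanics behind the move are worth spelling out: the new headers get
away with forward declarations (struct amdgpu_device; and friends) because
pointer members and prototypes do not need the full struct definition,
which is what lets amdgpu.h shed includes such as gpu_scheduler.h. A
generic sketch of that decoupling, compressed into one compilable file
(nothing here is an actual amdgpu type):

#include <stdio.h>

/* "Header" part: a forward declaration is enough for pointer members
 * and prototypes. */
struct engine;

struct device_like {
	struct engine *eng; /* pointer only, size of engine not needed */
};

void engine_describe(const struct engine *eng);

/* "Implementation" part: the full definition lives with the code that
 * actually dereferences the struct. */
struct engine {
	unsigned int ring_count;
};

void engine_describe(const struct engine *eng)
{
	printf("%u rings\n", eng->ring_count);
}

int main(void)
{
	struct engine eng = { .ring_count = 28 };
	struct device_like dev = { .eng = &eng };

	engine_describe(dev.eng);
	return 0;
}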

* Re: [PATCH 05/10] drm/amdgpu: use job and ib structures directly in CS parsers
  2022-03-03  8:23 ` [PATCH 05/10] drm/amdgpu: use job and ib structures directly in CS parsers Christian König
@ 2022-03-03 20:16   ` Andrey Grodzovsky
  0 siblings, 0 replies; 27+ messages in thread
From: Andrey Grodzovsky @ 2022-03-03 20:16 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak; +Cc: Christian König

Acked-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Andrey

On 2022-03-03 03:23, Christian König wrote:
> Instead of providing the ib index, provide the job and ib pointers directly to
> the patch and parse functions for UVD and VCE.
>
> Also move the set/get functions for IB values to the IB declarations.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |   6 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h   |  13 ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  23 ++++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c  |  36 ++++---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h  |   4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c  | 116 ++++++++++++-----------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h  |   7 +-
>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c    |  13 +--
>   drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c    |  25 ++---
>   9 files changed, 129 insertions(+), 114 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index 20bf6134baca..dd9e708fe97f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -1024,12 +1024,14 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
>   			memcpy(ib->ptr, kptr, job->ibs[i].length_dw * 4);
>   			amdgpu_bo_kunmap(aobj);
>   
> -			r = amdgpu_ring_parse_cs(ring, p, i);
> +			r = amdgpu_ring_parse_cs(ring, p, p->job,
> +						 &p->job->ibs[i]);
>   			if (r)
>   				return r;
>   		} else {
>   			ib->ptr = (uint32_t *)kptr;
> -			r = amdgpu_ring_patch_cs_in_place(ring, p, i);
> +			r = amdgpu_ring_patch_cs_in_place(ring, p, p->job,
> +							  &p->job->ibs[i]);
>   			amdgpu_bo_kunmap(aobj);
>   			if (r)
>   				return r;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> index 30136eb50d2a..652b5593499f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> @@ -73,19 +73,6 @@ struct amdgpu_cs_parser {
>   	struct amdgpu_cs_post_dep	*post_deps;
>   };
>   
> -static inline u32 amdgpu_get_ib_value(struct amdgpu_cs_parser *p,
> -				      uint32_t ib_idx, int idx)
> -{
> -	return p->job->ibs[ib_idx].ptr[idx];
> -}
> -
> -static inline void amdgpu_set_ib_value(struct amdgpu_cs_parser *p,
> -				       uint32_t ib_idx, int idx,
> -				       uint32_t value)
> -{
> -	p->job->ibs[ib_idx].ptr[idx] = value;
> -}
> -
>   int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
>   			   uint64_t addr, struct amdgpu_bo **bo,
>   			   struct amdgpu_bo_va_mapping **mapping);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index 05e789fc7a9e..a8bed1b47899 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -163,8 +163,12 @@ struct amdgpu_ring_funcs {
>   	u64 (*get_wptr)(struct amdgpu_ring *ring);
>   	void (*set_wptr)(struct amdgpu_ring *ring);
>   	/* validating and patching of IBs */
> -	int (*parse_cs)(struct amdgpu_cs_parser *p, uint32_t ib_idx);
> -	int (*patch_cs_in_place)(struct amdgpu_cs_parser *p, uint32_t ib_idx);
> +	int (*parse_cs)(struct amdgpu_cs_parser *p,
> +			struct amdgpu_job *job,
> +			struct amdgpu_ib *ib);
> +	int (*patch_cs_in_place)(struct amdgpu_cs_parser *p,
> +				 struct amdgpu_job *job,
> +				 struct amdgpu_ib *ib);
>   	/* constants to calculate how many DW are needed for an emit */
>   	unsigned emit_frame_size;
>   	unsigned emit_ib_size;
> @@ -264,8 +268,8 @@ struct amdgpu_ring {
>   	atomic_t		*sched_score;
>   };
>   
> -#define amdgpu_ring_parse_cs(r, p, ib) ((r)->funcs->parse_cs((p), (ib)))
> -#define amdgpu_ring_patch_cs_in_place(r, p, ib) ((r)->funcs->patch_cs_in_place((p), (ib)))
> +#define amdgpu_ring_parse_cs(r, p, job, ib) ((r)->funcs->parse_cs((p), (job), (ib)))
> +#define amdgpu_ring_patch_cs_in_place(r, p, job, ib) ((r)->funcs->patch_cs_in_place((p), (job), (ib)))
>   #define amdgpu_ring_test_ring(r) (r)->funcs->test_ring((r))
>   #define amdgpu_ring_test_ib(r, t) (r)->funcs->test_ib((r), (t))
>   #define amdgpu_ring_get_rptr(r) (r)->funcs->get_rptr((r))
> @@ -364,6 +368,17 @@ int amdgpu_ring_test_helper(struct amdgpu_ring *ring);
>   void amdgpu_debugfs_ring_init(struct amdgpu_device *adev,
>   			      struct amdgpu_ring *ring);
>   
> +static inline u32 amdgpu_ib_get_value(struct amdgpu_ib *ib, int idx)
> +{
> +	return ib->ptr[idx];
> +}
> +
> +static inline void amdgpu_ib_set_value(struct amdgpu_ib *ib, int idx,
> +				       uint32_t value)
> +{
> +	ib->ptr[idx] = value;
> +}
> +
>   int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>   		  unsigned size,
>   		  enum amdgpu_ib_pool_type pool,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> index 4927c10bdc80..2ebd133a5222 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> @@ -99,7 +99,7 @@ struct amdgpu_uvd_cs_ctx {
>   	unsigned reg, count;
>   	unsigned data0, data1;
>   	unsigned idx;
> -	unsigned ib_idx;
> +	struct amdgpu_ib *ib;
>   
>   	/* does the IB has a msg command */
>   	bool has_msg_cmd;
> @@ -558,8 +558,8 @@ static u64 amdgpu_uvd_get_addr_from_ctx(struct amdgpu_uvd_cs_ctx *ctx)
>   	uint32_t lo, hi;
>   	uint64_t addr;
>   
> -	lo = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->data0);
> -	hi = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->data1);
> +	lo = amdgpu_ib_get_value(ctx->ib, ctx->data0);
> +	hi = amdgpu_ib_get_value(ctx->ib, ctx->data1);
>   	addr = ((uint64_t)lo) | (((uint64_t)hi) << 32);
>   
>   	return addr;
> @@ -590,7 +590,7 @@ static int amdgpu_uvd_cs_pass1(struct amdgpu_uvd_cs_ctx *ctx)
>   
>   	if (!ctx->parser->adev->uvd.address_64_bit) {
>   		/* check if it's a message or feedback command */
> -		cmd = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->idx) >> 1;
> +		cmd = amdgpu_ib_get_value(ctx->ib, ctx->idx) >> 1;
>   		if (cmd == 0x0 || cmd == 0x3) {
>   			/* yes, force it into VRAM */
>   			uint32_t domain = AMDGPU_GEM_DOMAIN_VRAM;
> @@ -926,12 +926,10 @@ static int amdgpu_uvd_cs_pass2(struct amdgpu_uvd_cs_ctx *ctx)
>   	addr -= mapping->start * AMDGPU_GPU_PAGE_SIZE;
>   	start += addr;
>   
> -	amdgpu_set_ib_value(ctx->parser, ctx->ib_idx, ctx->data0,
> -			    lower_32_bits(start));
> -	amdgpu_set_ib_value(ctx->parser, ctx->ib_idx, ctx->data1,
> -			    upper_32_bits(start));
> +	amdgpu_ib_set_value(ctx->ib, ctx->data0, lower_32_bits(start));
> +	amdgpu_ib_set_value(ctx->ib, ctx->data1, upper_32_bits(start));
>   
> -	cmd = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->idx) >> 1;
> +	cmd = amdgpu_ib_get_value(ctx->ib, ctx->idx) >> 1;
>   	if (cmd < 0x4) {
>   		if ((end - start) < ctx->buf_sizes[cmd]) {
>   			DRM_ERROR("buffer (%d) to small (%d / %d)!\n", cmd,
> @@ -991,14 +989,13 @@ static int amdgpu_uvd_cs_pass2(struct amdgpu_uvd_cs_ctx *ctx)
>   static int amdgpu_uvd_cs_reg(struct amdgpu_uvd_cs_ctx *ctx,
>   			     int (*cb)(struct amdgpu_uvd_cs_ctx *ctx))
>   {
> -	struct amdgpu_ib *ib = &ctx->parser->job->ibs[ctx->ib_idx];
>   	int i, r;
>   
>   	ctx->idx++;
>   	for (i = 0; i <= ctx->count; ++i) {
>   		unsigned reg = ctx->reg + i;
>   
> -		if (ctx->idx >= ib->length_dw) {
> +		if (ctx->idx >= ctx->ib->length_dw) {
>   			DRM_ERROR("Register command after end of CS!\n");
>   			return -EINVAL;
>   		}
> @@ -1038,11 +1035,10 @@ static int amdgpu_uvd_cs_reg(struct amdgpu_uvd_cs_ctx *ctx,
>   static int amdgpu_uvd_cs_packets(struct amdgpu_uvd_cs_ctx *ctx,
>   				 int (*cb)(struct amdgpu_uvd_cs_ctx *ctx))
>   {
> -	struct amdgpu_ib *ib = &ctx->parser->job->ibs[ctx->ib_idx];
>   	int r;
>   
> -	for (ctx->idx = 0 ; ctx->idx < ib->length_dw; ) {
> -		uint32_t cmd = amdgpu_get_ib_value(ctx->parser, ctx->ib_idx, ctx->idx);
> +	for (ctx->idx = 0 ; ctx->idx < ctx->ib->length_dw; ) {
> +		uint32_t cmd = amdgpu_ib_get_value(ctx->ib, ctx->idx);
>   		unsigned type = CP_PACKET_GET_TYPE(cmd);
>   		switch (type) {
>   		case PACKET_TYPE0:
> @@ -1067,11 +1063,14 @@ static int amdgpu_uvd_cs_packets(struct amdgpu_uvd_cs_ctx *ctx,
>    * amdgpu_uvd_ring_parse_cs - UVD command submission parser
>    *
>    * @parser: Command submission parser context
> - * @ib_idx: Which indirect buffer to use
> + * @job: the job to parse
> + * @ib: the IB to patch
>    *
>    * Parse the command stream, patch in addresses as necessary.
>    */
> -int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser, uint32_t ib_idx)
> +int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser,
> +			     struct amdgpu_job *job,
> +			     struct amdgpu_ib *ib)
>   {
>   	struct amdgpu_uvd_cs_ctx ctx = {};
>   	unsigned buf_sizes[] = {
> @@ -1081,10 +1080,9 @@ int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser, uint32_t ib_idx)
>   		[0x00000003]	=	2048,
>   		[0x00000004]	=	0xFFFFFFFF,
>   	};
> -	struct amdgpu_ib *ib = &parser->job->ibs[ib_idx];
>   	int r;
>   
> -	parser->job->vm = NULL;
> +	job->vm = NULL;
>   	ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo);
>   
>   	if (ib->length_dw % 16) {
> @@ -1095,7 +1093,7 @@ int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser, uint32_t ib_idx)
>   
>   	ctx.parser = parser;
>   	ctx.buf_sizes = buf_sizes;
> -	ctx.ib_idx = ib_idx;
> +	ctx.ib = ib;
>   
>   	/* first round only required on chips without UVD 64 bit address support */
>   	if (!parser->adev->uvd.address_64_bit) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
> index 76ac9699885d..9f89bb7cd60b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
> @@ -82,7 +82,9 @@ int amdgpu_uvd_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
>   			       bool direct, struct dma_fence **fence);
>   void amdgpu_uvd_free_handles(struct amdgpu_device *adev,
>   			     struct drm_file *filp);
> -int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser, uint32_t ib_idx);
> +int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser,
> +			     struct amdgpu_job *job,
> +			     struct amdgpu_ib *ib);
>   void amdgpu_uvd_ring_begin_use(struct amdgpu_ring *ring);
>   void amdgpu_uvd_ring_end_use(struct amdgpu_ring *ring);
>   int amdgpu_uvd_ring_test_ib(struct amdgpu_ring *ring, long timeout);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> index 6179230b6c6e..02cb3a12dd76 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> @@ -588,8 +588,7 @@ static int amdgpu_vce_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
>   /**
>    * amdgpu_vce_validate_bo - make sure not to cross 4GB boundary
>    *
> - * @p: parser context
> - * @ib_idx: indirect buffer to use
> + * @ib: indirect buffer to use
>    * @lo: address of lower dword
>    * @hi: address of higher dword
>    * @size: minimum size
> @@ -597,8 +596,9 @@ static int amdgpu_vce_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
>    *
>    * Make sure that no BO cross a 4GB boundary.
>    */
> -static int amdgpu_vce_validate_bo(struct amdgpu_cs_parser *p, uint32_t ib_idx,
> -				  int lo, int hi, unsigned size, int32_t index)
> +static int amdgpu_vce_validate_bo(struct amdgpu_cs_parser *p,
> +				  struct amdgpu_ib *ib, int lo, int hi,
> +				  unsigned size, int32_t index)
>   {
>   	int64_t offset = ((uint64_t)size) * ((int64_t)index);
>   	struct ttm_operation_ctx ctx = { false, false };
> @@ -608,8 +608,8 @@ static int amdgpu_vce_validate_bo(struct amdgpu_cs_parser *p, uint32_t ib_idx,
>   	uint64_t addr;
>   	int r;
>   
> -	addr = ((uint64_t)amdgpu_get_ib_value(p, ib_idx, lo)) |
> -	       ((uint64_t)amdgpu_get_ib_value(p, ib_idx, hi)) << 32;
> +	addr = ((uint64_t)amdgpu_ib_get_value(ib, lo)) |
> +	       ((uint64_t)amdgpu_ib_get_value(ib, hi)) << 32;
>   	if (index >= 0) {
>   		addr += offset;
>   		fpfn = PAGE_ALIGN(offset) >> PAGE_SHIFT;
> @@ -639,7 +639,7 @@ static int amdgpu_vce_validate_bo(struct amdgpu_cs_parser *p, uint32_t ib_idx,
>    * amdgpu_vce_cs_reloc - command submission relocation
>    *
>    * @p: parser context
> - * @ib_idx: indirect buffer to use
> + * @ib: indirect buffer to use
>    * @lo: address of lower dword
>    * @hi: address of higher dword
>    * @size: minimum size
> @@ -647,7 +647,7 @@ static int amdgpu_vce_validate_bo(struct amdgpu_cs_parser *p, uint32_t ib_idx,
>    *
>    * Patch relocation inside command stream with real buffer address
>    */
> -static int amdgpu_vce_cs_reloc(struct amdgpu_cs_parser *p, uint32_t ib_idx,
> +static int amdgpu_vce_cs_reloc(struct amdgpu_cs_parser *p, struct amdgpu_ib *ib,
>   			       int lo, int hi, unsigned size, uint32_t index)
>   {
>   	struct amdgpu_bo_va_mapping *mapping;
> @@ -658,8 +658,8 @@ static int amdgpu_vce_cs_reloc(struct amdgpu_cs_parser *p, uint32_t ib_idx,
>   	if (index == 0xffffffff)
>   		index = 0;
>   
> -	addr = ((uint64_t)amdgpu_get_ib_value(p, ib_idx, lo)) |
> -	       ((uint64_t)amdgpu_get_ib_value(p, ib_idx, hi)) << 32;
> +	addr = ((uint64_t)amdgpu_ib_get_value(ib, lo)) |
> +	       ((uint64_t)amdgpu_ib_get_value(ib, hi)) << 32;
>   	addr += ((uint64_t)size) * ((uint64_t)index);
>   
>   	r = amdgpu_cs_find_mapping(p, addr, &bo, &mapping);
> @@ -680,8 +680,8 @@ static int amdgpu_vce_cs_reloc(struct amdgpu_cs_parser *p, uint32_t ib_idx,
>   	addr += amdgpu_bo_gpu_offset(bo);
>   	addr -= ((uint64_t)size) * ((uint64_t)index);
>   
> -	amdgpu_set_ib_value(p, ib_idx, lo, lower_32_bits(addr));
> -	amdgpu_set_ib_value(p, ib_idx, hi, upper_32_bits(addr));
> +	amdgpu_ib_set_value(ib, lo, lower_32_bits(addr));
> +	amdgpu_ib_set_value(ib, hi, upper_32_bits(addr));
>   
>   	return 0;
>   }
> @@ -730,11 +730,13 @@ static int amdgpu_vce_validate_handle(struct amdgpu_cs_parser *p,
>    * amdgpu_vce_ring_parse_cs - parse and validate the command stream
>    *
>    * @p: parser context
> - * @ib_idx: indirect buffer to use
> + * @job: the job to parse
> + * @ib: the IB to patch
>    */
> -int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
> +int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p,
> +			     struct amdgpu_job *job,
> +			     struct amdgpu_ib *ib)
>   {
> -	struct amdgpu_ib *ib = &p->job->ibs[ib_idx];
>   	unsigned fb_idx = 0, bs_idx = 0;
>   	int session_idx = -1;
>   	uint32_t destroyed = 0;
> @@ -745,12 +747,12 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
>   	unsigned idx;
>   	int i, r = 0;
>   
> -	p->job->vm = NULL;
> +	job->vm = NULL;
>   	ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo);
>   
>   	for (idx = 0; idx < ib->length_dw;) {
> -		uint32_t len = amdgpu_get_ib_value(p, ib_idx, idx);
> -		uint32_t cmd = amdgpu_get_ib_value(p, ib_idx, idx + 1);
> +		uint32_t len = amdgpu_ib_get_value(ib, idx);
> +		uint32_t cmd = amdgpu_ib_get_value(ib, idx + 1);
>   
>   		if ((len < 8) || (len & 3)) {
>   			DRM_ERROR("invalid VCE command length (%d)!\n", len);
> @@ -760,52 +762,52 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
>   
>   		switch (cmd) {
>   		case 0x00000002: /* task info */
> -			fb_idx = amdgpu_get_ib_value(p, ib_idx, idx + 6);
> -			bs_idx = amdgpu_get_ib_value(p, ib_idx, idx + 7);
> +			fb_idx = amdgpu_ib_get_value(ib, idx + 6);
> +			bs_idx = amdgpu_ib_get_value(ib, idx + 7);
>   			break;
>   
>   		case 0x03000001: /* encode */
> -			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 10,
> -						   idx + 9, 0, 0);
> +			r = amdgpu_vce_validate_bo(p, ib, idx + 10, idx + 9,
> +						   0, 0);
>   			if (r)
>   				goto out;
>   
> -			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 12,
> -						   idx + 11, 0, 0);
> +			r = amdgpu_vce_validate_bo(p, ib, idx + 12, idx + 11,
> +						   0, 0);
>   			if (r)
>   				goto out;
>   			break;
>   
>   		case 0x05000001: /* context buffer */
> -			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 3,
> -						   idx + 2, 0, 0);
> +			r = amdgpu_vce_validate_bo(p, ib, idx + 3, idx + 2,
> +						   0, 0);
>   			if (r)
>   				goto out;
>   			break;
>   
>   		case 0x05000004: /* video bitstream buffer */
> -			tmp = amdgpu_get_ib_value(p, ib_idx, idx + 4);
> -			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 3, idx + 2,
> +			tmp = amdgpu_ib_get_value(ib, idx + 4);
> +			r = amdgpu_vce_validate_bo(p, ib, idx + 3, idx + 2,
>   						   tmp, bs_idx);
>   			if (r)
>   				goto out;
>   			break;
>   
>   		case 0x05000005: /* feedback buffer */
> -			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 3, idx + 2,
> +			r = amdgpu_vce_validate_bo(p, ib, idx + 3, idx + 2,
>   						   4096, fb_idx);
>   			if (r)
>   				goto out;
>   			break;
>   
>   		case 0x0500000d: /* MV buffer */
> -			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 3,
> -							idx + 2, 0, 0);
> +			r = amdgpu_vce_validate_bo(p, ib, idx + 3, idx + 2,
> +						   0, 0);
>   			if (r)
>   				goto out;
>   
> -			r = amdgpu_vce_validate_bo(p, ib_idx, idx + 8,
> -							idx + 7, 0, 0);
> +			r = amdgpu_vce_validate_bo(p, ib, idx + 8, idx + 7,
> +						   0, 0);
>   			if (r)
>   				goto out;
>   			break;
> @@ -815,12 +817,12 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
>   	}
>   
>   	for (idx = 0; idx < ib->length_dw;) {
> -		uint32_t len = amdgpu_get_ib_value(p, ib_idx, idx);
> -		uint32_t cmd = amdgpu_get_ib_value(p, ib_idx, idx + 1);
> +		uint32_t len = amdgpu_ib_get_value(ib, idx);
> +		uint32_t cmd = amdgpu_ib_get_value(ib, idx + 1);
>   
>   		switch (cmd) {
>   		case 0x00000001: /* session */
> -			handle = amdgpu_get_ib_value(p, ib_idx, idx + 2);
> +			handle = amdgpu_ib_get_value(ib, idx + 2);
>   			session_idx = amdgpu_vce_validate_handle(p, handle,
>   								 &allocated);
>   			if (session_idx < 0) {
> @@ -831,8 +833,8 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
>   			break;
>   
>   		case 0x00000002: /* task info */
> -			fb_idx = amdgpu_get_ib_value(p, ib_idx, idx + 6);
> -			bs_idx = amdgpu_get_ib_value(p, ib_idx, idx + 7);
> +			fb_idx = amdgpu_ib_get_value(ib, idx + 6);
> +			bs_idx = amdgpu_ib_get_value(ib, idx + 7);
>   			break;
>   
>   		case 0x01000001: /* create */
> @@ -847,8 +849,8 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
>   				goto out;
>   			}
>   
> -			*size = amdgpu_get_ib_value(p, ib_idx, idx + 8) *
> -				amdgpu_get_ib_value(p, ib_idx, idx + 10) *
> +			*size = amdgpu_ib_get_value(ib, idx + 8) *
> +				amdgpu_ib_get_value(ib, idx + 10) *
>   				8 * 3 / 2;
>   			break;
>   
> @@ -877,12 +879,12 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
>   			break;
>   
>   		case 0x03000001: /* encode */
> -			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 10, idx + 9,
> +			r = amdgpu_vce_cs_reloc(p, ib, idx + 10, idx + 9,
>   						*size, 0);
>   			if (r)
>   				goto out;
>   
> -			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 12, idx + 11,
> +			r = amdgpu_vce_cs_reloc(p, ib, idx + 12, idx + 11,
>   						*size / 3, 0);
>   			if (r)
>   				goto out;
> @@ -893,35 +895,35 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
>   			break;
>   
>   		case 0x05000001: /* context buffer */
> -			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 3, idx + 2,
> +			r = amdgpu_vce_cs_reloc(p, ib, idx + 3, idx + 2,
>   						*size * 2, 0);
>   			if (r)
>   				goto out;
>   			break;
>   
>   		case 0x05000004: /* video bitstream buffer */
> -			tmp = amdgpu_get_ib_value(p, ib_idx, idx + 4);
> -			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 3, idx + 2,
> +			tmp = amdgpu_ib_get_value(ib, idx + 4);
> +			r = amdgpu_vce_cs_reloc(p, ib, idx + 3, idx + 2,
>   						tmp, bs_idx);
>   			if (r)
>   				goto out;
>   			break;
>   
>   		case 0x05000005: /* feedback buffer */
> -			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 3, idx + 2,
> +			r = amdgpu_vce_cs_reloc(p, ib, idx + 3, idx + 2,
>   						4096, fb_idx);
>   			if (r)
>   				goto out;
>   			break;
>   
>   		case 0x0500000d: /* MV buffer */
> -			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 3,
> -							idx + 2, *size, 0);
> +			r = amdgpu_vce_cs_reloc(p, ib, idx + 3,
> +						idx + 2, *size, 0);
>   			if (r)
>   				goto out;
>   
> -			r = amdgpu_vce_cs_reloc(p, ib_idx, idx + 8,
> -							idx + 7, *size / 12, 0);
> +			r = amdgpu_vce_cs_reloc(p, ib, idx + 8,
> +						idx + 7, *size / 12, 0);
>   			if (r)
>   				goto out;
>   			break;
> @@ -966,11 +968,13 @@ int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx)
>    * amdgpu_vce_ring_parse_cs_vm - parse the command stream in VM mode
>    *
>    * @p: parser context
> - * @ib_idx: indirect buffer to use
> + * @job: the job to parse
> + * @ib: the IB to patch
>    */
> -int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p, uint32_t ib_idx)
> +int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p,
> +				struct amdgpu_job *job,
> +				struct amdgpu_ib *ib)
>   {
> -	struct amdgpu_ib *ib = &p->job->ibs[ib_idx];
>   	int session_idx = -1;
>   	uint32_t destroyed = 0;
>   	uint32_t created = 0;
> @@ -979,8 +983,8 @@ int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p, uint32_t ib_idx)
>   	int i, r = 0, idx = 0;
>   
>   	while (idx < ib->length_dw) {
> -		uint32_t len = amdgpu_get_ib_value(p, ib_idx, idx);
> -		uint32_t cmd = amdgpu_get_ib_value(p, ib_idx, idx + 1);
> +		uint32_t len = amdgpu_ib_get_value(ib, idx);
> +		uint32_t cmd = amdgpu_ib_get_value(ib, idx + 1);
>   
>   		if ((len < 8) || (len & 3)) {
>   			DRM_ERROR("invalid VCE command length (%d)!\n", len);
> @@ -990,7 +994,7 @@ int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p, uint32_t ib_idx)
>   
>   		switch (cmd) {
>   		case 0x00000001: /* session */
> -			handle = amdgpu_get_ib_value(p, ib_idx, idx + 2);
> +			handle = amdgpu_ib_get_value(ib, idx + 2);
>   			session_idx = amdgpu_vce_validate_handle(p, handle,
>   								 &allocated);
>   			if (session_idx < 0) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
> index be4a6e773c5b..ea680fc9a6c3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
> @@ -59,8 +59,11 @@ int amdgpu_vce_entity_init(struct amdgpu_device *adev);
>   int amdgpu_vce_suspend(struct amdgpu_device *adev);
>   int amdgpu_vce_resume(struct amdgpu_device *adev);
>   void amdgpu_vce_free_handles(struct amdgpu_device *adev, struct drm_file *filp);
> -int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, uint32_t ib_idx);
> -int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p, uint32_t ib_idx);
> +int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
> +			     struct amdgpu_ib *ib);
> +int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p,
> +				struct amdgpu_job *job,
> +				struct amdgpu_ib *ib);
>   void amdgpu_vce_ring_emit_ib(struct amdgpu_ring *ring, struct amdgpu_job *job,
>   				struct amdgpu_ib *ib, uint32_t flags);
>   void amdgpu_vce_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> index 7afa660e341c..2f15b8e0f7d7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> @@ -1276,14 +1276,15 @@ static int uvd_v7_0_ring_test_ring(struct amdgpu_ring *ring)
>    * uvd_v7_0_ring_patch_cs_in_place - Patch the IB for command submission.
>    *
>    * @p: the CS parser with the IBs
> - * @ib_idx: which IB to patch
> + * @job: which job this ib is in
> + * @ib: which IB to patch
>    *
>    */
>   static int uvd_v7_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
> -					   uint32_t ib_idx)
> +					   struct amdgpu_job *job,
> +					   struct amdgpu_ib *ib)
>   {
> -	struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
> -	struct amdgpu_ib *ib = &p->job->ibs[ib_idx];
> +	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
>   	unsigned i;
>   
>   	/* No patching necessary for the first instance */
> @@ -1291,12 +1292,12 @@ static int uvd_v7_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
>   		return 0;
>   
>   	for (i = 0; i < ib->length_dw; i += 2) {
> -		uint32_t reg = amdgpu_get_ib_value(p, ib_idx, i);
> +		uint32_t reg = amdgpu_ib_get_value(ib, i);
>   
>   		reg -= p->adev->reg_offset[UVD_HWIP][0][1];
>   		reg += p->adev->reg_offset[UVD_HWIP][1][1];
>   
> -		amdgpu_set_ib_value(p, ib_idx, i, reg);
> +		amdgpu_ib_set_value(ib, i, reg);
>   	}
>   	return 0;
>   }
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> index 2bb75fdb9571..5f9ad129464f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> @@ -1807,21 +1807,23 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_sw_ring_vm_funcs = {
>   	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
>   };
>   
> -static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p)
> +static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
> +				struct amdgpu_job *job)
>   {
>   	struct drm_gpu_scheduler **scheds;
>   
>   	/* The create msg must be in the first IB submitted */
> -	if (atomic_read(&p->entity->fence_seq))
> +	if (atomic_read(&job->base.entity->fence_seq))
>   		return -EINVAL;
>   
>   	scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_DEC]
>   		[AMDGPU_RING_PRIO_DEFAULT].sched;
> -	drm_sched_entity_modify_sched(p->entity, scheds, 1);
> +	drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
>   	return 0;
>   }
>   
> -static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr)
> +static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
> +			    uint64_t addr)
>   {
>   	struct ttm_operation_ctx ctx = { false, false };
>   	struct amdgpu_bo_va_mapping *map;
> @@ -1892,7 +1894,7 @@ static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr)
>   		if (create[0] == 0x7 || create[0] == 0x10 || create[0] == 0x11)
>   			continue;
>   
> -		r = vcn_v3_0_limit_sched(p);
> +		r = vcn_v3_0_limit_sched(p, job);
>   		if (r)
>   			goto out;
>   	}
> @@ -1903,10 +1905,10 @@ static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr)
>   }
>   
>   static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
> -					   uint32_t ib_idx)
> +					   struct amdgpu_job *job,
> +					   struct amdgpu_ib *ib)
>   {
> -	struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
> -	struct amdgpu_ib *ib = &p->job->ibs[ib_idx];
> +	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
>   	uint32_t msg_lo = 0, msg_hi = 0;
>   	unsigned i;
>   	int r;
> @@ -1916,8 +1918,8 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
>   		return 0;
>   
>   	for (i = 0; i < ib->length_dw; i += 2) {
> -		uint32_t reg = amdgpu_get_ib_value(p, ib_idx, i);
> -		uint32_t val = amdgpu_get_ib_value(p, ib_idx, i + 1);
> +		uint32_t reg = amdgpu_ib_get_value(ib, i);
> +		uint32_t val = amdgpu_ib_get_value(ib, i + 1);
>   
>   		if (reg == PACKET0(p->adev->vcn.internal.data0, 0)) {
>   			msg_lo = val;
> @@ -1925,7 +1927,8 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
>   			msg_hi = val;
>   		} else if (reg == PACKET0(p->adev->vcn.internal.cmd, 0) &&
>   			   val == 0) {
> -			r = vcn_v3_0_dec_msg(p, ((u64)msg_hi) << 32 | msg_lo);
> +			r = vcn_v3_0_dec_msg(p, job,
> +					     ((u64)msg_hi) << 32 | msg_lo);
>   			if (r)
>   				return r;
>   		}

^ permalink raw reply	[flat|nested] 27+ messages in thread
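
The conversion above is mostly mechanical, but the shape matters: instead
of every parser resolving p->job->ibs[ib_idx] itself, the caller resolves
the job and IB once and the helpers operate on the buffer directly, which
is what later allows more than one job per parser. A toy reduction of the
new calling convention (made-up packet layout, not real UVD/VCE semantics):

#include <stdint.h>
#include <stdio.h>

struct ib {
	uint32_t words[4];
};

static uint32_t ib_get_value(const struct ib *ib, int idx)
{
	return ib->words[idx];
}

static void ib_set_value(struct ib *ib, int idx, uint32_t value)
{
	ib->words[idx] = value;
}

/* The parse callback receives the IB it should patch; no index lookup
 * through global parser state is required. */
static int patch_ib(struct ib *ib)
{
	/* e.g. rebase a register offset, as uvd_v7_0 does */
	ib_set_value(ib, 0, ib_get_value(ib, 0) + 0x100);
	return 0;
}

int main(void)
{
	struct ib ib = { .words = { 0x10, 0, 0, 0 } };

	patch_ib(&ib);
	printf("0x%x\n", ib_get_value(&ib, 0)); /* prints 0x110 */
	return 0;
}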

* Re: [PATCH 06/10] drm/amdgpu: properly imbed the IBs into the job
  2022-03-03  8:23 ` [PATCH 06/10] drm/amdgpu: properly imbed the IBs into the job Christian König
@ 2022-03-03 20:25   ` Andrey Grodzovsky
  0 siblings, 0 replies; 27+ messages in thread
From: Andrey Grodzovsky @ 2022-03-03 20:25 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak; +Cc: Christian König

Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Andrey

On 2022-03-03 03:23, Christian König wrote:
> We now have standard macros for that.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 7 +------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 6 ++++--
>   2 files changed, 5 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 38c9fd7b7ad4..e4ca62225996 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -78,14 +78,10 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
>   int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
>   		     struct amdgpu_job **job, struct amdgpu_vm *vm)
>   {
> -	size_t size = sizeof(struct amdgpu_job);
> -
>   	if (num_ibs == 0)
>   		return -EINVAL;
>   
> -	size += sizeof(struct amdgpu_ib) * num_ibs;
> -
> -	*job = kzalloc(size, GFP_KERNEL);
> +	*job = kzalloc(struct_size(*job, ibs, num_ibs), GFP_KERNEL);
>   	if (!*job)
>   		return -ENOMEM;
>   
> @@ -95,7 +91,6 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
>   	 */
>   	(*job)->base.sched = &adev->rings[0]->sched;
>   	(*job)->vm = vm;
> -	(*job)->ibs = (void *)&(*job)[1];
>   	(*job)->num_ibs = num_ibs;
>   
>   	amdgpu_sync_create(&(*job)->sync);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index 6d704772ff42..d599c0540b46 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -25,6 +25,7 @@
>   
>   #include <drm/gpu_scheduler.h>
>   #include "amdgpu_sync.h"
> +#include "amdgpu_ring.h"
>   
>   /* bit set means command submit involves a preamble IB */
>   #define AMDGPU_PREAMBLE_IB_PRESENT          (1 << 0)
> @@ -48,12 +49,10 @@ struct amdgpu_job {
>   	struct amdgpu_vm	*vm;
>   	struct amdgpu_sync	sync;
>   	struct amdgpu_sync	sched_sync;
> -	struct amdgpu_ib	*ibs;
>   	struct dma_fence	hw_fence;
>   	struct dma_fence	*external_hw_fence;
>   	uint32_t		preamble_status;
>   	uint32_t                preemption_status;
> -	uint32_t		num_ibs;
>   	bool                    vm_needs_flush;
>   	uint64_t		vm_pd_addr;
>   	unsigned		vmid;
> @@ -69,6 +68,9 @@ struct amdgpu_job {
>   
>   	/* job_run_counter >= 1 means a resubmit job */
>   	uint32_t		job_run_counter;
> +
> +	uint32_t		num_ibs;
> +	struct amdgpu_ib	ibs[];
>   };
>   
>   int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,

^ permalink raw reply	[flat|nested] 27+ messages in thread
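
For context, the conversion above is the standard flexible-array-member
idiom: the trailing ibs[] array is allocated in one piece with the job,
and struct_size() computes the total size with overflow protection. A
generic, self-contained sketch of the idiom (example types, not the
amdgpu code):

    #include <linux/overflow.h>
    #include <linux/slab.h>

    struct item {
            unsigned int payload;
    };

    struct container {
            unsigned int    num_items;
            struct item     items[];        /* must be the last member */
    };

    static struct container *container_alloc(unsigned int num_items)
    {
            struct container *c;

            /* struct_size() saturates instead of overflowing on huge
             * num_items values. */
            c = kzalloc(struct_size(c, items, num_items), GFP_KERNEL);
            if (c)
                    c->num_items = num_items;
            return c;
    }

Compared with the removed "(*job)->ibs = (void *)&(*job)[1]" cast, the
compiler now knows the type and placement of the trailing array.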

* Re: [PATCH 08/10] drm/amdgpu: initialize the vmid_wait with the stub fence
  2022-03-03  8:23 ` [PATCH 08/10] drm/amdgpu: initialize the vmid_wait with the stub fence Christian König
@ 2022-03-03 20:31   ` Andrey Grodzovsky
  0 siblings, 0 replies; 27+ messages in thread
From: Andrey Grodzovsky @ 2022-03-03 20:31 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak; +Cc: Christian König

Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Andrey

On 2022-03-03 03:23, Christian König wrote:
> This way we don't need to check for NULL any more.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c  | 2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 1 +
>   2 files changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> index ddf46802b1ff..4ba4b54092f1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> @@ -188,7 +188,7 @@ static int amdgpu_vmid_grab_idle(struct amdgpu_vm *vm,
>   	unsigned i;
>   	int r;
>   
> -	if (ring->vmid_wait && !dma_fence_is_signaled(ring->vmid_wait))
> +	if (!dma_fence_is_signaled(ring->vmid_wait))
>   		return amdgpu_sync_fence(sync, ring->vmid_wait);
>   
>   	fences = kmalloc_array(id_mgr->num_ids, sizeof(void *), GFP_KERNEL);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> index 35bcb6dc1816..7f33ae87cb41 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> @@ -193,6 +193,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
>   		adev->rings[ring->idx] = ring;
>   		ring->num_hw_submission = sched_hw_submission;
>   		ring->sched_score = sched_score;
> +		ring->vmid_wait = dma_fence_get_stub();
>   		r = amdgpu_fence_driver_init_ring(ring);
>   		if (r)
>   			return r;

^ permalink raw reply	[flat|nested] 27+ messages in thread
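
The pattern here is a signaled sentinel instead of a NULL pointer:
dma_fence_get_stub() returns a reference to a global, already signaled
fence, so every reader of ring->vmid_wait can dereference it
unconditionally and dma_fence_is_signaled() simply returns true until a
real fence is installed. A minimal sketch of the idea (handle_unsignaled()
is a placeholder, not a real API):

    struct dma_fence *wait = dma_fence_get_stub();  /* never NULL */

    /* readers need no NULL check ... */
    if (!dma_fence_is_signaled(wait))
            return handle_unsignaled(wait);

    /* ... and writers just swap the reference */
    dma_fence_put(wait);
    wait = dma_fence_get(new_fence);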

* Re: [PATCH 09/10] drm/amdgpu: add gang submit backend
  2022-03-03  8:23 ` [PATCH 09/10] drm/amdgpu: add gang submit backend Christian König
@ 2022-03-04 17:10   ` Andrey Grodzovsky
  2022-03-05 18:40     ` Christian König
  0 siblings, 1 reply; 27+ messages in thread
From: Andrey Grodzovsky @ 2022-03-04 17:10 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak; +Cc: Christian König


On 2022-03-03 03:23, Christian König wrote:
> Allows submitting jobs as a gang which needs to run on multiple
> engines at the same time.
>
> Basic idea is that we have a global gang submit fence representing when the
> gang leader is finally pushed to run on the hardware last.
>
> Jobs submitted as a gang are never re-submitted in case of a GPU reset since this
> won't work and would just deadlock the hardware immediately again.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h        |  3 ++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 34 ++++++++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c    | 28 ++++++++++++++++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h    |  3 ++
>   4 files changed, 66 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 7f447ed7a67f..a664d43d7502 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -852,6 +852,7 @@ struct amdgpu_device {
>   	u64				fence_context;
>   	unsigned			num_rings;
>   	struct amdgpu_ring		*rings[AMDGPU_MAX_RINGS];
> +	struct dma_fence __rcu		*gang_submit;
>   	bool				ib_pool_ready;
>   	struct amdgpu_sa_manager	ib_pools[AMDGPU_IB_POOL_MAX];
>   	struct amdgpu_sched		gpu_sched[AMDGPU_HW_IP_NUM][AMDGPU_RING_PRIO_MAX];
> @@ -1233,6 +1234,8 @@ void amdgpu_device_invalidate_hdp(struct amdgpu_device *adev,
>   		struct amdgpu_ring *ring);
>   
>   void amdgpu_device_halt(struct amdgpu_device *adev);
> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
> +					    struct dma_fence *gang);
>   
>   /* atpx handler */
>   #if defined(CONFIG_VGA_SWITCHEROO)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index d78141e2c509..a116b8c08827 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -3512,6 +3512,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>   	adev->gmc.gart_size = 512 * 1024 * 1024;
>   	adev->accel_working = false;
>   	adev->num_rings = 0;
> +	RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
>   	adev->mman.buffer_funcs = NULL;
>   	adev->mman.buffer_funcs_ring = NULL;
>   	adev->vm_manager.vm_pte_funcs = NULL;
> @@ -3989,6 +3990,7 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
>   	release_firmware(adev->firmware.gpu_info_fw);
>   	adev->firmware.gpu_info_fw = NULL;
>   	adev->accel_working = false;
> +	dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
>   
>   	amdgpu_reset_fini(adev);
>   
> @@ -5744,3 +5746,35 @@ void amdgpu_device_halt(struct amdgpu_device *adev)
>   	pci_disable_device(pdev);
>   	pci_wait_for_pending_transaction(pdev);
>   }
> +
> +/**
> + * amdgpu_device_switch_gang - switch to a new gang
> + * @adev: amdgpu_device pointer
> + * @gang: the gang to switch to
> + *
> + * Try to switch to a new gang or return a reference to the current gang if that
> + * isn't possible.
> + * Returns: Either NULL if we switched correctly or a reference to the existing
> + * gang.
> + */
> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
> +					    struct dma_fence *gang)
> +{
> +	struct dma_fence *old = NULL;
> +
> +	do {
> +		dma_fence_put(old);
> +		old = dma_fence_get_rcu_safe(&adev->gang_submit);
> +
> +		if (old == gang)
> +			break;
> +
> +		if (!dma_fence_is_signaled(old))
> +			return old;
> +
> +	} while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
> +			 old, gang) != old);
> +
> +	dma_fence_put(old);
> +	return NULL;
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index e07ceae36a5c..059e11c7898c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -169,11 +169,29 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
>   		kfree(job);
>   }
>   
> +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
> +				struct amdgpu_job *leader)
> +{
> +	struct dma_fence *fence = &leader->base.s_fence->scheduled;
> +
> +	WARN_ON(job->gang_submit);
> +
> +	/*
> +	 * Don't add a reference when we are the gang leader to avoid circle
> +	 * dependency.
> +	 */
> +	if (job != leader)
> +		dma_fence_get(fence);
> +	job->gang_submit = fence;
> +}
> +
>   void amdgpu_job_free(struct amdgpu_job *job)
>   {
>   	amdgpu_job_free_resources(job);
>   	amdgpu_sync_free(&job->sync);
>   	amdgpu_sync_free(&job->sched_sync);
> +	if (job->gang_submit != &job->base.s_fence->scheduled)
> +		dma_fence_put(job->gang_submit);
>   
>   	/* only put the hw fence if has embedded fence */
>   	if (job->hw_fence.ops != NULL)
> @@ -247,12 +265,16 @@ static struct dma_fence *amdgpu_job_dependency(struct drm_sched_job *sched_job,
>   		fence = amdgpu_sync_get_fence(&job->sync);
>   	}
>   
> +	if (!fence && !job->gang_submit)
> +		fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);
> +


Why should job->gang_submit be NULL in the check above? Don't you want
to switch to an actual new gang fence here?
Jobs that don't have a gang_submit fence set are not gang jobs anyway,
and we don't care about this dependency for them, right?

Andrey


>   	return fence;
>   }
>   
>   static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
>   {
>   	struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched);
> +	struct amdgpu_device *adev = ring->adev;
>   	struct dma_fence *fence = NULL, *finished;
>   	struct amdgpu_job *job;
>   	int r = 0;
> @@ -264,8 +286,10 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
>   
>   	trace_amdgpu_sched_run_job(job);
>   
> -	if (job->vram_lost_counter != atomic_read(&ring->adev->vram_lost_counter))
> -		dma_fence_set_error(finished, -ECANCELED);/* skip IB as well if VRAM lost */
> +	/* Skip job if VRAM is lost and never resubmit gangs */
> +	if (job->vram_lost_counter != atomic_read(&adev->vram_lost_counter) ||
> +	    (job->job_run_counter && job->gang_submit))
> +		dma_fence_set_error(finished, -ECANCELED);
>   
>   	if (finished->error < 0) {
>   		DRM_INFO("Skip scheduling IBs!\n");
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index 0bab8fe0d419..615328130615 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -51,6 +51,7 @@ struct amdgpu_job {
>   	struct amdgpu_sync	sched_sync;
>   	struct dma_fence	hw_fence;
>   	struct dma_fence	*external_hw_fence;
> +	struct dma_fence	*gang_submit;
>   	uint32_t		preamble_status;
>   	uint32_t                preemption_status;
>   	bool                    vm_needs_flush;
> @@ -80,6 +81,8 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
>   void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
>   			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>   void amdgpu_job_free_resources(struct amdgpu_job *job);
> +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
> +				struct amdgpu_job *leader);
>   void amdgpu_job_free(struct amdgpu_job *job);
>   int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
>   		      void *owner, struct dma_fence **f);

^ permalink raw reply	[flat|nested] 27+ messages in thread
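
For readers following this exchange: amdgpu_device_switch_gang() above is
a lock-free read/compare-exchange loop. It takes an RCU-safe reference to
the currently installed gang fence, returns that reference if the fence
has not signaled yet (the caller must then wait before retrying), and
otherwise tries to install the new gang with cmpxchg(), looping if another
thread won the race. A stripped-down sketch of the same control flow, with
get_gang_ref()/try_swap()/put_ref() as placeholders for
dma_fence_get_rcu_safe(), cmpxchg() and dma_fence_put() (an illustration,
not the driver code):

    old = get_gang_ref(&active_gang);
    for (;;) {
            if (old == new_gang)
                    break;          /* already the active gang */
            if (!fence_is_signaled(old))
                    return old;     /* caller has to wait on this */
            if (try_swap(&active_gang, old, new_gang))
                    break;          /* new gang installed */
            put_ref(old);           /* lost a race, reload and retry */
            old = get_gang_ref(&active_gang);
    }
    put_ref(old);
    return NULL;

Note also that amdgpu_job_set_gang_leader() deliberately takes no
reference when job == leader: the leader holding a reference to its own
scheduled fence would create a reference-count cycle.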

* Re: [PATCH 09/10] drm/amdgpu: add gang submit backend
  2022-03-04 17:10   ` Andrey Grodzovsky
@ 2022-03-05 18:40     ` Christian König
  2022-03-07 15:40       ` Andrey Grodzovsky
  0 siblings, 1 reply; 27+ messages in thread
From: Christian König @ 2022-03-05 18:40 UTC (permalink / raw)
  To: Andrey Grodzovsky, amd-gfx, Marek.Olsak; +Cc: Christian König

On 2022-03-04 18:10, Andrey Grodzovsky wrote:
>
> On 2022-03-03 03:23, Christian König wrote:
>> Allows submitting jobs as gang which needs to run on multiple
>> engines at the same time.
>>
>> Basic idea is that we have a global gang submit fence representing 
>> when the
>> gang leader is finally pushed to run on the hardware last.
>>
>> Jobs submitted as gang are never re-submitted in case of a GPU reset 
>> since this
>> won't work and will just deadlock the hardware immediately again.
>>
>> Signed-off-by: Christian König <christian.koenig@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h        |  3 ++
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 34 ++++++++++++++++++++++
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c    | 28 ++++++++++++++++--
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h    |  3 ++
>>   4 files changed, 66 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> index 7f447ed7a67f..a664d43d7502 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> @@ -852,6 +852,7 @@ struct amdgpu_device {
>>       u64                fence_context;
>>       unsigned            num_rings;
>>       struct amdgpu_ring        *rings[AMDGPU_MAX_RINGS];
>> +    struct dma_fence __rcu        *gang_submit;
>>       bool                ib_pool_ready;
>>       struct amdgpu_sa_manager    ib_pools[AMDGPU_IB_POOL_MAX];
>>       struct amdgpu_sched 
>> gpu_sched[AMDGPU_HW_IP_NUM][AMDGPU_RING_PRIO_MAX];
>> @@ -1233,6 +1234,8 @@ void amdgpu_device_invalidate_hdp(struct 
>> amdgpu_device *adev,
>>           struct amdgpu_ring *ring);
>>     void amdgpu_device_halt(struct amdgpu_device *adev);
>> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
>> +                        struct dma_fence *gang);
>>     /* atpx handler */
>>   #if defined(CONFIG_VGA_SWITCHEROO)
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> index d78141e2c509..a116b8c08827 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> @@ -3512,6 +3512,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>>       adev->gmc.gart_size = 512 * 1024 * 1024;
>>       adev->accel_working = false;
>>       adev->num_rings = 0;
>> +    RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
>>       adev->mman.buffer_funcs = NULL;
>>       adev->mman.buffer_funcs_ring = NULL;
>>       adev->vm_manager.vm_pte_funcs = NULL;
>> @@ -3989,6 +3990,7 @@ void amdgpu_device_fini_sw(struct amdgpu_device 
>> *adev)
>>       release_firmware(adev->firmware.gpu_info_fw);
>>       adev->firmware.gpu_info_fw = NULL;
>>       adev->accel_working = false;
>> + dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
>>         amdgpu_reset_fini(adev);
>>   @@ -5744,3 +5746,35 @@ void amdgpu_device_halt(struct amdgpu_device 
>> *adev)
>>       pci_disable_device(pdev);
>>       pci_wait_for_pending_transaction(pdev);
>>   }
>> +
>> +/**
>> + * amdgpu_device_switch_gang - switch to a new gang
>> + * @adev: amdgpu_device pointer
>> + * @gang: the gang to switch to
>> + *
>> + * Try to switch to a new gang or return a reference to the current 
>> gang if that
>> + * isn't possible.
>> + * Returns: Either NULL if we switched correctly or a reference to 
>> the existing
>> + * gang.
>> + */
>> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
>> +                        struct dma_fence *gang)
>> +{
>> +    struct dma_fence *old = NULL;
>> +
>> +    do {
>> +        dma_fence_put(old);
>> +        old = dma_fence_get_rcu_safe(&adev->gang_submit);
>> +
>> +        if (old == gang)
>> +            break;
>> +
>> +        if (!dma_fence_is_signaled(old))
>> +            return old;
>> +
>> +    } while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
>> +             old, gang) != old);
>> +
>> +    dma_fence_put(old);
>> +    return NULL;
>> +}
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> index e07ceae36a5c..059e11c7898c 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> @@ -169,11 +169,29 @@ static void amdgpu_job_free_cb(struct 
>> drm_sched_job *s_job)
>>           kfree(job);
>>   }
>>   +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
>> +                struct amdgpu_job *leader)
>> +{
>> +    struct dma_fence *fence = &leader->base.s_fence->scheduled;
>> +
>> +    WARN_ON(job->gang_submit);
>> +
>> +    /*
>> +     * Don't add a reference when we are the gang leader to avoid 
>> circle
>> +     * dependency.
>> +     */
>> +    if (job != leader)
>> +        dma_fence_get(fence);
>> +    job->gang_submit = fence;
>> +}
>> +
>>   void amdgpu_job_free(struct amdgpu_job *job)
>>   {
>>       amdgpu_job_free_resources(job);
>>       amdgpu_sync_free(&job->sync);
>>       amdgpu_sync_free(&job->sched_sync);
>> +    if (job->gang_submit != &job->base.s_fence->scheduled)
>> +        dma_fence_put(job->gang_submit);
>>         /* only put the hw fence if has embedded fence */
>>       if (job->hw_fence.ops != NULL)
>> @@ -247,12 +265,16 @@ static struct dma_fence 
>> *amdgpu_job_dependency(struct drm_sched_job *sched_job,
>>           fence = amdgpu_sync_get_fence(&job->sync);
>>       }
>>   +    if (!fence && !job->gang_submit)
>> +        fence = amdgpu_device_switch_gang(ring->adev, 
>> job->gang_submit);
>> +
>
>
> Why should job->gang_submit be NULL in the check above? Don't you
> want to switch to an actual new gang fence here?
> Jobs that don't have a gang_submit fence set are not gang jobs anyway,
> and we don't care about this dependency for them, right?

Well, exactly, that's the point. That a job is not part of a gang submit
is signaled by setting the pointer to NULL.

If we didn't check for NULL here we would just crash.

Christian.

>
> Andrey
>
>
>>       return fence;
>>   }
>>     static struct dma_fence *amdgpu_job_run(struct drm_sched_job 
>> *sched_job)
>>   {
>>       struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched);
>> +    struct amdgpu_device *adev = ring->adev;
>>       struct dma_fence *fence = NULL, *finished;
>>       struct amdgpu_job *job;
>>       int r = 0;
>> @@ -264,8 +286,10 @@ static struct dma_fence *amdgpu_job_run(struct 
>> drm_sched_job *sched_job)
>>         trace_amdgpu_sched_run_job(job);
>>   -    if (job->vram_lost_counter != 
>> atomic_read(&ring->adev->vram_lost_counter))
>> -        dma_fence_set_error(finished, -ECANCELED);/* skip IB as well 
>> if VRAM lost */
>> +    /* Skip job if VRAM is lost and never resubmit gangs */
>> +    if (job->vram_lost_counter != 
>> atomic_read(&adev->vram_lost_counter) ||
>> +        (job->job_run_counter && job->gang_submit))
>> +        dma_fence_set_error(finished, -ECANCELED);
>>         if (finished->error < 0) {
>>           DRM_INFO("Skip scheduling IBs!\n");
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>> index 0bab8fe0d419..615328130615 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>> @@ -51,6 +51,7 @@ struct amdgpu_job {
>>       struct amdgpu_sync    sched_sync;
>>       struct dma_fence    hw_fence;
>>       struct dma_fence    *external_hw_fence;
>> +    struct dma_fence    *gang_submit;
>>       uint32_t        preamble_status;
>>       uint32_t                preemption_status;
>>       bool                    vm_needs_flush;
>> @@ -80,6 +81,8 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device 
>> *adev, unsigned size,
>>   void amdgpu_job_set_resources(struct amdgpu_job *job, struct 
>> amdgpu_bo *gds,
>>                     struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>>   void amdgpu_job_free_resources(struct amdgpu_job *job);
>> +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
>> +                struct amdgpu_job *leader);
>>   void amdgpu_job_free(struct amdgpu_job *job);
>>   int amdgpu_job_submit(struct amdgpu_job *job, struct 
>> drm_sched_entity *entity,
>>                 void *owner, struct dma_fence **f);


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 09/10] drm/amdgpu: add gang submit backend
  2022-03-05 18:40     ` Christian König
@ 2022-03-07 15:40       ` Andrey Grodzovsky
  2022-03-07 15:59         ` Christian König
  0 siblings, 1 reply; 27+ messages in thread
From: Andrey Grodzovsky @ 2022-03-07 15:40 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak; +Cc: Christian König


On 2022-03-05 13:40, Christian König wrote:
> On 2022-03-04 18:10, Andrey Grodzovsky wrote:
>>
>> On 2022-03-03 03:23, Christian König wrote:
>>> Allows submitting jobs as gang which needs to run on multiple
>>> engines at the same time.
>>>
>>> Basic idea is that we have a global gang submit fence representing 
>>> when the
>>> gang leader is finally pushed to run on the hardware last.
>>>
>>> Jobs submitted as gang are never re-submitted in case of a GPU reset 
>>> since this
>>> won't work and will just deadlock the hardware immediately again.
>>>
>>> Signed-off-by: Christian König <christian.koenig@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h        |  3 ++
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 34 
>>> ++++++++++++++++++++++
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c    | 28 ++++++++++++++++--
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h    |  3 ++
>>>   4 files changed, 66 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> index 7f447ed7a67f..a664d43d7502 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> @@ -852,6 +852,7 @@ struct amdgpu_device {
>>>       u64                fence_context;
>>>       unsigned            num_rings;
>>>       struct amdgpu_ring        *rings[AMDGPU_MAX_RINGS];
>>> +    struct dma_fence __rcu        *gang_submit;
>>>       bool                ib_pool_ready;
>>>       struct amdgpu_sa_manager ib_pools[AMDGPU_IB_POOL_MAX];
>>>       struct amdgpu_sched 
>>> gpu_sched[AMDGPU_HW_IP_NUM][AMDGPU_RING_PRIO_MAX];
>>> @@ -1233,6 +1234,8 @@ void amdgpu_device_invalidate_hdp(struct 
>>> amdgpu_device *adev,
>>>           struct amdgpu_ring *ring);
>>>     void amdgpu_device_halt(struct amdgpu_device *adev);
>>> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device 
>>> *adev,
>>> +                        struct dma_fence *gang);
>>>     /* atpx handler */
>>>   #if defined(CONFIG_VGA_SWITCHEROO)
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> index d78141e2c509..a116b8c08827 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> @@ -3512,6 +3512,7 @@ int amdgpu_device_init(struct amdgpu_device 
>>> *adev,
>>>       adev->gmc.gart_size = 512 * 1024 * 1024;
>>>       adev->accel_working = false;
>>>       adev->num_rings = 0;
>>> +    RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
>>>       adev->mman.buffer_funcs = NULL;
>>>       adev->mman.buffer_funcs_ring = NULL;
>>>       adev->vm_manager.vm_pte_funcs = NULL;
>>> @@ -3989,6 +3990,7 @@ void amdgpu_device_fini_sw(struct 
>>> amdgpu_device *adev)
>>>       release_firmware(adev->firmware.gpu_info_fw);
>>>       adev->firmware.gpu_info_fw = NULL;
>>>       adev->accel_working = false;
>>> + dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
>>>         amdgpu_reset_fini(adev);
>>>   @@ -5744,3 +5746,35 @@ void amdgpu_device_halt(struct 
>>> amdgpu_device *adev)
>>>       pci_disable_device(pdev);
>>>       pci_wait_for_pending_transaction(pdev);
>>>   }
>>> +
>>> +/**
>>> + * amdgpu_device_switch_gang - switch to a new gang
>>> + * @adev: amdgpu_device pointer
>>> + * @gang: the gang to switch to
>>> + *
>>> + * Try to switch to a new gang or return a reference to the current 
>>> gang if that
>>> + * isn't possible.
>>> + * Returns: Either NULL if we switched correctly or a reference to 
>>> the existing
>>> + * gang.
>>> + */
>>> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device 
>>> *adev,
>>> +                        struct dma_fence *gang)
>>> +{
>>> +    struct dma_fence *old = NULL;
>>> +
>>> +    do {
>>> +        dma_fence_put(old);
>>> +        old = dma_fence_get_rcu_safe(&adev->gang_submit);
>>> +
>>> +        if (old == gang)
>>> +            break;
>>> +
>>> +        if (!dma_fence_is_signaled(old))
>>> +            return old;
>>> +
>>> +    } while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
>>> +             old, gang) != old);
>>> +
>>> +    dma_fence_put(old);
>>> +    return NULL;
>>> +}
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> index e07ceae36a5c..059e11c7898c 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> @@ -169,11 +169,29 @@ static void amdgpu_job_free_cb(struct 
>>> drm_sched_job *s_job)
>>>           kfree(job);
>>>   }
>>>   +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
>>> +                struct amdgpu_job *leader)
>>> +{
>>> +    struct dma_fence *fence = &leader->base.s_fence->scheduled;
>>> +
>>> +    WARN_ON(job->gang_submit);
>>> +
>>> +    /*
>>> +     * Don't add a reference when we are the gang leader to avoid 
>>> circle
>>> +     * dependency.
>>> +     */
>>> +    if (job != leader)
>>> +        dma_fence_get(fence);
>>> +    job->gang_submit = fence;
>>> +}
>>> +
>>>   void amdgpu_job_free(struct amdgpu_job *job)
>>>   {
>>>       amdgpu_job_free_resources(job);
>>>       amdgpu_sync_free(&job->sync);
>>>       amdgpu_sync_free(&job->sched_sync);
>>> +    if (job->gang_submit != &job->base.s_fence->scheduled)
>>> +        dma_fence_put(job->gang_submit);
>>>         /* only put the hw fence if has embedded fence */
>>>       if (job->hw_fence.ops != NULL)
>>> @@ -247,12 +265,16 @@ static struct dma_fence 
>>> *amdgpu_job_dependency(struct drm_sched_job *sched_job,
>>>           fence = amdgpu_sync_get_fence(&job->sync);
>>>       }
>>>   +    if (!fence && !job->gang_submit)
>>> +        fence = amdgpu_device_switch_gang(ring->adev, 
>>> job->gang_submit);
>>> +
>>
>>
>> Why job->gang_submit should be NULL in the check above ? Don't you 
>> want to switch to an actual new gang fence here ?
>> Jobs that don't have gang_submit fence set are not gang jobs anyway 
>> and we don't care for this dependency
>> for them right ?
>
> Well, exactly, that's the point. That a job is not part of a gang submit
> is signaled by setting the pointer to NULL.


No dispute on this


>
> If we didn't check for NULL here we would just crash.


But you go into the 'if clause' when job->gang_submit is equal to NULL. I
would expect to see here
if (!fence && *job->gang_submit*) because you want to switch to an actual
new gang (not to NULL).

Andrey


>
> Christian.
>
>>
>> Andrey
>>
>>
>>>       return fence;
>>>   }
>>>     static struct dma_fence *amdgpu_job_run(struct drm_sched_job 
>>> *sched_job)
>>>   {
>>>       struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched);
>>> +    struct amdgpu_device *adev = ring->adev;
>>>       struct dma_fence *fence = NULL, *finished;
>>>       struct amdgpu_job *job;
>>>       int r = 0;
>>> @@ -264,8 +286,10 @@ static struct dma_fence *amdgpu_job_run(struct 
>>> drm_sched_job *sched_job)
>>>         trace_amdgpu_sched_run_job(job);
>>>   -    if (job->vram_lost_counter != 
>>> atomic_read(&ring->adev->vram_lost_counter))
>>> -        dma_fence_set_error(finished, -ECANCELED);/* skip IB as 
>>> well if VRAM lost */
>>> +    /* Skip job if VRAM is lost and never resubmit gangs */
>>> +    if (job->vram_lost_counter != 
>>> atomic_read(&adev->vram_lost_counter) ||
>>> +        (job->job_run_counter && job->gang_submit))
>>> +        dma_fence_set_error(finished, -ECANCELED);
>>>         if (finished->error < 0) {
>>>           DRM_INFO("Skip scheduling IBs!\n");
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> index 0bab8fe0d419..615328130615 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> @@ -51,6 +51,7 @@ struct amdgpu_job {
>>>       struct amdgpu_sync    sched_sync;
>>>       struct dma_fence    hw_fence;
>>>       struct dma_fence    *external_hw_fence;
>>> +    struct dma_fence    *gang_submit;
>>>       uint32_t        preamble_status;
>>>       uint32_t                preemption_status;
>>>       bool                    vm_needs_flush;
>>> @@ -80,6 +81,8 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device 
>>> *adev, unsigned size,
>>>   void amdgpu_job_set_resources(struct amdgpu_job *job, struct 
>>> amdgpu_bo *gds,
>>>                     struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>>>   void amdgpu_job_free_resources(struct amdgpu_job *job);
>>> +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
>>> +                struct amdgpu_job *leader);
>>>   void amdgpu_job_free(struct amdgpu_job *job);
>>>   int amdgpu_job_submit(struct amdgpu_job *job, struct 
>>> drm_sched_entity *entity,
>>>                 void *owner, struct dma_fence **f);
>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 09/10] drm/amdgpu: add gang submit backend
  2022-03-07 15:40       ` Andrey Grodzovsky
@ 2022-03-07 15:59         ` Christian König
  2022-03-07 16:02           ` Andrey Grodzovsky
  0 siblings, 1 reply; 27+ messages in thread
From: Christian König @ 2022-03-07 15:59 UTC (permalink / raw)
  To: Andrey Grodzovsky, Christian König, amd-gfx, Marek.Olsak

On 2022-03-07 16:40, Andrey Grodzovsky wrote:
>
> On 2022-03-05 13:40, Christian König wrote:
>
>> On 2022-03-04 18:10, Andrey Grodzovsky wrote:
>>>
>>> On 2022-03-03 03:23, Christian König wrote:
>>>> Allows submitting jobs as gang which needs to run on multiple
>>>> engines at the same time.
>>>>
>>>> Basic idea is that we have a global gang submit fence representing 
>>>> when the
>>>> gang leader is finally pushed to run on the hardware last.
>>>>
>>>> Jobs submitted as gang are never re-submitted in case of a GPU 
>>>> reset since this
>>>> won't work and will just deadlock the hardware immediately again.
>>>>
>>>> Signed-off-by: Christian König <christian.koenig@amd.com>
>>>> ---
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h        |  3 ++
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 34 
>>>> ++++++++++++++++++++++
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c    | 28 ++++++++++++++++--
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h    |  3 ++
>>>>   4 files changed, 66 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>>> index 7f447ed7a67f..a664d43d7502 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>>> @@ -852,6 +852,7 @@ struct amdgpu_device {
>>>>       u64                fence_context;
>>>>       unsigned            num_rings;
>>>>       struct amdgpu_ring        *rings[AMDGPU_MAX_RINGS];
>>>> +    struct dma_fence __rcu        *gang_submit;
>>>>       bool                ib_pool_ready;
>>>>       struct amdgpu_sa_manager ib_pools[AMDGPU_IB_POOL_MAX];
>>>>       struct amdgpu_sched 
>>>> gpu_sched[AMDGPU_HW_IP_NUM][AMDGPU_RING_PRIO_MAX];
>>>> @@ -1233,6 +1234,8 @@ void amdgpu_device_invalidate_hdp(struct 
>>>> amdgpu_device *adev,
>>>>           struct amdgpu_ring *ring);
>>>>     void amdgpu_device_halt(struct amdgpu_device *adev);
>>>> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device 
>>>> *adev,
>>>> +                        struct dma_fence *gang);
>>>>     /* atpx handler */
>>>>   #if defined(CONFIG_VGA_SWITCHEROO)
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>>> index d78141e2c509..a116b8c08827 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>>> @@ -3512,6 +3512,7 @@ int amdgpu_device_init(struct amdgpu_device 
>>>> *adev,
>>>>       adev->gmc.gart_size = 512 * 1024 * 1024;
>>>>       adev->accel_working = false;
>>>>       adev->num_rings = 0;
>>>> +    RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
>>>>       adev->mman.buffer_funcs = NULL;
>>>>       adev->mman.buffer_funcs_ring = NULL;
>>>>       adev->vm_manager.vm_pte_funcs = NULL;
>>>> @@ -3989,6 +3990,7 @@ void amdgpu_device_fini_sw(struct 
>>>> amdgpu_device *adev)
>>>>       release_firmware(adev->firmware.gpu_info_fw);
>>>>       adev->firmware.gpu_info_fw = NULL;
>>>>       adev->accel_working = false;
>>>> + dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
>>>>         amdgpu_reset_fini(adev);
>>>>   @@ -5744,3 +5746,35 @@ void amdgpu_device_halt(struct 
>>>> amdgpu_device *adev)
>>>>       pci_disable_device(pdev);
>>>>       pci_wait_for_pending_transaction(pdev);
>>>>   }
>>>> +
>>>> +/**
>>>> + * amdgpu_device_switch_gang - switch to a new gang
>>>> + * @adev: amdgpu_device pointer
>>>> + * @gang: the gang to switch to
>>>> + *
>>>> + * Try to switch to a new gang or return a reference to the 
>>>> current gang if that
>>>> + * isn't possible.
>>>> + * Returns: Either NULL if we switched correctly or a reference to 
>>>> the existing
>>>> + * gang.
>>>> + */
>>>> +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device 
>>>> *adev,
>>>> +                        struct dma_fence *gang)
>>>> +{
>>>> +    struct dma_fence *old = NULL;
>>>> +
>>>> +    do {
>>>> +        dma_fence_put(old);
>>>> +        old = dma_fence_get_rcu_safe(&adev->gang_submit);
>>>> +
>>>> +        if (old == gang)
>>>> +            break;
>>>> +
>>>> +        if (!dma_fence_is_signaled(old))
>>>> +            return old;
>>>> +
>>>> +    } while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
>>>> +             old, gang) != old);
>>>> +
>>>> +    dma_fence_put(old);
>>>> +    return NULL;
>>>> +}
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>>> index e07ceae36a5c..059e11c7898c 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>>> @@ -169,11 +169,29 @@ static void amdgpu_job_free_cb(struct 
>>>> drm_sched_job *s_job)
>>>>           kfree(job);
>>>>   }
>>>>   +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
>>>> +                struct amdgpu_job *leader)
>>>> +{
>>>> +    struct dma_fence *fence = &leader->base.s_fence->scheduled;
>>>> +
>>>> +    WARN_ON(job->gang_submit);
>>>> +
>>>> +    /*
>>>> +     * Don't add a reference when we are the gang leader to avoid 
>>>> circle
>>>> +     * dependency.
>>>> +     */
>>>> +    if (job != leader)
>>>> +        dma_fence_get(fence);
>>>> +    job->gang_submit = fence;
>>>> +}
>>>> +
>>>>   void amdgpu_job_free(struct amdgpu_job *job)
>>>>   {
>>>>       amdgpu_job_free_resources(job);
>>>>       amdgpu_sync_free(&job->sync);
>>>>       amdgpu_sync_free(&job->sched_sync);
>>>> +    if (job->gang_submit != &job->base.s_fence->scheduled)
>>>> +        dma_fence_put(job->gang_submit);
>>>>         /* only put the hw fence if has embedded fence */
>>>>       if (job->hw_fence.ops != NULL)
>>>> @@ -247,12 +265,16 @@ static struct dma_fence 
>>>> *amdgpu_job_dependency(struct drm_sched_job *sched_job,
>>>>           fence = amdgpu_sync_get_fence(&job->sync);
>>>>       }
>>>>   +    if (!fence && !job->gang_submit)
>>>> +        fence = amdgpu_device_switch_gang(ring->adev, 
>>>> job->gang_submit);
>>>> +
>>>
>>>
>>> Why job->gang_submit should be NULL in the check above ? Don't you 
>>> want to switch to an actual new gang fence here ?
>>> Jobs that don't have gang_submit fence set are not gang jobs anyway 
>>> and we don't care for this dependency
>>> for them right ?
>>
>> Well exactly that's the point. That a job is not part of a gang 
>> submit is signaled by setting the pointer to NULL.
>
>
> No dispute on this
>
>
>>
>> If we don't check for NULL here we would just crash.
>
>
> But you go into the 'if clause' when job->gang_submit is equal to NULL. I
> would expect to see here
> if (!fence && *job->gang_submit*) because you want to switch to an
> actual new gang (not to NULL).
>

WTF? I'm like 100% sure that I've fixed that before sending it out.

Thanks for pointing it out, but yeah, I've already stumbled over that typo
as well.

Christian.

> Andrey
>
>
>>
>> Christian.
>>
>>>
>>> Andrey
>>>
>>>
>>>>       return fence;
>>>>   }
>>>>     static struct dma_fence *amdgpu_job_run(struct drm_sched_job 
>>>> *sched_job)
>>>>   {
>>>>       struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched);
>>>> +    struct amdgpu_device *adev = ring->adev;
>>>>       struct dma_fence *fence = NULL, *finished;
>>>>       struct amdgpu_job *job;
>>>>       int r = 0;
>>>> @@ -264,8 +286,10 @@ static struct dma_fence *amdgpu_job_run(struct 
>>>> drm_sched_job *sched_job)
>>>>         trace_amdgpu_sched_run_job(job);
>>>>   -    if (job->vram_lost_counter != 
>>>> atomic_read(&ring->adev->vram_lost_counter))
>>>> -        dma_fence_set_error(finished, -ECANCELED);/* skip IB as 
>>>> well if VRAM lost */
>>>> +    /* Skip job if VRAM is lost and never resubmit gangs */
>>>> +    if (job->vram_lost_counter != 
>>>> atomic_read(&adev->vram_lost_counter) ||
>>>> +        (job->job_run_counter && job->gang_submit))
>>>> +        dma_fence_set_error(finished, -ECANCELED);
>>>>         if (finished->error < 0) {
>>>>           DRM_INFO("Skip scheduling IBs!\n");
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h 
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>>> index 0bab8fe0d419..615328130615 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>>> @@ -51,6 +51,7 @@ struct amdgpu_job {
>>>>       struct amdgpu_sync    sched_sync;
>>>>       struct dma_fence    hw_fence;
>>>>       struct dma_fence    *external_hw_fence;
>>>> +    struct dma_fence    *gang_submit;
>>>>       uint32_t        preamble_status;
>>>>       uint32_t                preemption_status;
>>>>       bool                    vm_needs_flush;
>>>> @@ -80,6 +81,8 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device 
>>>> *adev, unsigned size,
>>>>   void amdgpu_job_set_resources(struct amdgpu_job *job, struct 
>>>> amdgpu_bo *gds,
>>>>                     struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>>>>   void amdgpu_job_free_resources(struct amdgpu_job *job);
>>>> +void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
>>>> +                struct amdgpu_job *leader);
>>>>   void amdgpu_job_free(struct amdgpu_job *job);
>>>>   int amdgpu_job_submit(struct amdgpu_job *job, struct 
>>>> drm_sched_entity *entity,
>>>>                 void *owner, struct dma_fence **f);
>>

^ permalink raw reply	[flat|nested] 27+ messages in thread
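
To summarize the outcome of this exchange: the condition in the posted
patch carries a stray negation. With the typo fix Christian refers to,
the hunk in amdgpu_job_dependency() should presumably read:

    /* Only jobs that are part of a gang switch the device to it. */
    if (!fence && job->gang_submit)
            fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);

so jobs without a gang_submit fence (the NULL case) bypass the gang
machinery entirely, while gang members block until their gang becomes the
active one.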

* Re: [PATCH 09/10] drm/amdgpu: add gang submit backend
  2022-03-07 15:59         ` Christian König
@ 2022-03-07 16:02           ` Andrey Grodzovsky
  0 siblings, 0 replies; 27+ messages in thread
From: Andrey Grodzovsky @ 2022-03-07 16:02 UTC (permalink / raw)
  To: Christian König, Christian König, amd-gfx, Marek.Olsak

:)))))

I was like: I must be crazy, because there is no way this works, but you
insist that it does, and I know you are usually right.

Andrey

On 2022-03-07 10:59, Christian König wrote:
>>>
>>> If we don't check for NULL here we would just crash.
>>
>>
>> But you go into the 'if clause' if job->gang_submit is equal to NULL, 
>> i would expect to see here
>> if (!fence &&*job->gang_submit*) because you want to switch to an 
>> actual new gang (not to NULL)
>>
>
> WTF? I'm like 100% that I've fixed that before sending it out.
>
> Thanks for point it out, but yeah I've already stumbled over that typo 
> as well.
>
> Christian.
>
>> Andrey
>>
>>
>>>
>>> Christian.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 10/10] drm/amdgpu: add gang submit frontend
  2022-03-03  8:23 ` [PATCH 10/10] drm/amdgpu: add gang submit frontend Christian König
@ 2022-03-07 17:02   ` Andrey Grodzovsky
  2022-06-01 12:09   ` Mohan Marimuthu, Yogesh
  1 sibling, 0 replies; 27+ messages in thread
From: Andrey Grodzovsky @ 2022-03-07 17:02 UTC (permalink / raw)
  To: Christian König, amd-gfx, Marek.Olsak; +Cc: Christian König


On 2022-03-03 03:23, Christian König wrote:
> Allows submitting jobs as a gang which needs to run on multiple engines at the
> same time.
>
> All members of the gang get the same implicit, explicit and VM dependencies, so
> no gang member will start running until everything else is ready.
>
> The last job is considered the gang leader (usually a submission to the GFX
> ring) and is used for signaling output dependencies.
>
> Each job is remembered individually as a user of a buffer object, so there is no
> joining of work at the end.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c    | 244 ++++++++++++++--------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h    |   9 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h |  12 +-
>   3 files changed, 173 insertions(+), 92 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index c6541f7b8f54..7429e64919fe 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -69,6 +69,7 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
>   			   unsigned int *num_ibs)
>   {
>   	struct drm_sched_entity *entity;
> +	unsigned int i;
>   	int r;
>   
>   	r = amdgpu_ctx_get_entity(p->ctx, chunk_ib->ip_type,
> @@ -83,11 +84,19 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
>   		return -EINVAL;
>   
>   	/* Currently we don't support submitting to multiple entities */
> -	if (p->entity && p->entity != entity)
> +	for (i = 0; i < p->gang_size; ++i) {
> +		if (p->entities[i] == entity)
> +			goto found;
> +	}
> +
> +	if (i == AMDGPU_CS_GANG_SIZE)
>   		return -EINVAL;
>   
> -	p->entity = entity;
> -	++(*num_ibs);
> +	p->entities[i] = entity;
> +	p->gang_size = i + 1;
> +
> +found:
> +	++(num_ibs[i]);
>   	return 0;
>   }
>   
> @@ -161,11 +170,12 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
>   			   union drm_amdgpu_cs *cs)
>   {
>   	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> +	unsigned int num_ibs[AMDGPU_CS_GANG_SIZE] = { };
>   	struct amdgpu_vm *vm = &fpriv->vm;
>   	uint64_t *chunk_array_user;
>   	uint64_t *chunk_array;
> -	unsigned size, num_ibs = 0;
>   	uint32_t uf_offset = 0;
> +	unsigned int size;
>   	int ret;
>   	int i;
>   
> @@ -228,7 +238,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
>   			if (size < sizeof(struct drm_amdgpu_cs_chunk_ib))
>   				goto free_partial_kdata;
>   
> -			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, &num_ibs);
> +			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, num_ibs);
>   			if (ret)
>   				goto free_partial_kdata;
>   			break;
> @@ -265,21 +275,27 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
>   		}
>   	}
>   
> -	ret = amdgpu_job_alloc(p->adev, num_ibs, &p->job, vm);
> -	if (ret)
> -		goto free_all_kdata;
> +	if (!p->gang_size)
> +		return -EINVAL;
>   
> -	ret = drm_sched_job_init(&p->job->base, p->entity, &fpriv->vm);
> -	if (ret)
> -		goto free_all_kdata;
> +	for (i = 0; i < p->gang_size; ++i) {
> +		ret = amdgpu_job_alloc(p->adev, num_ibs[i], &p->jobs[i], vm);
> +		if (ret)
> +			goto free_all_kdata;
> +
> +		ret = drm_sched_job_init(&p->jobs[i]->base, p->entities[i],
> +					 &fpriv->vm);
> +		if (ret)
> +			goto free_all_kdata;
> +	}
>   
> -	if (p->ctx->vram_lost_counter != p->job->vram_lost_counter) {
> +	if (p->ctx->vram_lost_counter != p->jobs[0]->vram_lost_counter) {
>   		ret = -ECANCELED;
>   		goto free_all_kdata;
>   	}
>   
>   	if (p->uf_entry.tv.bo)
> -		p->job->uf_addr = uf_offset;
> +		p->jobs[p->gang_size - 1]->uf_addr = uf_offset;


I would use some macro here for the index, or maybe even a getter
function or a macro that explicitly shows you are retrieving the gang
leader; a rough sketch of what I mean follows below.

Maybe also something for the 'jobs[0]' above which, as I understand it,
is just used for retrieving data that is identical for each job in the
gang; but why not just use the leader then for all such retrievals?
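
Something along these lines, purely as an illustration (the name is made
up, not an existing macro):

    /* The gang leader is always the last job in the array. */
    #define amdgpu_cs_gang_leader(p) ((p)->jobs[(p)->gang_size - 1])

so that p->jobs[p->gang_size - 1]->uf_addr = uf_offset; becomes
amdgpu_cs_gang_leader(p)->uf_addr = uf_offset; and it is obvious which
job is meant.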

Andrey


>   	kvfree(chunk_array);
>   
>   	/* Use this opportunity to fill in task info for the vm */
> @@ -301,22 +317,18 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
>   	return ret;
>   }
>   
> -static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
> -			   struct amdgpu_cs_chunk *chunk,
> -			   unsigned int *num_ibs,
> -			   unsigned int *ce_preempt,
> -			   unsigned int *de_preempt)
> +static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
> +			   struct amdgpu_ib *ib, struct amdgpu_cs_chunk *chunk,
> +			   unsigned int *ce_preempt, unsigned int *de_preempt)
>   {
> -	struct amdgpu_ring *ring = to_amdgpu_ring(p->job->base.sched);
> +	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
>   	struct drm_amdgpu_cs_chunk_ib *chunk_ib = chunk->kdata;
>   	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> -	struct amdgpu_ib *ib = &p->job->ibs[*num_ibs];
>   	struct amdgpu_vm *vm = &fpriv->vm;
>   	int r;
>   
> -
>   	/* MM engine doesn't support user fences */
> -	if (p->job->uf_addr && ring->funcs->no_user_fence)
> +	if (job->uf_addr && ring->funcs->no_user_fence)
>   		return -EINVAL;
>   
>   	if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
> @@ -333,7 +345,7 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
>   	}
>   
>   	if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
> -		p->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
> +		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
>   
>   	r =  amdgpu_ib_get(p->adev, vm, ring->funcs->parse_cs ?
>   			   chunk_ib->ib_bytes : 0,
> @@ -346,8 +358,6 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
>   	ib->gpu_addr = chunk_ib->va_start;
>   	ib->length_dw = chunk_ib->ib_bytes / 4;
>   	ib->flags = chunk_ib->flags;
> -
> -	(*num_ibs)++;
>   	return 0;
>   }
>   
> @@ -396,7 +406,7 @@ static int amdgpu_cs_p2_dependencies(struct amdgpu_cs_parser *p,
>   			dma_fence_put(old);
>   		}
>   
> -		r = amdgpu_sync_fence(&p->job->sync, fence);
> +		r = amdgpu_sync_fence(&p->jobs[0]->sync, fence);
>   		dma_fence_put(fence);
>   		if (r)
>   			return r;
> @@ -418,7 +428,7 @@ static int amdgpu_syncobj_lookup_and_add(struct amdgpu_cs_parser *p,
>   		return r;
>   	}
>   
> -	r = amdgpu_sync_fence(&p->job->sync, fence);
> +	r = amdgpu_sync_fence(&p->jobs[0]->sync, fence);
>   	dma_fence_put(fence);
>   
>   	return r;
> @@ -541,20 +551,30 @@ static int amdgpu_cs_p2_syncobj_timeline_signal(struct amdgpu_cs_parser *p,
>   
>   static int amdgpu_cs_pass2(struct amdgpu_cs_parser *p)
>   {
> -	unsigned int num_ibs = 0, ce_preempt = 0, de_preempt = 0;
> +	unsigned int ce_preempt = 0, de_preempt = 0;
> +	unsigned int job_idx = 0, ib_idx = 0;
>   	int i, r;
>   
>   	for (i = 0; i < p->nchunks; ++i) {
>   		struct amdgpu_cs_chunk *chunk;
> +		struct amdgpu_job *job;
>   
>   		chunk = &p->chunks[i];
>   
>   		switch (chunk->chunk_id) {
>   		case AMDGPU_CHUNK_ID_IB:
> -			r = amdgpu_cs_p2_ib(p, chunk, &num_ibs,
> +			job = p->jobs[job_idx];
> +			r = amdgpu_cs_p2_ib(p, job, &job->ibs[ib_idx], chunk,
>   					    &ce_preempt, &de_preempt);
>   			if (r)
>   				return r;
> +
> +			if (++ib_idx == job->num_ibs) {
> +				++job_idx;
> +				ib_idx = 0;
> +				ce_preempt = 0;
> +				de_preempt = 0;
> +			}
>   			break;
>   		case AMDGPU_CHUNK_ID_DEPENDENCIES:
>   		case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
> @@ -825,6 +845,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   	struct amdgpu_vm *vm = &fpriv->vm;
>   	struct amdgpu_bo_list_entry *e;
>   	struct list_head duplicates;
> +	unsigned int i;
>   	int r;
>   
>   	INIT_LIST_HEAD(&p->validated);
> @@ -905,16 +926,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   		e->bo_va = amdgpu_vm_bo_find(vm, bo);
>   	}
>   
> -	/* Move fence waiting after getting reservation lock of
> -	 * PD root. Then there is no need on a ctx mutex lock.
> -	 */
> -	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity);
> -	if (unlikely(r != 0)) {
> -		if (r != -ERESTARTSYS)
> -			DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
> -		goto error_validate;
> -	}
> -
>   	amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
>   					  &p->bytes_moved_vis_threshold);
>   	p->bytes_moved = 0;
> @@ -938,14 +949,16 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
>   				     p->bytes_moved_vis);
>   
> -	amdgpu_job_set_resources(p->job, p->bo_list->gds_obj,
> -				 p->bo_list->gws_obj, p->bo_list->oa_obj);
> +	for (i = 0; i < p->gang_size; ++i)
> +		amdgpu_job_set_resources(p->jobs[i], p->bo_list->gds_obj,
> +					 p->bo_list->gws_obj,
> +					 p->bo_list->oa_obj);
>   
>   	if (!r && p->uf_entry.tv.bo) {
>   		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
>   
>   		r = amdgpu_ttm_alloc_gart(&uf->tbo);
> -		p->job->uf_addr += amdgpu_bo_gpu_offset(uf);
> +		p->jobs[p->gang_size - 1]->uf_addr += amdgpu_bo_gpu_offset(uf);
>   	}
>   
>   error_validate:
> @@ -955,20 +968,24 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   	return r;
>   }
>   
> -static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser)
> +static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *p)
>   {
> -	int i;
> +	int i, j;
>   
>   	if (!trace_amdgpu_cs_enabled())
>   		return;
>   
> -	for (i = 0; i < parser->job->num_ibs; i++)
> -		trace_amdgpu_cs(parser, i);
> +	for (i = 0; i < p->gang_size; ++i) {
> +		struct amdgpu_job *job = p->jobs[i];
> +
> +		for (j = 0; j < job->num_ibs; ++j)
> +			trace_amdgpu_cs(p, job, &job->ibs[j]);
> +	}
>   }
>   
> -static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
> +static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p,
> +			       struct amdgpu_job *job)
>   {
> -	struct amdgpu_job *job = p->job;
>   	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
>   	unsigned int i;
>   	int r;
> @@ -1007,14 +1024,13 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
>   			memcpy(ib->ptr, kptr, job->ibs[i].length_dw * 4);
>   			amdgpu_bo_kunmap(aobj);
>   
> -			r = amdgpu_ring_parse_cs(ring, p, p->job,
> -						 &p->job->ibs[i]);
> +			r = amdgpu_ring_parse_cs(ring, p, job, &job->ibs[i]);
>   			if (r)
>   				return r;
>   		} else {
>   			ib->ptr = (uint32_t *)kptr;
> -			r = amdgpu_ring_patch_cs_in_place(ring, p, p->job,
> -							  &p->job->ibs[i]);
> +			r = amdgpu_ring_patch_cs_in_place(ring, p, job,
> +							  &job->ibs[i]);
>   			amdgpu_bo_kunmap(aobj);
>   			if (r)
>   				return r;
> @@ -1024,14 +1040,29 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
>   	return 0;
>   }
>   
> +static int amdgpu_cs_patch_jobs(struct amdgpu_cs_parser *p)
> +{
> +	unsigned int i;
> +	int r;
> +
> +	for (i = 0; i < p->gang_size; ++i) {
> +		r = amdgpu_cs_patch_ibs(p, p->jobs[i]);
> +		if (r)
> +			return r;
> +	}
> +	return 0;
> +}
> +
>   static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   {
>   	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
>   	struct amdgpu_device *adev = p->adev;
> +	struct amdgpu_job *job = p->jobs[0];
>   	struct amdgpu_vm *vm = &fpriv->vm;
>   	struct amdgpu_bo_list_entry *e;
>   	struct amdgpu_bo_va *bo_va;
>   	struct amdgpu_bo *bo;
> +	unsigned int i;
>   	int r;
>   
>   	r = amdgpu_vm_clear_freed(adev, vm, NULL);
> @@ -1042,7 +1073,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   	if (r)
>   		return r;
>   
> -	r = amdgpu_sync_vm_fence(&p->job->sync, fpriv->prt_va->last_pt_update);
> +	r = amdgpu_sync_vm_fence(&job->sync, fpriv->prt_va->last_pt_update);
>   	if (r)
>   		return r;
>   
> @@ -1052,7 +1083,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   		if (r)
>   			return r;
>   
> -		r = amdgpu_sync_vm_fence(&p->job->sync, bo_va->last_pt_update);
> +		r = amdgpu_sync_vm_fence(&job->sync, bo_va->last_pt_update);
>   		if (r)
>   			return r;
>   	}
> @@ -1071,7 +1102,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   		if (r)
>   			return r;
>   
> -		r = amdgpu_sync_vm_fence(&p->job->sync, bo_va->last_pt_update);
> +		r = amdgpu_sync_vm_fence(&job->sync, bo_va->last_pt_update);
>   		if (r)
>   			return r;
>   	}
> @@ -1084,11 +1115,18 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   	if (r)
>   		return r;
>   
> -	r = amdgpu_sync_vm_fence(&p->job->sync, vm->last_update);
> +	r = amdgpu_sync_vm_fence(&job->sync, vm->last_update);
>   	if (r)
>   		return r;
>   
> -	p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
> +	for (i = 0; i < p->gang_size; ++i) {
> +		job = p->jobs[i];
> +
> +		if (!job->vm)
> +			continue;
> +
> +		job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
> +	}
>   
>   	if (amdgpu_vm_debug) {
>   		/* Invalidate all BOs to test for userspace bugs */
> @@ -1109,7 +1147,9 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>   static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
>   {
>   	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> +	struct amdgpu_job *job = p->jobs[0];
>   	struct amdgpu_bo_list_entry *e;
> +	unsigned int i;
>   	int r;
>   
>   	list_for_each_entry(e, &p->validated, tv.head) {
> @@ -1119,12 +1159,23 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
>   
>   		sync_mode = amdgpu_bo_explicit_sync(bo) ?
>   			AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
> -		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode,
> +		r = amdgpu_sync_resv(p->adev, &job->sync, resv, sync_mode,
>   				     &fpriv->vm);
>   		if (r)
>   			return r;
>   	}
> -	return 0;
> +
> +	for (i = 1; i < p->gang_size; ++i) {
> +		r = amdgpu_sync_clone(&job->sync, &p->jobs[i]->sync);
> +		if (r)
> +			return r;
> +	}
> +
> +	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entities[p->gang_size - 1]);
> +	if (r && r != -ERESTARTSYS)
> +		DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
> +
> +	return r;
>   }
>   
>   static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p)
> @@ -1147,17 +1198,27 @@ static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p)
>   static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>   			    union drm_amdgpu_cs *cs)
>   {
> +	struct amdgpu_job *last = p->jobs[p->gang_size - 1];
>   	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> -	struct drm_sched_entity *entity = p->entity;
>   	struct amdgpu_bo_list_entry *e;
> -	struct amdgpu_job *job;
> +	unsigned int i;
>   	uint64_t seq;
>   	int r;
>   
> -	job = p->job;
> -	p->job = NULL;
> +	for (i = 0; i < p->gang_size; ++i)
> +		drm_sched_job_arm(&p->jobs[i]->base);
>   
> -	drm_sched_job_arm(&job->base);
> +	for (i = 0; i < (p->gang_size - 1); ++i) {
> +		struct dma_fence *fence;
> +
> +		fence = &p->jobs[i]->base.s_fence->scheduled;
> +		r = amdgpu_sync_fence(&last->sync, fence);
> +		if (r)
> +			goto error_cleanup;
> +	}
> +
> +	for (i = 0; i < p->gang_size; ++i)
> +		amdgpu_job_set_gang_leader(p->jobs[i], last);
>   
>   	/* No memory allocation is allowed while holding the notifier lock.
>   	 * The lock is held until amdgpu_cs_submit is finished and fence is
> @@ -1175,44 +1236,58 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>   	}
>   	if (r) {
>   		r = -EAGAIN;
> -		goto error_abort;
> +		goto error_unlock;
>   	}
>   
> -	p->fence = dma_fence_get(&job->base.s_fence->finished);
> +	p->fence = dma_fence_get(&last->base.s_fence->finished);
>   
> -	amdgpu_ctx_add_fence(p->ctx, entity, p->fence, &seq);
> +	amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_size - 1], p->fence,
> +			     &seq);
>   	amdgpu_cs_post_dependencies(p);
>   
> -	if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
> +	if ((last->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
>   	    !p->ctx->preamble_presented) {
> -		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
> +		last->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
>   		p->ctx->preamble_presented = true;
>   	}
>   
>   	cs->out.handle = seq;
> -	job->uf_sequence = seq;
> -
> -	amdgpu_job_free_resources(job);
> +	last->uf_sequence = seq;
>   
> -	trace_amdgpu_cs_ioctl(job);
>   	amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket);
> -	drm_sched_entity_push_job(&job->base);
> +	for (i = 0; i < p->gang_size; ++i) {
> +		amdgpu_job_free_resources(p->jobs[i]);
> +		trace_amdgpu_cs_ioctl(p->jobs[i]);
> +		drm_sched_entity_push_job(&p->jobs[i]->base);
> +		p->jobs[i] = NULL;
> +	}
>   
>   	amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
>   
> -	/* Make sure all BOs are remembered as writers */
> -	amdgpu_bo_list_for_each_entry(e, p->bo_list)
> +	list_for_each_entry(e, &p->validated, tv.head) {
> +
> +		/* Everybody except for the gang leader uses BOOKKEEP */
> +		for (i = 0; i < (p->gang_size - 1); ++i) {
> +			dma_resv_add_fence(e->tv.bo->base.resv,
> +					   &p->jobs[i]->base.s_fence->finished,
> +					   DMA_RESV_USAGE_BOOKKEEP);
> +		}
> +
> +		/* The gang leader is remembered as writer */
>   		e->tv.num_shared = 0;
> +	}
>   
>   	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
>   	mutex_unlock(&p->adev->notifier_lock);
>   
>   	return 0;
>   
> -error_abort:
> -	drm_sched_job_cleanup(&job->base);
> +error_unlock:
>   	mutex_unlock(&p->adev->notifier_lock);
> -	amdgpu_job_free(job);
> +
> +error_cleanup:
> +	for (i = 0; i < p->gang_size; ++i)
> +		drm_sched_job_cleanup(&p->jobs[i]->base);
>   	return r;
>   }
>   
> @@ -1229,17 +1304,18 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser)
>   
>   	dma_fence_put(parser->fence);
>   
> -	if (parser->ctx) {
> +	if (parser->ctx)
>   		amdgpu_ctx_put(parser->ctx);
> -	}
>   	if (parser->bo_list)
>   		amdgpu_bo_list_put(parser->bo_list);
>   
>   	for (i = 0; i < parser->nchunks; i++)
>   		kvfree(parser->chunks[i].kdata);
>   	kvfree(parser->chunks);
> -	if (parser->job)
> -		amdgpu_job_free(parser->job);
> +	for (i = 0; i < parser->gang_size; ++i) {
> +		if (parser->jobs[i])
> +			amdgpu_job_free(parser->jobs[i]);
> +	}
>   	if (parser->uf_entry.tv.bo) {
>   		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo);
>   
> @@ -1283,7 +1359,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>   		goto error_fini;
>   	}
>   
> -	r = amdgpu_cs_patch_ibs(&parser);
> +	r = amdgpu_cs_patch_jobs(&parser);
>   	if (r)
>   		goto error_backoff;
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> index 652b5593499f..ba5860c08270 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> @@ -27,6 +27,8 @@
>   #include "amdgpu_bo_list.h"
>   #include "amdgpu_ring.h"
>   
> +#define AMDGPU_CS_GANG_SIZE	4
> +
>   struct amdgpu_bo_va_mapping;
>   
>   struct amdgpu_cs_chunk {
> @@ -50,9 +52,10 @@ struct amdgpu_cs_parser {
>   	unsigned		nchunks;
>   	struct amdgpu_cs_chunk	*chunks;
>   
> -	/* scheduler job object */
> -	struct drm_sched_entity	*entity;
> -	struct amdgpu_job	*job;
> +	/* scheduler job objects */
> +	unsigned int		gang_size;
> +	struct drm_sched_entity	*entities[AMDGPU_CS_GANG_SIZE];
> +	struct amdgpu_job	*jobs[AMDGPU_CS_GANG_SIZE];
>   
>   	/* buffer objects */
>   	struct ww_acquire_ctx		ticket;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
> index d855cb53c7e0..a5167cb91ba5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
> @@ -140,8 +140,10 @@ TRACE_EVENT(amdgpu_bo_create,
>   );
>   
>   TRACE_EVENT(amdgpu_cs,
> -	    TP_PROTO(struct amdgpu_cs_parser *p, int i),
> -	    TP_ARGS(p, i),
> +	    TP_PROTO(struct amdgpu_cs_parser *p,
> +		     struct amdgpu_job *job,
> +		     struct amdgpu_ib *ib),
> +	    TP_ARGS(p, job, ib),
>   	    TP_STRUCT__entry(
>   			     __field(struct amdgpu_bo_list *, bo_list)
>   			     __field(u32, ring)
> @@ -151,10 +153,10 @@ TRACE_EVENT(amdgpu_cs,
>   
>   	    TP_fast_assign(
>   			   __entry->bo_list = p->bo_list;
> -			   __entry->ring = to_amdgpu_ring(p->entity->rq->sched)->idx;
> -			   __entry->dw = p->job->ibs[i].length_dw;
> +			   __entry->ring = to_amdgpu_ring(job->base.sched)->idx;
> +			   __entry->dw = ib->length_dw;
>   			   __entry->fences = amdgpu_fence_count_emitted(
> -				to_amdgpu_ring(p->entity->rq->sched));
> +				to_amdgpu_ring(job->base.sched));
>   			   ),
>   	    TP_printk("bo_list=%p, ring=%u, dw=%u, fences=%u",
>   		      __entry->bo_list, __entry->ring, __entry->dw,

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [PATCH 10/10] drm/amdgpu: add gang submit frontend
  2022-03-03  8:23 ` [PATCH 10/10] drm/amdgpu: add gang submit frontend Christian König
  2022-03-07 17:02   ` Andrey Grodzovsky
@ 2022-06-01 12:09   ` Mohan Marimuthu, Yogesh
  2022-06-01 12:11     ` Christian König
  1 sibling, 1 reply; 27+ messages in thread
From: Mohan Marimuthu, Yogesh @ 2022-06-01 12:09 UTC (permalink / raw)
  To: Christian König, amd-gfx, Olsak, Marek; +Cc: Koenig, Christian


-----Original Message-----
From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Christian König
Sent: Thursday, March 3, 2022 1:53 PM
To: amd-gfx@lists.freedesktop.org; Olsak, Marek <Marek.Olsak@amd.com>
Cc: Koenig, Christian <Christian.Koenig@amd.com>
Subject: [PATCH 10/10] drm/amdgpu: add gang submit frontend

Allows submitting jobs as a gang which needs to run on multiple engines at the same time.

All members of the gang get the same implicit, explicit and VM dependencies. So no gang member will start running until everything else is ready.

The last job is considered the gang leader (usually a submission to the GFX
ring) and is used for signaling output dependencies.

Each job is remembered individually as a user of a buffer object, so there is no joining of work at the end.
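
For illustration only (not part of the patch, and untested): from userspace a two-member gang could be submitted through libdrm's raw CS interface by passing two IB chunks with different ip_type values in one CS ioctl. All handles and GPU addresses below are placeholders, and availability of amdgpu_cs_submit_raw2() is assumed:

#include <stdint.h>
#include <amdgpu.h>
#include <amdgpu_drm.h>

/* Two IB chunks in one CS ioctl: one compute IB plus one GFX IB.
 * With gang submit the kernel creates one job per entity and the
 * last entity seen (GFX here) becomes the gang leader.
 */
static int submit_gang(amdgpu_device_handle dev, amdgpu_context_handle ctx,
		       uint32_t bo_list, uint64_t compute_va,
		       uint32_t compute_bytes, uint64_t gfx_va,
		       uint32_t gfx_bytes, uint64_t *seq)
{
	struct drm_amdgpu_cs_chunk_ib ibs[2] = {
		{ .ip_type = AMDGPU_HW_IP_COMPUTE, .va_start = compute_va,
		  .ib_bytes = compute_bytes },
		{ .ip_type = AMDGPU_HW_IP_GFX, .va_start = gfx_va,
		  .ib_bytes = gfx_bytes },
	};
	struct drm_amdgpu_cs_chunk chunks[2];
	int i;

	for (i = 0; i < 2; ++i) {
		chunks[i].chunk_id = AMDGPU_CHUNK_ID_IB;
		chunks[i].length_dw = sizeof(ibs[i]) / 4;
		chunks[i].chunk_data = (uint64_t)(uintptr_t)&ibs[i];
	}
	return amdgpu_cs_submit_raw2(dev, ctx, bo_list, 2, chunks, seq);
}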

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c    | 244 ++++++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h    |   9 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h |  12 +-
 3 files changed, 173 insertions(+), 92 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index c6541f7b8f54..7429e64919fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -69,6 +69,7 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
 			   unsigned int *num_ibs)
 {
 	struct drm_sched_entity *entity;
+	unsigned int i;
 	int r;
 
 	r = amdgpu_ctx_get_entity(p->ctx, chunk_ib->ip_type,
@@ -83,11 +84,19 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p,
 		return -EINVAL;
 
 	/* Currently we don't support submitting to multiple entities */
-	if (p->entity && p->entity != entity)
+	for (i = 0; i < p->gang_size; ++i) {
+		if (p->entities[i] == entity)
+			goto found;
+	}
+
+	if (i == AMDGPU_CS_GANG_SIZE)
 		return -EINVAL;
 
-	p->entity = entity;
-	++(*num_ibs);
+	p->entities[i] = entity;
+	p->gang_size = i + 1;
+
+found:
+	++(num_ibs[i]);
 	return 0;
 }
 
@@ -161,11 +170,12 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 			   union drm_amdgpu_cs *cs)
 {
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	unsigned int num_ibs[AMDGPU_CS_GANG_SIZE] = { };
 	struct amdgpu_vm *vm = &fpriv->vm;
 	uint64_t *chunk_array_user;
 	uint64_t *chunk_array;
-	unsigned size, num_ibs = 0;
 	uint32_t uf_offset = 0;
+	unsigned int size;
 	int ret;
 	int i;
 
@@ -228,7 +238,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 			if (size < sizeof(struct drm_amdgpu_cs_chunk_ib))
 				goto free_partial_kdata;
 
-			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, &num_ibs);
+			ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, num_ibs);
 			if (ret)
 				goto free_partial_kdata;
 			break;
@@ -265,21 +275,27 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 		}
 	}
 
-	ret = amdgpu_job_alloc(p->adev, num_ibs, &p->job, vm);
-	if (ret)
-		goto free_all_kdata;
+	if (!p->gang_size)
+		return -EINVAL;
 
-	ret = drm_sched_job_init(&p->job->base, p->entity, &fpriv->vm);
-	if (ret)
-		goto free_all_kdata;
+	for (i = 0; i < p->gang_size; ++i) {
+		ret = amdgpu_job_alloc(p->adev, num_ibs[i], &p->jobs[i], vm);
+		if (ret)
+			goto free_all_kdata;
+
+		ret = drm_sched_job_init(&p->jobs[i]->base, p->entities[i],
+					 &fpriv->vm);
+		if (ret)
+			goto free_all_kdata;
+	}
 
-	if (p->ctx->vram_lost_counter != p->job->vram_lost_counter) {
+	if (p->ctx->vram_lost_counter != p->jobs[0]->vram_lost_counter) {
 		ret = -ECANCELED;
 		goto free_all_kdata;
 	}
 
 	if (p->uf_entry.tv.bo)
-		p->job->uf_addr = uf_offset;
+		p->jobs[p->gang_size - 1]->uf_addr = uf_offset;
 	kvfree(chunk_array);
 
 	/* Use this opportunity to fill in task info for the vm */
@@ -301,22 +317,18 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
 	return ret;
 }
 
-static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
-			   struct amdgpu_cs_chunk *chunk,
-			   unsigned int *num_ibs,
-			   unsigned int *ce_preempt,
-			   unsigned int *de_preempt)
+static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+			   struct amdgpu_ib *ib, struct amdgpu_cs_chunk *chunk,
+			   unsigned int *ce_preempt, unsigned int *de_preempt)
 {
-	struct amdgpu_ring *ring = to_amdgpu_ring(p->job->base.sched);
+	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
 	struct drm_amdgpu_cs_chunk_ib *chunk_ib = chunk->kdata;
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct amdgpu_ib *ib = &p->job->ibs[*num_ibs];
 	struct amdgpu_vm *vm = &fpriv->vm;
 	int r;
 
-
 	/* MM engine doesn't support user fences */
-	if (p->job->uf_addr && ring->funcs->no_user_fence)
+	if (job->uf_addr && ring->funcs->no_user_fence)
 		return -EINVAL;
 
 	if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
@@ -333,7 +345,7 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
 	}
 
 	if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
-		p->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
+		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
 
 	r =  amdgpu_ib_get(p->adev, vm, ring->funcs->parse_cs ?
 			   chunk_ib->ib_bytes : 0,
@@ -346,8 +358,6 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
 	ib->gpu_addr = chunk_ib->va_start;
 	ib->length_dw = chunk_ib->ib_bytes / 4;
 	ib->flags = chunk_ib->flags;
-
-	(*num_ibs)++;
 	return 0;
 }
 
@@ -396,7 +406,7 @@ static int amdgpu_cs_p2_dependencies(struct amdgpu_cs_parser *p,
 			dma_fence_put(old);
 		}
 
-		r = amdgpu_sync_fence(&p->job->sync, fence);
+		r = amdgpu_sync_fence(&p->jobs[0]->sync, fence);
 		dma_fence_put(fence);
 		if (r)
 			return r;
@@ -418,7 +428,7 @@ static int amdgpu_syncobj_lookup_and_add(struct amdgpu_cs_parser *p,
 		return r;
 	}
 
-	r = amdgpu_sync_fence(&p->job->sync, fence);
+	r = amdgpu_sync_fence(&p->jobs[0]->sync, fence);
 	dma_fence_put(fence);
 
 	return r;
@@ -541,20 +551,30 @@ static int amdgpu_cs_p2_syncobj_timeline_signal(struct amdgpu_cs_parser *p,
 
static int amdgpu_cs_pass2(struct amdgpu_cs_parser *p)
{
-	unsigned int num_ibs = 0, ce_preempt = 0, de_preempt = 0;
+	unsigned int ce_preempt = 0, de_preempt = 0;
+	unsigned int job_idx = 0, ib_idx = 0;
 	int i, r;
 
 	for (i = 0; i < p->nchunks; ++i) {
 		struct amdgpu_cs_chunk *chunk;
+		struct amdgpu_job *job;
 
 		chunk = &p->chunks[i];
 
 		switch (chunk->chunk_id) {
 		case AMDGPU_CHUNK_ID_IB:
-			r = amdgpu_cs_p2_ib(p, chunk, &num_ibs,
+			job = p->jobs[job_idx];
+			r = amdgpu_cs_p2_ib(p, job, &job->ibs[ib_idx], chunk,
 					    &ce_preempt, &de_preempt);
 			if (r)
 				return r;
+
+			if (++ib_idx == job->num_ibs) {
+				++job_idx;
+				ib_idx = 0;
+				ce_preempt = 0;
+				de_preempt = 0;
+			}
 			break;
 		case AMDGPU_CHUNK_ID_DEPENDENCIES:
 		case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
@@ -825,6 +845,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	struct amdgpu_vm *vm = &fpriv->vm;
 	struct amdgpu_bo_list_entry *e;
 	struct list_head duplicates;
+	unsigned int i;
 	int r;
 
 	INIT_LIST_HEAD(&p->validated);
@@ -905,16 +926,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 		e->bo_va = amdgpu_vm_bo_find(vm, bo);
 	}
 
-	/* Move fence waiting after getting reservation lock of
-	 * PD root. Then there is no need on a ctx mutex lock.
-	 */
-	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity);
-	if (unlikely(r != 0)) {
-		if (r != -ERESTARTSYS)
-			DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
-		goto error_validate;
-	}
-
 	amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
 					  &p->bytes_moved_vis_threshold);
 	p->bytes_moved = 0;
@@ -938,14 +949,16 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
 				     p->bytes_moved_vis);
 
-	amdgpu_job_set_resources(p->job, p->bo_list->gds_obj,
-				 p->bo_list->gws_obj, p->bo_list->oa_obj);
+	for (i = 0; i < p->gang_size; ++i)
+		amdgpu_job_set_resources(p->jobs[i], p->bo_list->gds_obj,
+					 p->bo_list->gws_obj,
+					 p->bo_list->oa_obj);
 
 	if (!r && p->uf_entry.tv.bo) {
 		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
 
 		r = amdgpu_ttm_alloc_gart(&uf->tbo);
-		p->job->uf_addr += amdgpu_bo_gpu_offset(uf);
+		p->jobs[p->gang_size - 1]->uf_addr += amdgpu_bo_gpu_offset(uf);
 	}
 
 error_validate:
@@ -955,20 +968,24 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	return r;
 }
 
-static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser)
+static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *p)
 {
-	int i;
+	int i, j;
 
 	if (!trace_amdgpu_cs_enabled())
 		return;
 
-	for (i = 0; i < parser->job->num_ibs; i++)
-		trace_amdgpu_cs(parser, i);
+	for (i = 0; i < p->gang_size; ++i) {
+		struct amdgpu_job *job = p->jobs[i];
+
+		for (j = 0; j < job->num_ibs; ++j)
+			trace_amdgpu_cs(p, job, &job->ibs[j]);
+	}
 }
 
-static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
+static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p,
+			       struct amdgpu_job *job)
 {
-	struct amdgpu_job *job = p->job;
 	struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
 	unsigned int i;
 	int r;
@@ -1007,14 +1024,13 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
 			memcpy(ib->ptr, kptr, job->ibs[i].length_dw * 4);
 			amdgpu_bo_kunmap(aobj);
 
-			r = amdgpu_ring_parse_cs(ring, p, p->job,
-						 &p->job->ibs[i]);
+			r = amdgpu_ring_parse_cs(ring, p, job, &job->ibs[i]);
 			if (r)
 				return r;
 		} else {
 			ib->ptr = (uint32_t *)kptr;
-			r = amdgpu_ring_patch_cs_in_place(ring, p, p->job,
-							  &p->job->ibs[i]);
+			r = amdgpu_ring_patch_cs_in_place(ring, p, job,
+							  &job->ibs[i]);
 			amdgpu_bo_kunmap(aobj);
 			if (r)
 				return r;
@@ -1024,14 +1040,29 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p)
 	return 0;
 }
 
+static int amdgpu_cs_patch_jobs(struct amdgpu_cs_parser *p)
+{
+	unsigned int i;
+	int r;
+
+	for (i = 0; i < p->gang_size; ++i) {
+		r = amdgpu_cs_patch_ibs(p, p->jobs[i]);
+		if (r)
+			return r;
+	}
+	return 0;
+}
+
static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
{
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
 	struct amdgpu_device *adev = p->adev;
+	struct amdgpu_job *job = p->jobs[0];
 	struct amdgpu_vm *vm = &fpriv->vm;
 	struct amdgpu_bo_list_entry *e;
 	struct amdgpu_bo_va *bo_va;
 	struct amdgpu_bo *bo;
+	unsigned int i;
 	int r;
 
 	r = amdgpu_vm_clear_freed(adev, vm, NULL);
@@ -1042,7 +1073,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 	if (r)
 		return r;
 
-	r = amdgpu_sync_vm_fence(&p->job->sync, fpriv->prt_va->last_pt_update);
+	r = amdgpu_sync_vm_fence(&job->sync, fpriv->prt_va->last_pt_update);
 	if (r)
 		return r;
 
@@ -1052,7 +1083,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 		if (r)
 			return r;
 
-		r = amdgpu_sync_vm_fence(&p->job->sync, bo_va->last_pt_update);
+		r = amdgpu_sync_vm_fence(&job->sync, bo_va->last_pt_update);
 		if (r)
 			return r;
 	}
@@ -1071,7 +1102,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 		if (r)
 			return r;
 
-		r = amdgpu_sync_vm_fence(&p->job->sync, bo_va->last_pt_update);
+		r = amdgpu_sync_vm_fence(&job->sync, bo_va->last_pt_update);
 		if (r)
 			return r;
 	}
@@ -1084,11 +1115,18 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 	if (r)
 		return r;
 
-	r = amdgpu_sync_vm_fence(&p->job->sync, vm->last_update);
+	r = amdgpu_sync_vm_fence(&job->sync, vm->last_update);
 	if (r)
 		return r;
 
-	p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
+	for (i = 0; i < p->gang_size; ++i) {
+		job = p->jobs[i];
+
+		if (!job->vm)
+			continue;
+
+		job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
+	}
 
 	if (amdgpu_vm_debug) {
 		/* Invalidate all BOs to test for userspace bugs */
@@ -1109,7 +1147,9 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)

static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
{
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_job *job = p->jobs[0];
 	struct amdgpu_bo_list_entry *e;
+	unsigned int i;
 	int r;
 
 	list_for_each_entry(e, &p->validated, tv.head) {
@@ -1119,12 +1159,23 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 
 		sync_mode = amdgpu_bo_explicit_sync(bo) ?
 			AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
-		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode,
+		r = amdgpu_sync_resv(p->adev, &job->sync, resv, sync_mode,
 				     &fpriv->vm);
 		if (r)
 			return r;
 	}
-	return 0;
+
+	for (i = 1; i < p->gang_size; ++i) {
+		r = amdgpu_sync_clone(&job->sync, &p->jobs[i]->sync);
+		if (r)
+			return r;
+	}
+
+	r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entities[p->gang_size - 1]);
+	if (r && r != -ERESTARTSYS)
+		DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
+
+	return r;
 }
 
 static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p)
@@ -1147,17 +1198,27 @@ static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p)
 static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 			    union drm_amdgpu_cs *cs)
 {
+	struct amdgpu_job *last = p->jobs[p->gang_size - 1];
 	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-	struct drm_sched_entity *entity = p->entity;
 	struct amdgpu_bo_list_entry *e;
-	struct amdgpu_job *job;
+	unsigned int i;
 	uint64_t seq;
 	int r;
 
-	job = p->job;
-	p->job = NULL;
+	for (i = 0; i < p->gang_size; ++i)
+		drm_sched_job_arm(&p->jobs[i]->base);
 
-	drm_sched_job_arm(&job->base);
+	for (i = 0; i < (p->gang_size - 1); ++i) {
+		struct dma_fence *fence;
+
+		fence = &p->jobs[i]->base.s_fence->scheduled;
+		r = amdgpu_sync_fence(&last->sync, fence);
+		if (r)
+			goto error_cleanup;
+	}
+
+	for (i = 0; i < p->gang_size; ++i)
+		amdgpu_job_set_gang_leader(p->jobs[i], last);
 
 	/* No memory allocation is allowed while holding the notifier lock.
 	 * The lock is held until amdgpu_cs_submit is finished and fence is
@@ -1175,44 +1236,58 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 	}
 	if (r) {
 		r = -EAGAIN;
-		goto error_abort;
+		goto error_unlock;
 	}
 
-	p->fence = dma_fence_get(&job->base.s_fence->finished);
+	p->fence = dma_fence_get(&last->base.s_fence->finished);
 
-	amdgpu_ctx_add_fence(p->ctx, entity, p->fence, &seq);
+	amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_size - 1], p->fence,
+			     &seq);
 	amdgpu_cs_post_dependencies(p);
 
-	if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
+	if ((last->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
 	    !p->ctx->preamble_presented) {
-		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
+		last->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
 		p->ctx->preamble_presented = true;
 	}
 
 	cs->out.handle = seq;
-	job->uf_sequence = seq;
-
-	amdgpu_job_free_resources(job);
+	last->uf_sequence = seq;
 
-	trace_amdgpu_cs_ioctl(job);
 	amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket);
-	drm_sched_entity_push_job(&job->base);
+	for (i = 0; i < p->gang_size; ++i) {
+		amdgpu_job_free_resources(p->jobs[i]);
+		trace_amdgpu_cs_ioctl(p->jobs[i]);
+		drm_sched_entity_push_job(&p->jobs[i]->base);
+		p->jobs[i] = NULL;
+	}
 
 	amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
 
-	/* Make sure all BOs are remembered as writers */
-	amdgpu_bo_list_for_each_entry(e, p->bo_list)
+	list_for_each_entry(e, &p->validated, tv.head) {
+
+		/* Everybody except for the gang leader uses BOOKKEEP */
+		for (i = 0; i < (p->gang_size - 1); ++i) {
+			dma_resv_add_fence(e->tv.bo->base.resv,
+					   &p->jobs[i]->base.s_fence->finished,
+					   DMA_RESV_USAGE_BOOKKEEP);
+		}
+
+		/* The gang leader is remembered as writer */
 		e->tv.num_shared = 0;
+	}


p->jobs[i] = NULL is done in the previous loop, so dereferencing &p->jobs[i]->base.s_fence->finished here causes a NULL pointer crash.

Thank you,
Yogesh
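
A possible fix, as an untested sketch: keep p->jobs[] populated until the reservation bookkeeping is done, and only clear the pointers in the final push loop, roughly:

	/* Sketch of a possible reordering (untested): do the BOOKKEEP
	 * bookkeeping while p->jobs[] is still populated, and push the
	 * jobs - clearing the pointers - only afterwards.
	 */
	list_for_each_entry(e, &p->validated, tv.head) {
		/* Everybody except for the gang leader uses BOOKKEEP */
		for (i = 0; i < (p->gang_size - 1); ++i)
			dma_resv_add_fence(e->tv.bo->base.resv,
					   &p->jobs[i]->base.s_fence->finished,
					   DMA_RESV_USAGE_BOOKKEEP);

		/* The gang leader is remembered as writer */
		e->tv.num_shared = 0;
	}

	for (i = 0; i < p->gang_size; ++i) {
		amdgpu_job_free_resources(p->jobs[i]);
		trace_amdgpu_cs_ioctl(p->jobs[i]);
		drm_sched_entity_push_job(&p->jobs[i]->base);
		p->jobs[i] = NULL;
	}

That way the finished fences are still reachable when they are added to the reservation objects, and the job pointers are given up only once ownership passes to the scheduler.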
 


 	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
 	mutex_unlock(&p->adev->notifier_lock);
 
 	return 0;
 
-error_abort:
-	drm_sched_job_cleanup(&job->base);
+error_unlock:
 	mutex_unlock(&p->adev->notifier_lock);
-	amdgpu_job_free(job);
+
+error_cleanup:
+	for (i = 0; i < p->gang_size; ++i)
+		drm_sched_job_cleanup(&p->jobs[i]->base);
 	return r;
 }
 
@@ -1229,17 +1304,18 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser)
 
 	dma_fence_put(parser->fence);
 
-	if (parser->ctx) {
+	if (parser->ctx)
 		amdgpu_ctx_put(parser->ctx);
-	}
 	if (parser->bo_list)
 		amdgpu_bo_list_put(parser->bo_list);
 
 	for (i = 0; i < parser->nchunks; i++)
 		kvfree(parser->chunks[i].kdata);
 	kvfree(parser->chunks);
-	if (parser->job)
-		amdgpu_job_free(parser->job);
+	for (i = 0; i < parser->gang_size; ++i) {
+		if (parser->jobs[i])
+			amdgpu_job_free(parser->jobs[i]);
+	}
 	if (parser->uf_entry.tv.bo) {
 		struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo);
 
@@ -1283,7 +1359,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 		goto error_fini;
 	}
 
-	r = amdgpu_cs_patch_ibs(&parser);
+	r = amdgpu_cs_patch_jobs(&parser);
 	if (r)
 		goto error_backoff;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
index 652b5593499f..ba5860c08270 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
@@ -27,6 +27,8 @@
 #include "amdgpu_bo_list.h"
 #include "amdgpu_ring.h"
 
+#define AMDGPU_CS_GANG_SIZE	4
+
 struct amdgpu_bo_va_mapping;
 
 struct amdgpu_cs_chunk {
@@ -50,9 +52,10 @@ struct amdgpu_cs_parser {
 	unsigned		nchunks;
 	struct amdgpu_cs_chunk	*chunks;
 
-	/* scheduler job object */
-	struct drm_sched_entity	*entity;
-	struct amdgpu_job	*job;
+	/* scheduler job objects */
+	unsigned int		gang_size;
+	struct drm_sched_entity	*entities[AMDGPU_CS_GANG_SIZE];
+	struct amdgpu_job	*jobs[AMDGPU_CS_GANG_SIZE];
 
 	/* buffer objects */
 	struct ww_acquire_ctx		ticket;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
index d855cb53c7e0..a5167cb91ba5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
@@ -140,8 +140,10 @@ TRACE_EVENT(amdgpu_bo_create,
 );
 
 TRACE_EVENT(amdgpu_cs,
-	    TP_PROTO(struct amdgpu_cs_parser *p, int i),
-	    TP_ARGS(p, i),
+	    TP_PROTO(struct amdgpu_cs_parser *p,
+		     struct amdgpu_job *job,
+		     struct amdgpu_ib *ib),
+	    TP_ARGS(p, job, ib),
 	    TP_STRUCT__entry(
 			     __field(struct amdgpu_bo_list *, bo_list)
 			     __field(u32, ring)
@@ -151,10 +153,10 @@ TRACE_EVENT(amdgpu_cs,
 
 	    TP_fast_assign(
 			   __entry->bo_list = p->bo_list;
-			   __entry->ring = to_amdgpu_ring(p->entity->rq->sched)->idx;
-			   __entry->dw = p->job->ibs[i].length_dw;
+			   __entry->ring = to_amdgpu_ring(job->base.sched)->idx;
+			   __entry->dw = ib->length_dw;
 			   __entry->fences = amdgpu_fence_count_emitted(
-				to_amdgpu_ring(p->entity->rq->sched));
+				to_amdgpu_ring(job->base.sched));
 			   ),
 	    TP_printk("bo_list=%p, ring=%u, dw=%u, fences=%u",
 		      __entry->bo_list, __entry->ring, __entry->dw,
--
2.25.1

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 10/10] drm/amdgpu: add gang submit frontend
  2022-06-01 12:09   ` Mohan Marimuthu, Yogesh
@ 2022-06-01 12:11     ` Christian König
  2022-06-01 13:21       ` Mohan Marimuthu, Yogesh
  0 siblings, 1 reply; 27+ messages in thread
From: Christian König @ 2022-06-01 12:11 UTC (permalink / raw)
  To: Mohan Marimuthu, Yogesh, Christian König, amd-gfx, Olsak, Marek



Am 01.06.22 um 14:09 schrieb Mohan Marimuthu, Yogesh:
> [SNIP]
> -	/* Make sure all BOs are remembered as writers */
> -	amdgpu_bo_list_for_each_entry(e, p->bo_list)
> +	list_for_each_entry(e, &p->validated, tv.head) {
> +
> +		/* Everybody except for the gang leader uses BOOKKEEP */
> +		for (i = 0; i < (p->gang_size - 1); ++i) {
> +			dma_resv_add_fence(e->tv.bo->base.resv,
> +					   &p->jobs[i]->base.s_fence->finished,
> +					   DMA_RESV_USAGE_BOOKKEEP);
> +		}
> +
> +		/* The gang leader is remembered as writer */
>   		e->tv.num_shared = 0;
> +	}
>
>
> p->jobs[i] = NULL is done in the previous loop, so dereferencing &p->jobs[i]->base.s_fence->finished here causes a NULL pointer crash.

Ah, yes good point. Going to fix that.

Any more comments on this? Did you finished the test cases?

Thanks,
Christian.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [PATCH 10/10] drm/amdgpu: add gang submit frontend
  2022-06-01 12:11     ` Christian König
@ 2022-06-01 13:21       ` Mohan Marimuthu, Yogesh
  0 siblings, 0 replies; 27+ messages in thread
From: Mohan Marimuthu, Yogesh @ 2022-06-01 13:21 UTC (permalink / raw)
  To: Koenig, Christian, Christian König, amd-gfx, Olsak, Marek

Hi Christian,

No other comments. With p->jobs[i] fixed, the test case worked. I have to clean up the code and send it for review.
I also want to add a comparison of the time taken with and without gang submission, and fail the test case if the former is slower. I will do that later and send the test case for review first.
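
A minimal skeleton for that timing check might look like the following (submit_once() is a placeholder for one complete submission plus a fence wait, and the CU_ASSERT at the end assumes the CUnit framework used by the amdgpu tests):

#include <time.h>

/* Placeholder: one CS submission (gang or serial) plus a wait for
 * its fence; looping amortizes the setup cost.
 */
extern void submit_once(int use_gang);

static double run_case(int use_gang, unsigned int loops)
{
	struct timespec t0, t1;
	unsigned int i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < loops; ++i)
		submit_once(use_gang);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

/* e.g.: CU_ASSERT(run_case(1, 1000) <= run_case(0, 1000)); */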

Thank you,
Yogesh

-----Original Message-----
From: Koenig, Christian <Christian.Koenig@amd.com> 
Sent: Wednesday, June 1, 2022 5:42 PM
To: Mohan Marimuthu, Yogesh <Yogesh.Mohanmarimuthu@amd.com>; Christian König <ckoenig.leichtzumerken@gmail.com>; amd-gfx@lists.freedesktop.org; Olsak, Marek <Marek.Olsak@amd.com>
Subject: Re: [PATCH 10/10] drm/amdgpu: add gang submit frontend



Am 01.06.22 um 14:09 schrieb Mohan Marimuthu, Yogesh:
> [SNIP]
> -	/* Make sure all BOs are remembered as writers */
> -	amdgpu_bo_list_for_each_entry(e, p->bo_list)
> +	list_for_each_entry(e, &p->validated, tv.head) {
> +
> +		/* Everybody except for the gang leader uses BOOKKEEP */
> +		for (i = 0; i < (p->gang_size - 1); ++i) {
> +			dma_resv_add_fence(e->tv.bo->base.resv,
> +					   &p->jobs[i]->base.s_fence->finished,
> +					   DMA_RESV_USAGE_BOOKKEEP);
> +		}
> +
> +		/* The gang leader is remembered as writer */
>   		e->tv.num_shared = 0;
> +	}
>
>
> p->jobs[i] = NULL is done in the previous loop, so dereferencing &p->jobs[i]->base.s_fence->finished here causes a NULL pointer crash.

Ah, yes good point. Going to fix that.

Any more comments on this? Did you finished the test cases?

Thanks,
Christian.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: Gang submit
  2022-03-03  8:22 Gang submit Christian König
                   ` (9 preceding siblings ...)
  2022-03-03  8:23 ` [PATCH 10/10] drm/amdgpu: add gang submit frontend Christian König
@ 2022-09-06  1:43 ` Liu, Monk
  2022-09-06  9:02   ` Christian König
  10 siblings, 1 reply; 27+ messages in thread
From: Liu, Monk @ 2022-09-06  1:43 UTC (permalink / raw)
  To: Christian König, amd-gfx, Olsak, Marek

Hi Christian


> A gang submission guarantees that multiple IBs can run on different engines at the same time.
> The effect is that as long as members of a gang are waiting to be submitted no other gang can start pushing jobs to the hardware and so deadlocks are effectively prevented.

Could you please help to explain or confirm:

1) If one gfx IB and one compute IB are in a gang, can they literally run in parallel on the GPU?
2) If one gfx IB and one compute IB belong to two different gangs, will they be put on the gfx and compute rings one after the other (e.g. gang1's gfx IB is scheduled and signaled, and only then gang2's compute IB is scheduled)?

Thanks 
-------------------------------------------------------------------
Monk Liu | Cloud GPU & Virtualization Solution | AMD
-------------------------------------------------------------------
we are hiring software manager for CVS core team
-------------------------------------------------------------------

-----Original Message-----
From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Christian König
Sent: 2022年3月3日 16:23
To: amd-gfx@lists.freedesktop.org; Olsak, Marek <Marek.Olsak@amd.com>
Subject: Gang submit

Hi guys,

this patch set implements the requirement for so-called gang submissions in the CS interface.

A gang submission guarantees that multiple IBs can run on different engines at the same time.

This is implemented by keeping a global per-device gang around represented by a dma_fence which signals as soon as all jobs in a gang are pushed to the hardware.

The effect is that as long as members of a gang are waiting to be submitted no other gang can start pushing jobs to the hardware and so deadlocks are effectively prevented.

The whole set is based on top of my dma_resv_usage work and a few patches merged over from amd-staging-drm-next, so it won't easily apply anywhere.

Please review and comment,
Christian.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: Gang submit
  2022-09-06  1:43 ` Gang submit Liu, Monk
@ 2022-09-06  9:02   ` Christian König
  0 siblings, 0 replies; 27+ messages in thread
From: Christian König @ 2022-09-06  9:02 UTC (permalink / raw)
  To: Liu, Monk, amd-gfx, Olsak, Marek

Hi Monk,

> If one gfx IB and one compute IB are in a gang, can they literally run in parallel on the GPU?

Yes, that is essentially the functionality of gang submit.

The driver stack must guarantee that those IBs run at the same time 
because they use a ring buffer to communicate with each other.

> If one gfx IB and one compute IB belong to two different gangs, will they be put on the gfx and compute rings one after the other (e.g. gang1's gfx IB is scheduled and signaled, and only then gang2's compute IB is scheduled)?

Yes, gang submissions must never overlap; otherwise you can run into
lockups.
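
The exclusion is implemented in the backend (patch 09) with a per-device gang fence that is swapped atomically. As a rough sketch of that mechanism - names follow the patch, the exact code may differ - the switch looks like this:

/* Sketch: returns an old, unsignaled gang fence to wait for, or NULL
 * if the new gang can go ahead immediately. The cmpxchg loop retries
 * until the previous gang fence is either ours or already signaled.
 */
struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
					    struct dma_fence *gang)
{
	struct dma_fence *old = NULL;

	do {
		dma_fence_put(old);
		rcu_read_lock();
		old = dma_fence_get_rcu_safe(&adev->gang_submit);
		rcu_read_unlock();

		if (old == gang)
			break;

		if (!dma_fence_is_signaled(old))
			return old;

	} while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
			 old, gang) != old);

	dma_fence_put(old);
	return NULL;
}

A job that gets an old, unsignaled gang fence back simply waits for it before touching the hardware, which is what serializes the gangs.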

Regards,
Christian.

Am 06.09.22 um 03:43 schrieb Liu, Monk:
> Hi Christian
>
>
>> A gang submission guarantees that multiple IBs can run on different engines at the same time.
>> The effect is that as long as members of a gang are waiting to be submitted no other gang can start pushing jobs to the hardware and so deadlocks are effectively prevented.
> Could you please help to explain or confirm:
>
> 1) If one gfx IB and one compute IB are in a gang, can they literally run in parallel on the GPU?
> 2) If one gfx IB and one compute IB belong to two different gangs, will they be put on the gfx and compute rings one after the other (e.g. gang1's gfx IB is scheduled and signaled, and only then gang2's compute IB is scheduled)?
>
> Thanks
> -------------------------------------------------------------------
> Monk Liu | Cloud GPU & Virtualization Solution | AMD
> -------------------------------------------------------------------
> we are hiring software manager for CVS core team
> -------------------------------------------------------------------
>
> -----Original Message-----
> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Christian König
> Sent: 2022年3月3日 16:23
> To: amd-gfx@lists.freedesktop.org; Olsak, Marek <Marek.Olsak@amd.com>
> Subject: Gang submit
>
> Hi guys,
>
> this patch set implements the requirement for so-called gang submissions in the CS interface.
>
> A gang submission guarantees that multiple IBs can run on different engines at the same time.
>
> This is implemented by keeping a global per-device gang around represented by a dma_fence which signals as soon as all jobs in a gang are pushed to the hardware.
>
> The effect is that as long as members of a gang are waiting to be submitted no other gang can start pushing jobs to the hardware and so deadlocks are effectively prevented.
>
> The whole set is based on top of my dma_resv_usage work and a few patches merged over from amd-staging-drm-next, so it won't easily apply anywhere.
>
> Please review and comment,
> Christian.
>


^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2022-09-06  9:02 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-03  8:22 Gang submit Christian König
2022-03-03  8:22 ` [PATCH 01/10] drm/amdgpu: install ctx entities with cmpxchg Christian König
2022-03-03 19:52   ` Andrey Grodzovsky
2022-03-03  8:23 ` [PATCH 02/10] drm/amdgpu: header cleanup Christian König
2022-03-03 19:56   ` Andrey Grodzovsky
2022-03-03  8:23 ` [PATCH 03/10] drm/amdgpu: cleanup and reorder amdgpu_cs.c Christian König
2022-03-03  8:23 ` [PATCH 04/10] drm/amdgpu: remove SRIOV and MCBP dependencies from the CS Christian König
2022-03-03  8:23 ` [PATCH 05/10] drm/amdgpu: use job and ib structures directly in CS parsers Christian König
2022-03-03 20:16   ` Andrey Grodzovsky
2022-03-03  8:23 ` [PATCH 06/10] drm/amdgpu: properly imbed the IBs into the job Christian König
2022-03-03 20:25   ` Andrey Grodzovsky
2022-03-03  8:23 ` [PATCH 07/10] drm/amdgpu: move setting the job resources Christian König
2022-03-03  8:23 ` [PATCH 08/10] drm/amdgpu: initialize the vmid_wait with the stub fence Christian König
2022-03-03 20:31   ` Andrey Grodzovsky
2022-03-03  8:23 ` [PATCH 09/10] drm/amdgpu: add gang submit backend Christian König
2022-03-04 17:10   ` Andrey Grodzovsky
2022-03-05 18:40     ` Christian König
2022-03-07 15:40       ` Andrey Grodzovsky
2022-03-07 15:59         ` Christian König
2022-03-07 16:02           ` Andrey Grodzovsky
2022-03-03  8:23 ` [PATCH 10/10] drm/amdgpu: add gang submit frontend Christian König
2022-03-07 17:02   ` Andrey Grodzovsky
2022-06-01 12:09   ` Mohan Marimuthu, Yogesh
2022-06-01 12:11     ` Christian König
2022-06-01 13:21       ` Mohan Marimuthu, Yogesh
2022-09-06  1:43 ` Gang submit Liu, Monk
2022-09-06  9:02   ` Christian König
