* [PATCH 0/2] Move scheduler out of AMDGPU
@ 2017-12-01 15:28 Lucas Stach
  2017-12-01 15:29 ` [PATCH 1/2] drm: move amd_gpu_scheduler into common location Lucas Stach
       [not found] ` <20171201152901.3626-1-l.stach-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
  0 siblings, 2 replies; 10+ messages in thread
From: Lucas Stach @ 2017-12-01 15:28 UTC (permalink / raw)
  To: Christian König, Alex Deucher
  Cc: David Airlie, kernel-bIcnvbaLZ9MEGnE8C9+IrQ,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	patchwork-lst-bIcnvbaLZ9MEGnE8C9+IrQ

Hi all,

so this is the first step to make the marvelous AMDGPU scheduler usable
for other drivers. I have a (mostly) working prototype of Etnaviv using
the scheduler, but those patches need to keep baking for a while.

I'm sending this out now, as I want to avoid rebasing this change too much
and don't want to take people by surprise when the Etnaviv implementation
surfaces. Also, this might need some coordination between AMDGPU and
Etnaviv, which would be good to get going now.

Please speak up now if you have any objections or comments.

Regards,
Lucas

Lucas Stach (2):
  drm: move amd_gpu_scheduler into common location
  drm/sched: move fence slab handling to module init/exit

 drivers/gpu/drm/Kconfig                            |   5 +
 drivers/gpu/drm/Makefile                           |   1 +
 drivers/gpu/drm/amd/amdgpu/Makefile                |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h                |  16 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c             |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c            |  38 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |   8 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c          |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c            |  22 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c           |  14 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h           |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c          |  20 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h          |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c           |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c            |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h            |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h            |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h            |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c            |  14 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h            |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |  10 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |   4 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |   4 +-
 drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c              |   8 +-
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   8 +-
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h      | 185 --------------
 drivers/gpu/drm/scheduler/Makefile                 |   4 +
 .../gpu/drm/{amd => }/scheduler/gpu_scheduler.c    | 281 +++++++++++----------
 drivers/gpu/drm/{amd => }/scheduler/sched_fence.c  | 122 +++++----
 include/drm/gpu_scheduler.h                        | 171 +++++++++++++
 .../drm/gpu_scheduler_trace.h                      |  14 +-
 34 files changed, 525 insertions(+), 511 deletions(-)
 delete mode 100644 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
 create mode 100644 drivers/gpu/drm/scheduler/Makefile
 rename drivers/gpu/drm/{amd => }/scheduler/gpu_scheduler.c (64%)
 rename drivers/gpu/drm/{amd => }/scheduler/sched_fence.c (58%)
 create mode 100644 include/drm/gpu_scheduler.h
 rename drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h => include/drm/gpu_scheduler_trace.h (83%)

-- 
2.11.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* [PATCH 1/2] drm: move amd_gpu_scheduler into common location
  2017-12-01 15:28 [PATCH 0/2] Move scheduler out of AMDGPU Lucas Stach
@ 2017-12-01 15:29 ` Lucas Stach
       [not found] ` <20171201152901.3626-1-l.stach-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
  1 sibling, 0 replies; 10+ messages in thread
From: Lucas Stach @ 2017-12-01 15:29 UTC (permalink / raw)
  To: Christian König, Alex Deucher
  Cc: David Airlie, kernel, amd-gfx, dri-devel, patchwork-lst

This moves and renames the AMDGPU scheduler to a common location in DRM
in order to facilitate reuse by other drivers. This is mostly a
straightforward rename with no code changes.

One notable exception is the function to_drm_sched_fence(), which is no
longer an inline header function; this avoids the need to export the
drm_sched_fence_ops_scheduled and drm_sched_fence_ops_finished structures.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/gpu/drm/Kconfig                            |   5 +
 drivers/gpu/drm/Makefile                           |   1 +
 drivers/gpu/drm/amd/amdgpu/Makefile                |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h                |  16 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c             |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c            |  38 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c          |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c            |  22 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c           |  14 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h           |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c          |  20 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h          |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c           |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c            |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h            |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h            |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h            |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c            |  14 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h            |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |  10 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |   4 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |   4 +-
 drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c              |   8 +-
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   8 +-
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h      | 185 --------------
 drivers/gpu/drm/scheduler/Makefile                 |   4 +
 .../gpu/drm/{amd => }/scheduler/gpu_scheduler.c    | 281 +++++++++++----------
 drivers/gpu/drm/{amd => }/scheduler/sched_fence.c  | 118 +++++----
 include/drm/gpu_scheduler.h                        | 174 +++++++++++++
 .../drm/gpu_scheduler_trace.h                      |  14 +-
 34 files changed, 526 insertions(+), 505 deletions(-)
 delete mode 100644 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
 create mode 100644 drivers/gpu/drm/scheduler/Makefile
 rename drivers/gpu/drm/{amd => }/scheduler/gpu_scheduler.c (64%)
 rename drivers/gpu/drm/{amd => }/scheduler/sched_fence.c (59%)
 create mode 100644 include/drm/gpu_scheduler.h
 rename drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h => include/drm/gpu_scheduler_trace.h (83%)

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 4d9f21831741..ee38a3db1890 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -149,6 +149,10 @@ config DRM_VM
 	bool
 	depends on DRM && MMU
 
+config DRM_SCHED
+	tristate
+	depends on DRM
+
 source "drivers/gpu/drm/i2c/Kconfig"
 
 source "drivers/gpu/drm/arm/Kconfig"
@@ -178,6 +182,7 @@ config DRM_AMDGPU
 	depends on DRM && PCI && MMU
 	select FW_LOADER
         select DRM_KMS_HELPER
+	select DRM_SCHED
         select DRM_TTM
 	select POWER_SUPPLY
 	select HWMON
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index e9500844333e..1f6ba9e34e31 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -101,3 +101,4 @@ obj-$(CONFIG_DRM_MXSFB)	+= mxsfb/
 obj-$(CONFIG_DRM_TINYDRM) += tinydrm/
 obj-$(CONFIG_DRM_PL111) += pl111/
 obj-$(CONFIG_DRM_TVE200) += tve200/
+obj-$(CONFIG_DRM_SCHED)	+= scheduler/
diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 78d609123420..5f690f023e75 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -115,10 +115,7 @@ amdgpu-y += \
 amdgpu-y += amdgpu_cgs.o
 
 # GPU scheduler
-amdgpu-y += \
-	../scheduler/gpu_scheduler.o \
-	../scheduler/sched_fence.o \
-	amdgpu_job.o
+amdgpu-y += amdgpu_job.o
 
 # ACP componet
 ifneq ($(CONFIG_DRM_AMD_ACP),)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 5afaf6016b4a..f17882b87cf5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -45,6 +45,7 @@
 #include <drm/drmP.h>
 #include <drm/drm_gem.h>
 #include <drm/amdgpu_drm.h>
+#include <drm/gpu_scheduler.h>
 
 #include <kgd_kfd_interface.h>
 
@@ -68,7 +69,6 @@
 #include "amdgpu_mn.h"
 #include "amdgpu_dm.h"
 
-#include "gpu_scheduler.h"
 #include "amdgpu_virt.h"
 #include "amdgpu_gart.h"
 
@@ -684,7 +684,7 @@ struct amdgpu_ib {
 	uint32_t			flags;
 };
 
-extern const struct amd_sched_backend_ops amdgpu_sched_ops;
+extern const struct drm_sched_backend_ops amdgpu_sched_ops;
 
 int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
 		     struct amdgpu_job **job, struct amdgpu_vm *vm);
@@ -694,7 +694,7 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
 void amdgpu_job_free_resources(struct amdgpu_job *job);
 void amdgpu_job_free(struct amdgpu_job *job);
 int amdgpu_job_submit(struct amdgpu_job *job, struct amdgpu_ring *ring,
-		      struct amd_sched_entity *entity, void *owner,
+		      struct drm_sched_entity *entity, void *owner,
 		      struct dma_fence **f);
 
 /*
@@ -727,7 +727,7 @@ int amdgpu_queue_mgr_map(struct amdgpu_device *adev,
 struct amdgpu_ctx_ring {
 	uint64_t		sequence;
 	struct dma_fence	**fences;
-	struct amd_sched_entity	entity;
+	struct drm_sched_entity	entity;
 };
 
 struct amdgpu_ctx {
@@ -740,8 +740,8 @@ struct amdgpu_ctx {
 	struct dma_fence	**fences;
 	struct amdgpu_ctx_ring	rings[AMDGPU_MAX_RINGS];
 	bool			preamble_presented;
-	enum amd_sched_priority init_priority;
-	enum amd_sched_priority override_priority;
+	enum drm_sched_priority init_priority;
+	enum drm_sched_priority override_priority;
 	struct mutex            lock;
 };
 
@@ -760,7 +760,7 @@ int amdgpu_ctx_add_fence(struct amdgpu_ctx *ctx, struct amdgpu_ring *ring,
 struct dma_fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx,
 				   struct amdgpu_ring *ring, uint64_t seq);
 void amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx,
-				  enum amd_sched_priority priority);
+				  enum drm_sched_priority priority);
 
 int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
 		     struct drm_file *filp);
@@ -1109,7 +1109,7 @@ struct amdgpu_cs_parser {
 #define AMDGPU_HAVE_CTX_SWITCH              (1 << 2) /* bit set means context switch occured */
 
 struct amdgpu_job {
-	struct amd_sched_job    base;
+	struct drm_sched_job    base;
 	struct amdgpu_device	*adev;
 	struct amdgpu_vm	*vm;
 	struct amdgpu_ring	*ring;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index a57cec737c18..ea10ac821a6d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1141,7 +1141,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 			    union drm_amdgpu_cs *cs)
 {
 	struct amdgpu_ring *ring = p->job->ring;
-	struct amd_sched_entity *entity = &p->ctx->rings[ring->idx].entity;
+	struct drm_sched_entity *entity = &p->ctx->rings[ring->idx].entity;
 	struct amdgpu_job *job;
 	unsigned i;
 	uint64_t seq;
@@ -1164,7 +1164,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 	job = p->job;
 	p->job = NULL;
 
-	r = amd_sched_job_init(&job->base, &ring->sched, entity, p->filp);
+	r = drm_sched_job_init(&job->base, &ring->sched, entity, p->filp);
 	if (r) {
 		amdgpu_job_free(job);
 		amdgpu_mn_unlock(p->mn);
@@ -1191,10 +1191,10 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 
 	amdgpu_job_free_resources(job);
 	amdgpu_ring_priority_get(job->ring,
-				 amd_sched_get_job_priority(&job->base));
+				 drm_sched_get_job_priority(&job->base));
 
 	trace_amdgpu_cs_ioctl(job);
-	amd_sched_entity_push_job(&job->base);
+	drm_sched_entity_push_job(&job->base);
 
 	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
 	amdgpu_mn_unlock(p->mn);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index c184468e2b2b..1165b6c2dd05 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -28,10 +28,10 @@
 #include "amdgpu_sched.h"
 
 static int amdgpu_ctx_priority_permit(struct drm_file *filp,
-				      enum amd_sched_priority priority)
+				      enum drm_sched_priority priority)
 {
 	/* NORMAL and below are accessible by everyone */
-	if (priority <= AMD_SCHED_PRIORITY_NORMAL)
+	if (priority <= DRM_SCHED_PRIORITY_NORMAL)
 		return 0;
 
 	if (capable(CAP_SYS_NICE))
@@ -44,14 +44,14 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
 }
 
 static int amdgpu_ctx_init(struct amdgpu_device *adev,
-			   enum amd_sched_priority priority,
+			   enum drm_sched_priority priority,
 			   struct drm_file *filp,
 			   struct amdgpu_ctx *ctx)
 {
 	unsigned i, j;
 	int r;
 
-	if (priority < 0 || priority >= AMD_SCHED_PRIORITY_MAX)
+	if (priority < 0 || priority >= DRM_SCHED_PRIORITY_MAX)
 		return -EINVAL;
 
 	r = amdgpu_ctx_priority_permit(filp, priority);
@@ -77,19 +77,19 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
 	ctx->reset_counter = atomic_read(&adev->gpu_reset_counter);
 	ctx->vram_lost_counter = atomic_read(&adev->vram_lost_counter);
 	ctx->init_priority = priority;
-	ctx->override_priority = AMD_SCHED_PRIORITY_UNSET;
+	ctx->override_priority = DRM_SCHED_PRIORITY_UNSET;
 
 	/* create context entity for each ring */
 	for (i = 0; i < adev->num_rings; i++) {
 		struct amdgpu_ring *ring = adev->rings[i];
-		struct amd_sched_rq *rq;
+		struct drm_sched_rq *rq;
 
 		rq = &ring->sched.sched_rq[priority];
 
 		if (ring == &adev->gfx.kiq.ring)
 			continue;
 
-		r = amd_sched_entity_init(&ring->sched, &ctx->rings[i].entity,
+		r = drm_sched_entity_init(&ring->sched, &ctx->rings[i].entity,
 					  rq, amdgpu_sched_jobs);
 		if (r)
 			goto failed;
@@ -103,7 +103,7 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
 
 failed:
 	for (j = 0; j < i; j++)
-		amd_sched_entity_fini(&adev->rings[j]->sched,
+		drm_sched_entity_fini(&adev->rings[j]->sched,
 				      &ctx->rings[j].entity);
 	kfree(ctx->fences);
 	ctx->fences = NULL;
@@ -125,7 +125,7 @@ static void amdgpu_ctx_fini(struct amdgpu_ctx *ctx)
 	ctx->fences = NULL;
 
 	for (i = 0; i < adev->num_rings; i++)
-		amd_sched_entity_fini(&adev->rings[i]->sched,
+		drm_sched_entity_fini(&adev->rings[i]->sched,
 				      &ctx->rings[i].entity);
 
 	amdgpu_queue_mgr_fini(adev, &ctx->queue_mgr);
@@ -136,7 +136,7 @@ static void amdgpu_ctx_fini(struct amdgpu_ctx *ctx)
 static int amdgpu_ctx_alloc(struct amdgpu_device *adev,
 			    struct amdgpu_fpriv *fpriv,
 			    struct drm_file *filp,
-			    enum amd_sched_priority priority,
+			    enum drm_sched_priority priority,
 			    uint32_t *id)
 {
 	struct amdgpu_ctx_mgr *mgr = &fpriv->ctx_mgr;
@@ -231,7 +231,7 @@ int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
 {
 	int r;
 	uint32_t id;
-	enum amd_sched_priority priority;
+	enum drm_sched_priority priority;
 
 	union drm_amdgpu_ctx *args = data;
 	struct amdgpu_device *adev = dev->dev_private;
@@ -243,8 +243,8 @@ int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
 
 	/* For backwards compatibility reasons, we need to accept
 	 * ioctls with garbage in the priority field */
-	if (priority == AMD_SCHED_PRIORITY_INVALID)
-		priority = AMD_SCHED_PRIORITY_NORMAL;
+	if (priority == DRM_SCHED_PRIORITY_INVALID)
+		priority = DRM_SCHED_PRIORITY_NORMAL;
 
 	switch (args->in.op) {
 	case AMDGPU_CTX_OP_ALLOC_CTX:
@@ -347,18 +347,18 @@ struct dma_fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx,
 }
 
 void amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx,
-				  enum amd_sched_priority priority)
+				  enum drm_sched_priority priority)
 {
 	int i;
 	struct amdgpu_device *adev = ctx->adev;
-	struct amd_sched_rq *rq;
-	struct amd_sched_entity *entity;
+	struct drm_sched_rq *rq;
+	struct drm_sched_entity *entity;
 	struct amdgpu_ring *ring;
-	enum amd_sched_priority ctx_prio;
+	enum drm_sched_priority ctx_prio;
 
 	ctx->override_priority = priority;
 
-	ctx_prio = (ctx->override_priority == AMD_SCHED_PRIORITY_UNSET) ?
+	ctx_prio = (ctx->override_priority == DRM_SCHED_PRIORITY_UNSET) ?
 			ctx->init_priority : ctx->override_priority;
 
 	for (i = 0; i < adev->num_rings; i++) {
@@ -369,7 +369,7 @@ void amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx,
 		if (ring->funcs->type == AMDGPU_RING_TYPE_KIQ)
 			continue;
 
-		amd_sched_entity_set_rq(entity, rq);
+		drm_sched_entity_set_rq(entity, rq);
 	}
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 2c574374d9b6..d188b69cf560 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2868,11 +2868,11 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
 			goto give_up_reset;
 		}
 
-		if (amd_sched_invalidate_job(&job->base, amdgpu_job_hang_limit))
-			amd_sched_job_kickout(&job->base);
+		if (drm_sched_invalidate_job(&job->base, amdgpu_job_hang_limit))
+			drm_sched_job_kickout(&job->base);
 
 		/* only do job_reset on the hang ring if @job not NULL */
-		amd_sched_hw_job_reset(&ring->sched);
+		drm_sched_hw_job_reset(&ring->sched);
 
 		/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
 		amdgpu_fence_driver_force_completion_ring(ring);
@@ -2939,7 +2939,7 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
 			continue;
 		}
 
-		amd_sched_job_recovery(&ring->sched);
+		drm_sched_job_recovery(&ring->sched);
 		kthread_unpark(ring->sched.thread);
 	}
 
@@ -2993,7 +2993,7 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
 		if (!ring || !ring->sched.thread)
 			continue;
 		kthread_park(ring->sched.thread);
-		amd_sched_hw_job_reset(&ring->sched);
+		drm_sched_hw_job_reset(&ring->sched);
 	}
 	/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
 	amdgpu_fence_driver_force_completion(adev);
@@ -3089,7 +3089,7 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
 			if (!ring || !ring->sched.thread)
 				continue;
 
-			amd_sched_job_recovery(&ring->sched);
+			drm_sched_job_recovery(&ring->sched);
 			kthread_unpark(ring->sched.thread);
 		}
 	} else {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index ec96bb1f9eaf..b23c83c59725 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -906,7 +906,7 @@ static int __init amdgpu_init(void)
 	if (r)
 		goto error_fence;
 
-	r = amd_sched_fence_slab_init();
+	r = drm_sched_fence_slab_init();
 	if (r)
 		goto error_sched;
 
@@ -938,7 +938,7 @@ static void __exit amdgpu_exit(void)
 	pci_unregister_driver(pdriver);
 	amdgpu_unregister_atpx_handler();
 	amdgpu_sync_fini();
-	amd_sched_fence_slab_fini();
+	drm_sched_fence_slab_fini();
 	amdgpu_fence_slab_fini();
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 2fa95aef74d5..12ed22119ff1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -445,7 +445,7 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
 			 */
 			timeout = MAX_SCHEDULE_TIMEOUT;
 		}
-		r = amd_sched_init(&ring->sched, &amdgpu_sched_ops,
+		r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
 				   num_hw_submission,
 				   timeout, ring->name);
 		if (r) {
@@ -503,7 +503,7 @@ void amdgpu_fence_driver_fini(struct amdgpu_device *adev)
 		}
 		amdgpu_irq_put(adev, ring->fence_drv.irq_src,
 			       ring->fence_drv.irq_type);
-		amd_sched_fini(&ring->sched);
+		drm_sched_fini(&ring->sched);
 		del_timer_sync(&ring->fence_drv.fallback_timer);
 		for (j = 0; j <= ring->fence_drv.num_fences_mask; ++j)
 			dma_fence_put(ring->fence_drv.fences[j]);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 0cfc68db575b..4f84bbc2ffe2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -28,7 +28,7 @@
 #include "amdgpu.h"
 #include "amdgpu_trace.h"
 
-static void amdgpu_job_timedout(struct amd_sched_job *s_job)
+static void amdgpu_job_timedout(struct drm_sched_job *s_job)
 {
 	struct amdgpu_job *job = container_of(s_job, struct amdgpu_job, base);
 
@@ -100,11 +100,11 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
 		amdgpu_ib_free(job->adev, &job->ibs[i], f);
 }
 
-static void amdgpu_job_free_cb(struct amd_sched_job *s_job)
+static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
 {
 	struct amdgpu_job *job = container_of(s_job, struct amdgpu_job, base);
 
-	amdgpu_ring_priority_put(job->ring, amd_sched_get_job_priority(s_job));
+	amdgpu_ring_priority_put(job->ring, drm_sched_get_job_priority(s_job));
 	dma_fence_put(job->fence);
 	amdgpu_sync_free(&job->sync);
 	amdgpu_sync_free(&job->dep_sync);
@@ -124,7 +124,7 @@ void amdgpu_job_free(struct amdgpu_job *job)
 }
 
 int amdgpu_job_submit(struct amdgpu_job *job, struct amdgpu_ring *ring,
-		      struct amd_sched_entity *entity, void *owner,
+		      struct drm_sched_entity *entity, void *owner,
 		      struct dma_fence **f)
 {
 	int r;
@@ -133,7 +133,7 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct amdgpu_ring *ring,
 	if (!f)
 		return -EINVAL;
 
-	r = amd_sched_job_init(&job->base, &ring->sched, entity, owner);
+	r = drm_sched_job_init(&job->base, &ring->sched, entity, owner);
 	if (r)
 		return r;
 
@@ -142,13 +142,13 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct amdgpu_ring *ring,
 	*f = dma_fence_get(&job->base.s_fence->finished);
 	amdgpu_job_free_resources(job);
 	amdgpu_ring_priority_get(job->ring,
-				 amd_sched_get_job_priority(&job->base));
-	amd_sched_entity_push_job(&job->base);
+				 drm_sched_get_job_priority(&job->base));
+	drm_sched_entity_push_job(&job->base);
 
 	return 0;
 }
 
-static struct dma_fence *amdgpu_job_dependency(struct amd_sched_job *sched_job)
+static struct dma_fence *amdgpu_job_dependency(struct drm_sched_job *sched_job)
 {
 	struct amdgpu_job *job = to_amdgpu_job(sched_job);
 	struct amdgpu_vm *vm = job->vm;
@@ -156,7 +156,7 @@ static struct dma_fence *amdgpu_job_dependency(struct amd_sched_job *sched_job)
 	struct dma_fence *fence = amdgpu_sync_get_fence(&job->dep_sync);
 	int r;
 
-	if (amd_sched_dependency_optimized(fence, sched_job->s_entity)) {
+	if (drm_sched_dependency_optimized(fence, sched_job->s_entity)) {
 		r = amdgpu_sync_fence(job->adev, &job->sched_sync, fence);
 		if (r)
 			DRM_ERROR("Error adding fence to sync (%d)\n", r);
@@ -178,7 +178,7 @@ static struct dma_fence *amdgpu_job_dependency(struct amd_sched_job *sched_job)
 	return fence;
 }
 
-static struct dma_fence *amdgpu_job_run(struct amd_sched_job *sched_job)
+static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
 {
 	struct dma_fence *fence = NULL;
 	struct amdgpu_device *adev;
@@ -213,7 +213,7 @@ static struct dma_fence *amdgpu_job_run(struct amd_sched_job *sched_job)
 	return fence;
 }
 
-const struct amd_sched_backend_ops amdgpu_sched_ops = {
+const struct drm_sched_backend_ops amdgpu_sched_ops = {
 	.dependency = amdgpu_job_dependency,
 	.run_job = amdgpu_job_run,
 	.timedout_job = amdgpu_job_timedout,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
index a98fbbb4739f..41c75f9632dc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
@@ -164,7 +164,7 @@ void amdgpu_ring_undo(struct amdgpu_ring *ring)
  * Release a request for executing at @priority
  */
 void amdgpu_ring_priority_put(struct amdgpu_ring *ring,
-			      enum amd_sched_priority priority)
+			      enum drm_sched_priority priority)
 {
 	int i;
 
@@ -175,7 +175,7 @@ void amdgpu_ring_priority_put(struct amdgpu_ring *ring,
 		return;
 
 	/* no need to restore if the job is already at the lowest priority */
-	if (priority == AMD_SCHED_PRIORITY_NORMAL)
+	if (priority == DRM_SCHED_PRIORITY_NORMAL)
 		return;
 
 	mutex_lock(&ring->priority_mutex);
@@ -184,8 +184,8 @@ void amdgpu_ring_priority_put(struct amdgpu_ring *ring,
 		goto out_unlock;
 
 	/* decay priority to the next level with a job available */
-	for (i = priority; i >= AMD_SCHED_PRIORITY_MIN; i--) {
-		if (i == AMD_SCHED_PRIORITY_NORMAL
+	for (i = priority; i >= DRM_SCHED_PRIORITY_MIN; i--) {
+		if (i == DRM_SCHED_PRIORITY_NORMAL
 				|| atomic_read(&ring->num_jobs[i])) {
 			ring->priority = i;
 			ring->funcs->set_priority(ring, i);
@@ -206,7 +206,7 @@ void amdgpu_ring_priority_put(struct amdgpu_ring *ring,
  * Request a ring's priority to be raised to @priority (refcounted).
  */
 void amdgpu_ring_priority_get(struct amdgpu_ring *ring,
-			      enum amd_sched_priority priority)
+			      enum drm_sched_priority priority)
 {
 	if (!ring->funcs->set_priority)
 		return;
@@ -317,12 +317,12 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
 	}
 
 	ring->max_dw = max_dw;
-	ring->priority = AMD_SCHED_PRIORITY_NORMAL;
+	ring->priority = DRM_SCHED_PRIORITY_NORMAL;
 	mutex_init(&ring->priority_mutex);
 	INIT_LIST_HEAD(&ring->lru_list);
 	amdgpu_ring_lru_touch(adev, ring);
 
-	for (i = 0; i < AMD_SCHED_PRIORITY_MAX; ++i)
+	for (i = 0; i < DRM_SCHED_PRIORITY_MAX; ++i)
 		atomic_set(&ring->num_jobs[i], 0);
 
 	if (amdgpu_debugfs_ring_init(adev, ring)) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index b18c2b96691f..6d68930c9527 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -25,7 +25,7 @@
 #define __AMDGPU_RING_H__
 
 #include <drm/amdgpu_drm.h>
-#include "gpu_scheduler.h"
+#include <drm/gpu_scheduler.h>
 
 /* max number of rings */
 #define AMDGPU_MAX_RINGS		18
@@ -155,14 +155,14 @@ struct amdgpu_ring_funcs {
 	void (*emit_tmz)(struct amdgpu_ring *ring, bool start);
 	/* priority functions */
 	void (*set_priority) (struct amdgpu_ring *ring,
-			      enum amd_sched_priority priority);
+			      enum drm_sched_priority priority);
 };
 
 struct amdgpu_ring {
 	struct amdgpu_device		*adev;
 	const struct amdgpu_ring_funcs	*funcs;
 	struct amdgpu_fence_driver	fence_drv;
-	struct amd_gpu_scheduler	sched;
+	struct drm_gpu_scheduler	sched;
 	struct list_head		lru_list;
 
 	struct amdgpu_bo	*ring_obj;
@@ -197,7 +197,7 @@ struct amdgpu_ring {
 	unsigned		vm_inv_eng;
 	bool			has_compute_vm_bug;
 
-	atomic_t		num_jobs[AMD_SCHED_PRIORITY_MAX];
+	atomic_t		num_jobs[DRM_SCHED_PRIORITY_MAX];
 	struct mutex		priority_mutex;
 	/* protected by priority_mutex */
 	int			priority;
@@ -213,9 +213,9 @@ void amdgpu_ring_generic_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib);
 void amdgpu_ring_commit(struct amdgpu_ring *ring);
 void amdgpu_ring_undo(struct amdgpu_ring *ring);
 void amdgpu_ring_priority_get(struct amdgpu_ring *ring,
-			      enum amd_sched_priority priority);
+			      enum drm_sched_priority priority);
 void amdgpu_ring_priority_put(struct amdgpu_ring *ring,
-			      enum amd_sched_priority priority);
+			      enum drm_sched_priority priority);
 int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
 		     unsigned ring_size, struct amdgpu_irq_src *irq_src,
 		     unsigned irq_type);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
index 290cc3f9c433..86a0715d9431 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
@@ -29,29 +29,29 @@
 
 #include "amdgpu_vm.h"
 
-enum amd_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
+enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
 {
 	switch (amdgpu_priority) {
 	case AMDGPU_CTX_PRIORITY_VERY_HIGH:
-		return AMD_SCHED_PRIORITY_HIGH_HW;
+		return DRM_SCHED_PRIORITY_HIGH_HW;
 	case AMDGPU_CTX_PRIORITY_HIGH:
-		return AMD_SCHED_PRIORITY_HIGH_SW;
+		return DRM_SCHED_PRIORITY_HIGH_SW;
 	case AMDGPU_CTX_PRIORITY_NORMAL:
-		return AMD_SCHED_PRIORITY_NORMAL;
+		return DRM_SCHED_PRIORITY_NORMAL;
 	case AMDGPU_CTX_PRIORITY_LOW:
 	case AMDGPU_CTX_PRIORITY_VERY_LOW:
-		return AMD_SCHED_PRIORITY_LOW;
+		return DRM_SCHED_PRIORITY_LOW;
 	case AMDGPU_CTX_PRIORITY_UNSET:
-		return AMD_SCHED_PRIORITY_UNSET;
+		return DRM_SCHED_PRIORITY_UNSET;
 	default:
 		WARN(1, "Invalid context priority %d\n", amdgpu_priority);
-		return AMD_SCHED_PRIORITY_INVALID;
+		return DRM_SCHED_PRIORITY_INVALID;
 	}
 }
 
 static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
 						  int fd,
-						  enum amd_sched_priority priority)
+						  enum drm_sched_priority priority)
 {
 	struct file *filp = fcheck(fd);
 	struct drm_file *file;
@@ -86,11 +86,11 @@ int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
 {
 	union drm_amdgpu_sched *args = data;
 	struct amdgpu_device *adev = dev->dev_private;
-	enum amd_sched_priority priority;
+	enum drm_sched_priority priority;
 	int r;
 
 	priority = amdgpu_to_sched_priority(args->in.priority);
-	if (args->in.flags || priority == AMD_SCHED_PRIORITY_INVALID)
+	if (args->in.flags || priority == DRM_SCHED_PRIORITY_INVALID)
 		return -EINVAL;
 
 	switch (args->in.op) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
index b28c067d3822..2a1a0c734bdd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h
@@ -27,7 +27,7 @@
 
 #include <drm/drmP.h>
 
-enum amd_sched_priority amdgpu_to_sched_priority(int amdgpu_priority);
+enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority);
 int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
 		       struct drm_file *filp);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
index a4bf21f8f1c1..ebe25502fe31 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
@@ -63,7 +63,7 @@ void amdgpu_sync_create(struct amdgpu_sync *sync)
 static bool amdgpu_sync_same_dev(struct amdgpu_device *adev,
 				 struct dma_fence *f)
 {
-	struct amd_sched_fence *s_fence = to_amd_sched_fence(f);
+	struct drm_sched_fence *s_fence = to_drm_sched_fence(f);
 
 	if (s_fence) {
 		struct amdgpu_ring *ring;
@@ -84,7 +84,7 @@ static bool amdgpu_sync_same_dev(struct amdgpu_device *adev,
  */
 static void *amdgpu_sync_get_owner(struct dma_fence *f)
 {
-	struct amd_sched_fence *s_fence = to_amd_sched_fence(f);
+	struct drm_sched_fence *s_fence = to_drm_sched_fence(f);
 
 	if (s_fence)
 		return s_fence->owner;
@@ -245,7 +245,7 @@ struct dma_fence *amdgpu_sync_peek_fence(struct amdgpu_sync *sync,
 
 	hash_for_each_safe(sync->fences, i, tmp, e, node) {
 		struct dma_fence *f = e->fence;
-		struct amd_sched_fence *s_fence = to_amd_sched_fence(f);
+		struct drm_sched_fence *s_fence = to_drm_sched_fence(f);
 
 		if (dma_fence_is_signaled(f)) {
 			hash_del(&e->node);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index ad5bf86ee8a3..8743b718ac7f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -76,7 +76,7 @@ static int amdgpu_ttm_global_init(struct amdgpu_device *adev)
 {
 	struct drm_global_reference *global_ref;
 	struct amdgpu_ring *ring;
-	struct amd_sched_rq *rq;
+	struct drm_sched_rq *rq;
 	int r;
 
 	adev->mman.mem_global_referenced = false;
@@ -108,8 +108,8 @@ static int amdgpu_ttm_global_init(struct amdgpu_device *adev)
 	mutex_init(&adev->mman.gtt_window_lock);
 
 	ring = adev->mman.buffer_funcs_ring;
-	rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_KERNEL];
-	r = amd_sched_entity_init(&ring->sched, &adev->mman.entity,
+	rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+	r = drm_sched_entity_init(&ring->sched, &adev->mman.entity,
 				  rq, amdgpu_sched_jobs);
 	if (r) {
 		DRM_ERROR("Failed setting up TTM BO move run queue.\n");
@@ -131,7 +131,7 @@ static int amdgpu_ttm_global_init(struct amdgpu_device *adev)
 static void amdgpu_ttm_global_fini(struct amdgpu_device *adev)
 {
 	if (adev->mman.mem_global_referenced) {
-		amd_sched_entity_fini(adev->mman.entity.sched,
+		drm_sched_entity_fini(adev->mman.entity.sched,
 				      &adev->mman.entity);
 		mutex_destroy(&adev->mman.gtt_window_lock);
 		drm_global_item_unref(&adev->mman.bo_global_ref.ref);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index abd4084982a3..300df9b4e62a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -25,7 +25,7 @@
 #define __AMDGPU_TTM_H__
 
 #include "amdgpu.h"
-#include "gpu_scheduler.h"
+#include <drm/gpu_scheduler.h>
 
 #define AMDGPU_PL_GDS		(TTM_PL_PRIV + 0)
 #define AMDGPU_PL_GWS		(TTM_PL_PRIV + 1)
@@ -55,7 +55,7 @@ struct amdgpu_mman {
 
 	struct mutex				gtt_window_lock;
 	/* Scheduler entity for buffer moves */
-	struct amd_sched_entity			entity;
+	struct drm_sched_entity			entity;
 };
 
 struct amdgpu_copy_mem {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index e8bd50cf9785..a08003d9eadd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -116,7 +116,7 @@ static void amdgpu_uvd_idle_work_handler(struct work_struct *work);
 int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
 {
 	struct amdgpu_ring *ring;
-	struct amd_sched_rq *rq;
+	struct drm_sched_rq *rq;
 	unsigned long bo_size;
 	const char *fw_name;
 	const struct common_firmware_header *hdr;
@@ -230,8 +230,8 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
 	}
 
 	ring = &adev->uvd.ring;
-	rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
-	r = amd_sched_entity_init(&ring->sched, &adev->uvd.entity,
+	rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
+	r = drm_sched_entity_init(&ring->sched, &adev->uvd.entity,
 				  rq, amdgpu_sched_jobs);
 	if (r != 0) {
 		DRM_ERROR("Failed setting up UVD run queue.\n");
@@ -272,7 +272,7 @@ int amdgpu_uvd_sw_fini(struct amdgpu_device *adev)
 	int i;
 	kfree(adev->uvd.saved_bo);
 
-	amd_sched_entity_fini(&adev->uvd.ring.sched, &adev->uvd.entity);
+	drm_sched_entity_fini(&adev->uvd.ring.sched, &adev->uvd.entity);
 
 	amdgpu_bo_free_kernel(&adev->uvd.vcpu_bo,
 			      &adev->uvd.gpu_addr,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
index 3553b92bf69a..4b5aded3be0e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
@@ -47,8 +47,8 @@ struct amdgpu_uvd {
 	struct amdgpu_irq_src	irq;
 	bool			address_64_bit;
 	bool			use_ctx_buf;
-	struct amd_sched_entity entity;
-	struct amd_sched_entity entity_enc;
+	struct drm_sched_entity entity;
+	struct drm_sched_entity entity_enc;
 	uint32_t                srbm_soft_reset;
 	unsigned		num_enc_rings;
 };
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index 2918de2f39ec..b932e502245d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -85,7 +85,7 @@ static void amdgpu_vce_idle_work_handler(struct work_struct *work);
 int amdgpu_vce_sw_init(struct amdgpu_device *adev, unsigned long size)
 {
 	struct amdgpu_ring *ring;
-	struct amd_sched_rq *rq;
+	struct drm_sched_rq *rq;
 	const char *fw_name;
 	const struct common_firmware_header *hdr;
 	unsigned ucode_version, version_major, version_minor, binary_id;
@@ -174,8 +174,8 @@ int amdgpu_vce_sw_init(struct amdgpu_device *adev, unsigned long size)
 	}
 
 	ring = &adev->vce.ring[0];
-	rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
-	r = amd_sched_entity_init(&ring->sched, &adev->vce.entity,
+	rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
+	r = drm_sched_entity_init(&ring->sched, &adev->vce.entity,
 				  rq, amdgpu_sched_jobs);
 	if (r != 0) {
 		DRM_ERROR("Failed setting up VCE run queue.\n");
@@ -207,7 +207,7 @@ int amdgpu_vce_sw_fini(struct amdgpu_device *adev)
 	if (adev->vce.vcpu_bo == NULL)
 		return 0;
 
-	amd_sched_entity_fini(&adev->vce.ring[0].sched, &adev->vce.entity);
+	drm_sched_entity_fini(&adev->vce.ring[0].sched, &adev->vce.entity);
 
 	amdgpu_bo_free_kernel(&adev->vce.vcpu_bo, &adev->vce.gpu_addr,
 		(void **)&adev->vce.cpu_addr);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
index 5ce54cde472d..162cae94e3b1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
@@ -46,7 +46,7 @@ struct amdgpu_vce {
 	struct amdgpu_ring	ring[AMDGPU_MAX_VCE_RINGS];
 	struct amdgpu_irq_src	irq;
 	unsigned		harvest_config;
-	struct amd_sched_entity	entity;
+	struct drm_sched_entity	entity;
 	uint32_t                srbm_soft_reset;
 	unsigned		num_rings;
 };
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index 041e0121590c..3da757968bb6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -51,7 +51,7 @@ static void amdgpu_vcn_idle_work_handler(struct work_struct *work);
 int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
 {
 	struct amdgpu_ring *ring;
-	struct amd_sched_rq *rq;
+	struct drm_sched_rq *rq;
 	unsigned long bo_size;
 	const char *fw_name;
 	const struct common_firmware_header *hdr;
@@ -104,8 +104,8 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
 	}
 
 	ring = &adev->vcn.ring_dec;
-	rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
-	r = amd_sched_entity_init(&ring->sched, &adev->vcn.entity_dec,
+	rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
+	r = drm_sched_entity_init(&ring->sched, &adev->vcn.entity_dec,
 				  rq, amdgpu_sched_jobs);
 	if (r != 0) {
 		DRM_ERROR("Failed setting up VCN dec run queue.\n");
@@ -113,8 +113,8 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
 	}
 
 	ring = &adev->vcn.ring_enc[0];
-	rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
-	r = amd_sched_entity_init(&ring->sched, &adev->vcn.entity_enc,
+	rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
+	r = drm_sched_entity_init(&ring->sched, &adev->vcn.entity_enc,
 				  rq, amdgpu_sched_jobs);
 	if (r != 0) {
 		DRM_ERROR("Failed setting up VCN enc run queue.\n");
@@ -130,9 +130,9 @@ int amdgpu_vcn_sw_fini(struct amdgpu_device *adev)
 
 	kfree(adev->vcn.saved_bo);
 
-	amd_sched_entity_fini(&adev->vcn.ring_dec.sched, &adev->vcn.entity_dec);
+	drm_sched_entity_fini(&adev->vcn.ring_dec.sched, &adev->vcn.entity_dec);
 
-	amd_sched_entity_fini(&adev->vcn.ring_enc[0].sched, &adev->vcn.entity_enc);
+	drm_sched_entity_fini(&adev->vcn.ring_enc[0].sched, &adev->vcn.entity_enc);
 
 	amdgpu_bo_free_kernel(&adev->vcn.vcpu_bo,
 			      &adev->vcn.gpu_addr,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
index d50ba0657854..2fd7db891689 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
@@ -56,8 +56,8 @@ struct amdgpu_vcn {
 	struct amdgpu_ring	ring_dec;
 	struct amdgpu_ring	ring_enc[AMDGPU_VCN_MAX_ENC_RINGS];
 	struct amdgpu_irq_src	irq;
-	struct amd_sched_entity entity_dec;
-	struct amd_sched_entity entity_enc;
+	struct drm_sched_entity entity_dec;
+	struct drm_sched_entity entity_enc;
 	unsigned		num_enc_rings;
 };
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index c8c26f21993c..ec41937027e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2615,7 +2615,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 		AMDGPU_VM_PTE_COUNT(adev) * 8);
 	unsigned ring_instance;
 	struct amdgpu_ring *ring;
-	struct amd_sched_rq *rq;
+	struct drm_sched_rq *rq;
 	int r, i;
 	u64 flags;
 	uint64_t init_pde_value = 0;
@@ -2635,8 +2635,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 	ring_instance = atomic_inc_return(&adev->vm_manager.vm_pte_next_ring);
 	ring_instance %= adev->vm_manager.vm_pte_num_rings;
 	ring = adev->vm_manager.vm_pte_rings[ring_instance];
-	rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_KERNEL];
-	r = amd_sched_entity_init(&ring->sched, &vm->entity,
+	rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+	r = drm_sched_entity_init(&ring->sched, &vm->entity,
 				  rq, amdgpu_sched_jobs);
 	if (r)
 		return r;
@@ -2716,7 +2716,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 	vm->root.base.bo = NULL;
 
 error_free_sched_entity:
-	amd_sched_entity_fini(&ring->sched, &vm->entity);
+	drm_sched_entity_fini(&ring->sched, &vm->entity);
 
 	return r;
 }
@@ -2775,7 +2775,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 		spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
 	}
 
-	amd_sched_entity_fini(vm->entity.sched, &vm->entity);
+	drm_sched_entity_fini(vm->entity.sched, &vm->entity);
 
 	if (!RB_EMPTY_ROOT(&vm->va.rb_root)) {
 		dev_err(adev->dev, "still active bo inside vm\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index bae77353447b..1bad14a7002b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -26,8 +26,8 @@
 
 #include <linux/rbtree.h>
 #include <linux/idr.h>
+#include <drm/gpu_scheduler.h>
 
-#include "gpu_scheduler.h"
 #include "amdgpu_sync.h"
 #include "amdgpu_ring.h"
 
@@ -162,7 +162,7 @@ struct amdgpu_vm {
 	spinlock_t		freed_lock;
 
 	/* Scheduler entity for page table updates */
-	struct amd_sched_entity	entity;
+	struct drm_sched_entity	entity;
 
 	/* client id and PASID (TODO: replace client_id with PASID) */
 	u64                     client_id;
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index 9ecdf621a74a..58f2f8623fa2 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -6490,10 +6490,10 @@ static void gfx_v8_0_hqd_set_priority(struct amdgpu_device *adev,
 	mutex_unlock(&adev->srbm_mutex);
 }
 static void gfx_v8_0_ring_set_priority_compute(struct amdgpu_ring *ring,
-					       enum amd_sched_priority priority)
+					       enum drm_sched_priority priority)
 {
 	struct amdgpu_device *adev = ring->adev;
-	bool acquire = priority == AMD_SCHED_PRIORITY_HIGH_HW;
+	bool acquire = priority == DRM_SCHED_PRIORITY_HIGH_HW;
 
 	if (ring->funcs->type != AMDGPU_RING_TYPE_COMPUTE)
 		return;
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
index 920910ac8663..543f730b2ea5 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
@@ -412,10 +412,10 @@ static int uvd_v6_0_sw_init(void *handle)
 		return r;
 
 	if (uvd_v6_0_enc_support(adev)) {
-		struct amd_sched_rq *rq;
+		struct drm_sched_rq *rq;
 		ring = &adev->uvd.ring_enc[0];
-		rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
-		r = amd_sched_entity_init(&ring->sched, &adev->uvd.entity_enc,
+		rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
+		r = drm_sched_entity_init(&ring->sched, &adev->uvd.entity_enc,
 					  rq, amdgpu_sched_jobs);
 		if (r) {
 			DRM_ERROR("Failed setting up UVD ENC run queue.\n");
@@ -456,7 +456,7 @@ static int uvd_v6_0_sw_fini(void *handle)
 		return r;
 
 	if (uvd_v6_0_enc_support(adev)) {
-		amd_sched_entity_fini(&adev->uvd.ring_enc[0].sched, &adev->uvd.entity_enc);
+		drm_sched_entity_fini(&adev->uvd.ring_enc[0].sched, &adev->uvd.entity_enc);
 
 		for (i = 0; i < adev->uvd.num_enc_rings; ++i)
 			amdgpu_ring_fini(&adev->uvd.ring_enc[i]);
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
index 6634545060fd..27c5eb917a2d 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
@@ -385,7 +385,7 @@ static int uvd_v7_0_early_init(void *handle)
 static int uvd_v7_0_sw_init(void *handle)
 {
 	struct amdgpu_ring *ring;
-	struct amd_sched_rq *rq;
+	struct drm_sched_rq *rq;
 	int i, r;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
@@ -416,8 +416,8 @@ static int uvd_v7_0_sw_init(void *handle)
 	}
 
 	ring = &adev->uvd.ring_enc[0];
-	rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
-	r = amd_sched_entity_init(&ring->sched, &adev->uvd.entity_enc,
+	rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
+	r = drm_sched_entity_init(&ring->sched, &adev->uvd.entity_enc,
 				  rq, amdgpu_sched_jobs);
 	if (r) {
 		DRM_ERROR("Failed setting up UVD ENC run queue.\n");
@@ -472,7 +472,7 @@ static int uvd_v7_0_sw_fini(void *handle)
 	if (r)
 		return r;
 
-	amd_sched_entity_fini(&adev->uvd.ring_enc[0].sched, &adev->uvd.entity_enc);
+	drm_sched_entity_fini(&adev->uvd.ring_enc[0].sched, &adev->uvd.entity_enc);
 
 	for (i = 0; i < adev->uvd.num_enc_rings; ++i)
 		amdgpu_ring_fini(&adev->uvd.ring_enc[i]);
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
deleted file mode 100644
index 52c8e5447624..000000000000
--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
+++ /dev/null
@@ -1,185 +0,0 @@
-/*
- * Copyright 2015 Advanced Micro Devices, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- */
-
-#ifndef _GPU_SCHEDULER_H_
-#define _GPU_SCHEDULER_H_
-
-#include <linux/kfifo.h>
-#include <linux/dma-fence.h>
-
-struct amd_gpu_scheduler;
-struct amd_sched_rq;
-
-/**
- * A scheduler entity is a wrapper around a job queue or a group
- * of other entities. Entities take turns emitting jobs from their
- * job queues to corresponding hardware ring based on scheduling
- * policy.
-*/
-struct amd_sched_entity {
-	struct list_head		list;
-	struct amd_sched_rq		*rq;
-	spinlock_t			rq_lock;
-	struct amd_gpu_scheduler	*sched;
-
-	spinlock_t			queue_lock;
-	struct kfifo                    job_queue;
-
-	atomic_t			fence_seq;
-	uint64_t                        fence_context;
-
-	struct dma_fence		*dependency;
-	struct dma_fence_cb		cb;
-};
-
-/**
- * Run queue is a set of entities scheduling command submissions for
- * one specific ring. It implements the scheduling policy that selects
- * the next entity to emit commands from.
-*/
-struct amd_sched_rq {
-	spinlock_t		lock;
-	struct list_head	entities;
-	struct amd_sched_entity	*current_entity;
-};
-
-struct amd_sched_fence {
-	struct dma_fence                scheduled;
-	struct dma_fence                finished;
-	struct dma_fence_cb             cb;
-	struct dma_fence                *parent;
-	struct amd_gpu_scheduler	*sched;
-	spinlock_t			lock;
-	void                            *owner;
-};
-
-struct amd_sched_job {
-	struct amd_gpu_scheduler        *sched;
-	struct amd_sched_entity         *s_entity;
-	struct amd_sched_fence          *s_fence;
-	struct dma_fence_cb		finish_cb;
-	struct work_struct		finish_work;
-	struct list_head		node;
-	struct delayed_work		work_tdr;
-	uint64_t			id;
-	atomic_t karma;
-};
-
-extern const struct dma_fence_ops amd_sched_fence_ops_scheduled;
-extern const struct dma_fence_ops amd_sched_fence_ops_finished;
-static inline struct amd_sched_fence *to_amd_sched_fence(struct dma_fence *f)
-{
-	if (f->ops == &amd_sched_fence_ops_scheduled)
-		return container_of(f, struct amd_sched_fence, scheduled);
-
-	if (f->ops == &amd_sched_fence_ops_finished)
-		return container_of(f, struct amd_sched_fence, finished);
-
-	return NULL;
-}
-
-static inline bool amd_sched_invalidate_job(struct amd_sched_job *s_job, int threshold)
-{
-	return (s_job && atomic_inc_return(&s_job->karma) > threshold);
-}
-
-/**
- * Define the backend operations called by the scheduler,
- * these functions should be implemented in driver side
-*/
-struct amd_sched_backend_ops {
-	struct dma_fence *(*dependency)(struct amd_sched_job *sched_job);
-	struct dma_fence *(*run_job)(struct amd_sched_job *sched_job);
-	void (*timedout_job)(struct amd_sched_job *sched_job);
-	void (*free_job)(struct amd_sched_job *sched_job);
-};
-
-enum amd_sched_priority {
-	AMD_SCHED_PRIORITY_MIN,
-	AMD_SCHED_PRIORITY_LOW = AMD_SCHED_PRIORITY_MIN,
-	AMD_SCHED_PRIORITY_NORMAL,
-	AMD_SCHED_PRIORITY_HIGH_SW,
-	AMD_SCHED_PRIORITY_HIGH_HW,
-	AMD_SCHED_PRIORITY_KERNEL,
-	AMD_SCHED_PRIORITY_MAX,
-	AMD_SCHED_PRIORITY_INVALID = -1,
-	AMD_SCHED_PRIORITY_UNSET = -2
-};
-
-/**
- * One scheduler is implemented for each hardware ring
-*/
-struct amd_gpu_scheduler {
-	const struct amd_sched_backend_ops	*ops;
-	uint32_t			hw_submission_limit;
-	long				timeout;
-	const char			*name;
-	struct amd_sched_rq		sched_rq[AMD_SCHED_PRIORITY_MAX];
-	wait_queue_head_t		wake_up_worker;
-	wait_queue_head_t		job_scheduled;
-	atomic_t			hw_rq_count;
-	atomic64_t			job_id_count;
-	struct task_struct		*thread;
-	struct list_head	ring_mirror_list;
-	spinlock_t			job_list_lock;
-};
-
-int amd_sched_init(struct amd_gpu_scheduler *sched,
-		   const struct amd_sched_backend_ops *ops,
-		   uint32_t hw_submission, long timeout, const char *name);
-void amd_sched_fini(struct amd_gpu_scheduler *sched);
-
-int amd_sched_entity_init(struct amd_gpu_scheduler *sched,
-			  struct amd_sched_entity *entity,
-			  struct amd_sched_rq *rq,
-			  uint32_t jobs);
-void amd_sched_entity_fini(struct amd_gpu_scheduler *sched,
-			   struct amd_sched_entity *entity);
-void amd_sched_entity_push_job(struct amd_sched_job *sched_job);
-void amd_sched_entity_set_rq(struct amd_sched_entity *entity,
-			     struct amd_sched_rq *rq);
-
-int amd_sched_fence_slab_init(void);
-void amd_sched_fence_slab_fini(void);
-
-struct amd_sched_fence *amd_sched_fence_create(
-	struct amd_sched_entity *s_entity, void *owner);
-void amd_sched_fence_scheduled(struct amd_sched_fence *fence);
-void amd_sched_fence_finished(struct amd_sched_fence *fence);
-int amd_sched_job_init(struct amd_sched_job *job,
-		       struct amd_gpu_scheduler *sched,
-		       struct amd_sched_entity *entity,
-		       void *owner);
-void amd_sched_hw_job_reset(struct amd_gpu_scheduler *sched);
-void amd_sched_job_recovery(struct amd_gpu_scheduler *sched);
-bool amd_sched_dependency_optimized(struct dma_fence* fence,
-				    struct amd_sched_entity *entity);
-void amd_sched_job_kickout(struct amd_sched_job *s_job);
-
-static inline enum amd_sched_priority
-amd_sched_get_job_priority(struct amd_sched_job *job)
-{
-	return (job->s_entity->rq - job->sched->sched_rq);
-}
-
-#endif
diff --git a/drivers/gpu/drm/scheduler/Makefile b/drivers/gpu/drm/scheduler/Makefile
new file mode 100644
index 000000000000..ed877912d06d
--- /dev/null
+++ b/drivers/gpu/drm/scheduler/Makefile
@@ -0,0 +1,4 @@
+ccflags-y := -Iinclude/drm
+gpu-sched-y := gpu_scheduler.o sched_fence.o
+
+obj-$(CONFIG_DRM_SCHED) += gpu-sched.o
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
similarity index 64%
rename from drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
rename to drivers/gpu/drm/scheduler/gpu_scheduler.c
index 92ec663fdada..2642938ebdc1 100644
--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -19,32 +19,32 @@
  * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
  * OTHER DEALINGS IN THE SOFTWARE.
  *
- *
  */
+
 #include <linux/kthread.h>
 #include <linux/wait.h>
 #include <linux/sched.h>
 #include <uapi/linux/sched/types.h>
 #include <drm/drmP.h>
-#include "gpu_scheduler.h"
+#include <drm/gpu_scheduler.h>
 
 #define CREATE_TRACE_POINTS
-#include "gpu_sched_trace.h"
+#include <drm/gpu_scheduler_trace.h>
 
-static bool amd_sched_entity_is_ready(struct amd_sched_entity *entity);
-static void amd_sched_wakeup(struct amd_gpu_scheduler *sched);
-static void amd_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb);
+static bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
+static void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
+static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb);
 
 /* Initialize a given run queue struct */
-static void amd_sched_rq_init(struct amd_sched_rq *rq)
+static void drm_sched_rq_init(struct drm_sched_rq *rq)
 {
 	spin_lock_init(&rq->lock);
 	INIT_LIST_HEAD(&rq->entities);
 	rq->current_entity = NULL;
 }
 
-static void amd_sched_rq_add_entity(struct amd_sched_rq *rq,
-				    struct amd_sched_entity *entity)
+static void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
+				    struct drm_sched_entity *entity)
 {
 	if (!list_empty(&entity->list))
 		return;
@@ -53,8 +53,8 @@ static void amd_sched_rq_add_entity(struct amd_sched_rq *rq,
 	spin_unlock(&rq->lock);
 }
 
-static void amd_sched_rq_remove_entity(struct amd_sched_rq *rq,
-				       struct amd_sched_entity *entity)
+static void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
+				       struct drm_sched_entity *entity)
 {
 	if (list_empty(&entity->list))
 		return;
@@ -72,17 +72,17 @@ static void amd_sched_rq_remove_entity(struct amd_sched_rq *rq,
  *
  * Try to find a ready entity, returns NULL if none found.
  */
-static struct amd_sched_entity *
-amd_sched_rq_select_entity(struct amd_sched_rq *rq)
+static struct drm_sched_entity *
+drm_sched_rq_select_entity(struct drm_sched_rq *rq)
 {
-	struct amd_sched_entity *entity;
+	struct drm_sched_entity *entity;
 
 	spin_lock(&rq->lock);
 
 	entity = rq->current_entity;
 	if (entity) {
 		list_for_each_entry_continue(entity, &rq->entities, list) {
-			if (amd_sched_entity_is_ready(entity)) {
+			if (drm_sched_entity_is_ready(entity)) {
 				rq->current_entity = entity;
 				spin_unlock(&rq->lock);
 				return entity;
@@ -92,7 +92,7 @@ amd_sched_rq_select_entity(struct amd_sched_rq *rq)
 
 	list_for_each_entry(entity, &rq->entities, list) {
 
-		if (amd_sched_entity_is_ready(entity)) {
+		if (drm_sched_entity_is_ready(entity)) {
 			rq->current_entity = entity;
 			spin_unlock(&rq->lock);
 			return entity;
@@ -111,16 +111,16 @@ amd_sched_rq_select_entity(struct amd_sched_rq *rq)
  * Init a context entity used by scheduler when submit to HW ring.
  *
  * @sched	The pointer to the scheduler
- * @entity	The pointer to a valid amd_sched_entity
+ * @entity	The pointer to a valid drm_sched_entity
  * @rq		The run queue this entity belongs
  * @kernel	If this is an entity for the kernel
  * @jobs	The max number of jobs in the job queue
  *
  * return 0 if succeed. negative error code on failure
 */
-int amd_sched_entity_init(struct amd_gpu_scheduler *sched,
-			  struct amd_sched_entity *entity,
-			  struct amd_sched_rq *rq,
+int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
+			  struct drm_sched_entity *entity,
+			  struct drm_sched_rq *rq,
 			  uint32_t jobs)
 {
 	int r;
@@ -128,7 +128,7 @@ int amd_sched_entity_init(struct amd_gpu_scheduler *sched,
 	if (!(sched && entity && rq))
 		return -EINVAL;
 
-	memset(entity, 0, sizeof(struct amd_sched_entity));
+	memset(entity, 0, sizeof(struct drm_sched_entity));
 	INIT_LIST_HEAD(&entity->list);
 	entity->rq = rq;
 	entity->sched = sched;
@@ -144,6 +144,7 @@ int amd_sched_entity_init(struct amd_gpu_scheduler *sched,
 
 	return 0;
 }
+EXPORT_SYMBOL(drm_sched_entity_init);
 
 /**
  * Query if entity is initialized
@@ -153,8 +154,8 @@ int amd_sched_entity_init(struct amd_gpu_scheduler *sched,
  *
  * return true if entity is initialized, false otherwise
 */
-static bool amd_sched_entity_is_initialized(struct amd_gpu_scheduler *sched,
-					    struct amd_sched_entity *entity)
+static bool drm_sched_entity_is_initialized(struct drm_gpu_scheduler *sched,
+					    struct drm_sched_entity *entity)
 {
 	return entity->sched == sched &&
 		entity->rq != NULL;
@@ -167,7 +168,7 @@ static bool amd_sched_entity_is_initialized(struct amd_gpu_scheduler *sched,
  *
  * Return true if entity don't has any unscheduled jobs.
  */
-static bool amd_sched_entity_is_idle(struct amd_sched_entity *entity)
+static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
 {
 	rmb();
 	if (kfifo_is_empty(&entity->job_queue))
@@ -183,7 +184,7 @@ static bool amd_sched_entity_is_idle(struct amd_sched_entity *entity)
  *
  * Return true if entity could provide a job.
  */
-static bool amd_sched_entity_is_ready(struct amd_sched_entity *entity)
+static bool drm_sched_entity_is_ready(struct drm_sched_entity *entity)
 {
 	if (kfifo_is_empty(&entity->job_queue))
 		return false;
@@ -202,12 +203,12 @@ static bool amd_sched_entity_is_ready(struct amd_sched_entity *entity)
  *
  * Cleanup and free the allocated resources.
  */
-void amd_sched_entity_fini(struct amd_gpu_scheduler *sched,
-			   struct amd_sched_entity *entity)
+void drm_sched_entity_fini(struct drm_gpu_scheduler *sched,
+			   struct drm_sched_entity *entity)
 {
 	int r;
 
-	if (!amd_sched_entity_is_initialized(sched, entity))
+	if (!drm_sched_entity_is_initialized(sched, entity))
 		return;
 	/**
 	 * The client will not queue more IBs during this fini, consume existing
@@ -217,10 +218,10 @@ void amd_sched_entity_fini(struct amd_gpu_scheduler *sched,
 		r = -ERESTARTSYS;
 	else
 		r = wait_event_killable(sched->job_scheduled,
-					amd_sched_entity_is_idle(entity));
-	amd_sched_entity_set_rq(entity, NULL);
+					drm_sched_entity_is_idle(entity));
+	drm_sched_entity_set_rq(entity, NULL);
 	if (r) {
-		struct amd_sched_job *job;
+		struct drm_sched_job *job;
 
 		/* Park the kernel for a moment to make sure it isn't processing
 		 * our enity.
@@ -228,10 +229,10 @@ void amd_sched_entity_fini(struct amd_gpu_scheduler *sched,
 		kthread_park(sched->thread);
 		kthread_unpark(sched->thread);
 		while (kfifo_out(&entity->job_queue, &job, sizeof(job))) {
-			struct amd_sched_fence *s_fence = job->s_fence;
-			amd_sched_fence_scheduled(s_fence);
+			struct drm_sched_fence *s_fence = job->s_fence;
+			drm_sched_fence_scheduled(s_fence);
 			dma_fence_set_error(&s_fence->finished, -ESRCH);
-			amd_sched_fence_finished(s_fence);
+			drm_sched_fence_finished(s_fence);
 			dma_fence_put(&s_fence->finished);
 			sched->ops->free_job(job);
 		}
@@ -239,26 +240,27 @@ void amd_sched_entity_fini(struct amd_gpu_scheduler *sched,
 	}
 	kfifo_free(&entity->job_queue);
 }
+EXPORT_SYMBOL(drm_sched_entity_fini);
 
-static void amd_sched_entity_wakeup(struct dma_fence *f, struct dma_fence_cb *cb)
+static void drm_sched_entity_wakeup(struct dma_fence *f, struct dma_fence_cb *cb)
 {
-	struct amd_sched_entity *entity =
-		container_of(cb, struct amd_sched_entity, cb);
+	struct drm_sched_entity *entity =
+		container_of(cb, struct drm_sched_entity, cb);
 	entity->dependency = NULL;
 	dma_fence_put(f);
-	amd_sched_wakeup(entity->sched);
+	drm_sched_wakeup(entity->sched);
 }
 
-static void amd_sched_entity_clear_dep(struct dma_fence *f, struct dma_fence_cb *cb)
+static void drm_sched_entity_clear_dep(struct dma_fence *f, struct dma_fence_cb *cb)
 {
-	struct amd_sched_entity *entity =
-		container_of(cb, struct amd_sched_entity, cb);
+	struct drm_sched_entity *entity =
+		container_of(cb, struct drm_sched_entity, cb);
 	entity->dependency = NULL;
 	dma_fence_put(f);
 }
 
-void amd_sched_entity_set_rq(struct amd_sched_entity *entity,
-			     struct amd_sched_rq *rq)
+void drm_sched_entity_set_rq(struct drm_sched_entity *entity,
+			     struct drm_sched_rq *rq)
 {
 	if (entity->rq == rq)
 		return;
@@ -266,37 +268,39 @@ void amd_sched_entity_set_rq(struct amd_sched_entity *entity,
 	spin_lock(&entity->rq_lock);
 
 	if (entity->rq)
-		amd_sched_rq_remove_entity(entity->rq, entity);
+		drm_sched_rq_remove_entity(entity->rq, entity);
 
 	entity->rq = rq;
 	if (rq)
-		amd_sched_rq_add_entity(rq, entity);
+		drm_sched_rq_add_entity(rq, entity);
 
 	spin_unlock(&entity->rq_lock);
 }
+EXPORT_SYMBOL(drm_sched_entity_set_rq);
 
-bool amd_sched_dependency_optimized(struct dma_fence* fence,
-				    struct amd_sched_entity *entity)
+bool drm_sched_dependency_optimized(struct dma_fence* fence,
+				    struct drm_sched_entity *entity)
 {
-	struct amd_gpu_scheduler *sched = entity->sched;
-	struct amd_sched_fence *s_fence;
+	struct drm_gpu_scheduler *sched = entity->sched;
+	struct drm_sched_fence *s_fence;
 
 	if (!fence || dma_fence_is_signaled(fence))
 		return false;
 	if (fence->context == entity->fence_context)
 		return true;
-	s_fence = to_amd_sched_fence(fence);
+	s_fence = to_drm_sched_fence(fence);
 	if (s_fence && s_fence->sched == sched)
 		return true;
 
 	return false;
 }
+EXPORT_SYMBOL(drm_sched_dependency_optimized);
 
-static bool amd_sched_entity_add_dependency_cb(struct amd_sched_entity *entity)
+static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
 {
-	struct amd_gpu_scheduler *sched = entity->sched;
+	struct drm_gpu_scheduler *sched = entity->sched;
 	struct dma_fence * fence = entity->dependency;
-	struct amd_sched_fence *s_fence;
+	struct drm_sched_fence *s_fence;
 
 	if (fence->context == entity->fence_context) {
 		/* We can ignore fences from ourself */
@@ -304,7 +308,7 @@ static bool amd_sched_entity_add_dependency_cb(struct amd_sched_entity *entity)
 		return false;
 	}
 
-	s_fence = to_amd_sched_fence(fence);
+	s_fence = to_drm_sched_fence(fence);
 	if (s_fence && s_fence->sched == sched) {
 
 		/*
@@ -315,7 +319,7 @@ static bool amd_sched_entity_add_dependency_cb(struct amd_sched_entity *entity)
 		dma_fence_put(entity->dependency);
 		entity->dependency = fence;
 		if (!dma_fence_add_callback(fence, &entity->cb,
-					    amd_sched_entity_clear_dep))
+					    drm_sched_entity_clear_dep))
 			return true;
 
 		/* Ignore it when it is already scheduled */
@@ -324,24 +328,24 @@ static bool amd_sched_entity_add_dependency_cb(struct amd_sched_entity *entity)
 	}
 
 	if (!dma_fence_add_callback(entity->dependency, &entity->cb,
-				    amd_sched_entity_wakeup))
+				    drm_sched_entity_wakeup))
 		return true;
 
 	dma_fence_put(entity->dependency);
 	return false;
 }
 
-static struct amd_sched_job *
-amd_sched_entity_peek_job(struct amd_sched_entity *entity)
+static struct drm_sched_job *
+drm_sched_entity_peek_job(struct drm_sched_entity *entity)
 {
-	struct amd_gpu_scheduler *sched = entity->sched;
-	struct amd_sched_job *sched_job;
+	struct drm_gpu_scheduler *sched = entity->sched;
+	struct drm_sched_job *sched_job;
 
 	if (!kfifo_out_peek(&entity->job_queue, &sched_job, sizeof(sched_job)))
 		return NULL;
 
 	while ((entity->dependency = sched->ops->dependency(sched_job)))
-		if (amd_sched_entity_add_dependency_cb(entity))
+		if (drm_sched_entity_add_dependency_cb(entity))
 			return NULL;
 
 	return sched_job;
@@ -354,10 +358,10 @@ amd_sched_entity_peek_job(struct amd_sched_entity *entity)
  *
  * Returns true if we could submit the job.
  */
-static bool amd_sched_entity_in(struct amd_sched_job *sched_job)
+static bool drm_sched_entity_in(struct drm_sched_job *sched_job)
 {
-	struct amd_gpu_scheduler *sched = sched_job->sched;
-	struct amd_sched_entity *entity = sched_job->s_entity;
+	struct drm_gpu_scheduler *sched = sched_job->sched;
+	struct drm_sched_entity *entity = sched_job->s_entity;
 	bool added, first = false;
 
 	spin_lock(&entity->queue_lock);
@@ -373,26 +377,26 @@ static bool amd_sched_entity_in(struct amd_sched_job *sched_job)
 	if (first) {
 		/* Add the entity to the run queue */
 		spin_lock(&entity->rq_lock);
-		amd_sched_rq_add_entity(entity->rq, entity);
+		drm_sched_rq_add_entity(entity->rq, entity);
 		spin_unlock(&entity->rq_lock);
-		amd_sched_wakeup(sched);
+		drm_sched_wakeup(sched);
 	}
 	return added;
 }
 
 /* job_finish is called after the hw fence has signaled
  */
-static void amd_sched_job_finish(struct work_struct *work)
+static void drm_sched_job_finish(struct work_struct *work)
 {
-	struct amd_sched_job *s_job = container_of(work, struct amd_sched_job,
+	struct drm_sched_job *s_job = container_of(work, struct drm_sched_job,
 						   finish_work);
-	struct amd_gpu_scheduler *sched = s_job->sched;
+	struct drm_gpu_scheduler *sched = s_job->sched;
 
 	/* remove job from ring_mirror_list */
 	spin_lock(&sched->job_list_lock);
 	list_del_init(&s_job->node);
 	if (sched->timeout != MAX_SCHEDULE_TIMEOUT) {
-		struct amd_sched_job *next;
+		struct drm_sched_job *next;
 
 		spin_unlock(&sched->job_list_lock);
 		cancel_delayed_work_sync(&s_job->work_tdr);
@@ -400,7 +404,7 @@ static void amd_sched_job_finish(struct work_struct *work)
 
 		/* queue TDR for next job */
 		next = list_first_entry_or_null(&sched->ring_mirror_list,
-						struct amd_sched_job, node);
+						struct drm_sched_job, node);
 
 		if (next)
 			schedule_delayed_work(&next->work_tdr, sched->timeout);
@@ -410,41 +414,41 @@ static void amd_sched_job_finish(struct work_struct *work)
 	sched->ops->free_job(s_job);
 }
 
-static void amd_sched_job_finish_cb(struct dma_fence *f,
+static void drm_sched_job_finish_cb(struct dma_fence *f,
 				    struct dma_fence_cb *cb)
 {
-	struct amd_sched_job *job = container_of(cb, struct amd_sched_job,
+	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
 						 finish_cb);
 	schedule_work(&job->finish_work);
 }
 
-static void amd_sched_job_begin(struct amd_sched_job *s_job)
+static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
-	struct amd_gpu_scheduler *sched = s_job->sched;
+	struct drm_gpu_scheduler *sched = s_job->sched;
 
 	dma_fence_add_callback(&s_job->s_fence->finished, &s_job->finish_cb,
-			       amd_sched_job_finish_cb);
+			       drm_sched_job_finish_cb);
 
 	spin_lock(&sched->job_list_lock);
 	list_add_tail(&s_job->node, &sched->ring_mirror_list);
 	if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
 	    list_first_entry_or_null(&sched->ring_mirror_list,
-				     struct amd_sched_job, node) == s_job)
+				     struct drm_sched_job, node) == s_job)
 		schedule_delayed_work(&s_job->work_tdr, sched->timeout);
 	spin_unlock(&sched->job_list_lock);
 }
 
-static void amd_sched_job_timedout(struct work_struct *work)
+static void drm_sched_job_timedout(struct work_struct *work)
 {
-	struct amd_sched_job *job = container_of(work, struct amd_sched_job,
+	struct drm_sched_job *job = container_of(work, struct drm_sched_job,
 						 work_tdr.work);
 
 	job->sched->ops->timedout_job(job);
 }
 
-void amd_sched_hw_job_reset(struct amd_gpu_scheduler *sched)
+void drm_sched_hw_job_reset(struct drm_gpu_scheduler *sched)
 {
-	struct amd_sched_job *s_job;
+	struct drm_sched_job *s_job;
 
 	spin_lock(&sched->job_list_lock);
 	list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) {
@@ -458,29 +462,31 @@ void amd_sched_hw_job_reset(struct amd_gpu_scheduler *sched)
 	}
 	spin_unlock(&sched->job_list_lock);
 }
+EXPORT_SYMBOL(drm_sched_hw_job_reset);
 
-void amd_sched_job_kickout(struct amd_sched_job *s_job)
+void drm_sched_job_kickout(struct drm_sched_job *s_job)
 {
-	struct amd_gpu_scheduler *sched = s_job->sched;
+	struct drm_gpu_scheduler *sched = s_job->sched;
 
 	spin_lock(&sched->job_list_lock);
 	list_del_init(&s_job->node);
 	spin_unlock(&sched->job_list_lock);
 }
+EXPORT_SYMBOL(drm_sched_job_kickout);
 
-void amd_sched_job_recovery(struct amd_gpu_scheduler *sched)
+void drm_sched_job_recovery(struct drm_gpu_scheduler *sched)
 {
-	struct amd_sched_job *s_job, *tmp;
+	struct drm_sched_job *s_job, *tmp;
 	int r;
 
 	spin_lock(&sched->job_list_lock);
 	s_job = list_first_entry_or_null(&sched->ring_mirror_list,
-					 struct amd_sched_job, node);
+					 struct drm_sched_job, node);
 	if (s_job && sched->timeout != MAX_SCHEDULE_TIMEOUT)
 		schedule_delayed_work(&s_job->work_tdr, sched->timeout);
 
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
-		struct amd_sched_fence *s_fence = s_job->s_fence;
+		struct drm_sched_fence *s_fence = s_job->s_fence;
 		struct dma_fence *fence;
 
 		spin_unlock(&sched->job_list_lock);
@@ -489,21 +495,22 @@ void amd_sched_job_recovery(struct amd_gpu_scheduler *sched)
 		if (fence) {
 			s_fence->parent = dma_fence_get(fence);
 			r = dma_fence_add_callback(fence, &s_fence->cb,
-						   amd_sched_process_job);
+						   drm_sched_process_job);
 			if (r == -ENOENT)
-				amd_sched_process_job(fence, &s_fence->cb);
+				drm_sched_process_job(fence, &s_fence->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 			dma_fence_put(fence);
 		} else {
 			DRM_ERROR("Failed to run job!\n");
-			amd_sched_process_job(NULL, &s_fence->cb);
+			drm_sched_process_job(NULL, &s_fence->cb);
 		}
 		spin_lock(&sched->job_list_lock);
 	}
 	spin_unlock(&sched->job_list_lock);
 }
+EXPORT_SYMBOL(drm_sched_job_recovery);
 
 /**
  * Submit a job to the job queue
@@ -512,39 +519,41 @@ void amd_sched_job_recovery(struct amd_gpu_scheduler *sched)
  *
  * Note that this function blocks until the job can be queued.
  */
-void amd_sched_entity_push_job(struct amd_sched_job *sched_job)
+void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 {
-	struct amd_sched_entity *entity = sched_job->s_entity;
+	struct drm_sched_entity *entity = sched_job->s_entity;
 
-	trace_amd_sched_job(sched_job);
+	trace_drm_sched_job(sched_job);
 	wait_event(entity->sched->job_scheduled,
-		   amd_sched_entity_in(sched_job));
+		   drm_sched_entity_in(sched_job));
 }
+EXPORT_SYMBOL(drm_sched_entity_push_job);
 
 /* init a sched_job with basic fields */
-int amd_sched_job_init(struct amd_sched_job *job,
-		       struct amd_gpu_scheduler *sched,
-		       struct amd_sched_entity *entity,
+int drm_sched_job_init(struct drm_sched_job *job,
+		       struct drm_gpu_scheduler *sched,
+		       struct drm_sched_entity *entity,
 		       void *owner)
 {
 	job->sched = sched;
 	job->s_entity = entity;
-	job->s_fence = amd_sched_fence_create(entity, owner);
+	job->s_fence = drm_sched_fence_create(entity, owner);
 	if (!job->s_fence)
 		return -ENOMEM;
 	job->id = atomic64_inc_return(&sched->job_id_count);
 
-	INIT_WORK(&job->finish_work, amd_sched_job_finish);
+	INIT_WORK(&job->finish_work, drm_sched_job_finish);
 	INIT_LIST_HEAD(&job->node);
-	INIT_DELAYED_WORK(&job->work_tdr, amd_sched_job_timedout);
+	INIT_DELAYED_WORK(&job->work_tdr, drm_sched_job_timedout);
 
 	return 0;
 }
+EXPORT_SYMBOL(drm_sched_job_init);
 
 /**
  * Return true if we can push more jobs to the hw.
  */
-static bool amd_sched_ready(struct amd_gpu_scheduler *sched)
+static bool drm_sched_ready(struct drm_gpu_scheduler *sched)
 {
 	return atomic_read(&sched->hw_rq_count) <
 		sched->hw_submission_limit;
@@ -553,27 +562,27 @@ static bool amd_sched_ready(struct amd_gpu_scheduler *sched)
 /**
  * Wake up the scheduler when it is ready
  */
-static void amd_sched_wakeup(struct amd_gpu_scheduler *sched)
+static void drm_sched_wakeup(struct drm_gpu_scheduler *sched)
 {
-	if (amd_sched_ready(sched))
+	if (drm_sched_ready(sched))
 		wake_up_interruptible(&sched->wake_up_worker);
 }
 
 /**
  * Select next entity to process
 */
-static struct amd_sched_entity *
-amd_sched_select_entity(struct amd_gpu_scheduler *sched)
+static struct drm_sched_entity *
+drm_sched_select_entity(struct drm_gpu_scheduler *sched)
 {
-	struct amd_sched_entity *entity;
+	struct drm_sched_entity *entity;
 	int i;
 
-	if (!amd_sched_ready(sched))
+	if (!drm_sched_ready(sched))
 		return NULL;
 
 	/* Kernel run queue has higher priority than normal run queue */
-	for (i = AMD_SCHED_PRIORITY_MAX - 1; i >= AMD_SCHED_PRIORITY_MIN; i--) {
-		entity = amd_sched_rq_select_entity(&sched->sched_rq[i]);
+	for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
+		entity = drm_sched_rq_select_entity(&sched->sched_rq[i]);
 		if (entity)
 			break;
 	}
@@ -581,22 +590,22 @@ amd_sched_select_entity(struct amd_gpu_scheduler *sched)
 	return entity;
 }
 
-static void amd_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
+static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 {
-	struct amd_sched_fence *s_fence =
-		container_of(cb, struct amd_sched_fence, cb);
-	struct amd_gpu_scheduler *sched = s_fence->sched;
+	struct drm_sched_fence *s_fence =
+		container_of(cb, struct drm_sched_fence, cb);
+	struct drm_gpu_scheduler *sched = s_fence->sched;
 
 	dma_fence_get(&s_fence->finished);
 	atomic_dec(&sched->hw_rq_count);
-	amd_sched_fence_finished(s_fence);
+	drm_sched_fence_finished(s_fence);
 
-	trace_amd_sched_process_job(s_fence);
+	trace_drm_sched_process_job(s_fence);
 	dma_fence_put(&s_fence->finished);
 	wake_up_interruptible(&sched->wake_up_worker);
 }
 
-static bool amd_sched_blocked(struct amd_gpu_scheduler *sched)
+static bool drm_sched_blocked(struct drm_gpu_scheduler *sched)
 {
 	if (kthread_should_park()) {
 		kthread_parkme();
@@ -606,53 +615,53 @@ static bool amd_sched_blocked(struct amd_gpu_scheduler *sched)
 	return false;
 }
 
-static int amd_sched_main(void *param)
+static int drm_sched_main(void *param)
 {
 	struct sched_param sparam = {.sched_priority = 1};
-	struct amd_gpu_scheduler *sched = (struct amd_gpu_scheduler *)param;
+	struct drm_gpu_scheduler *sched = (struct drm_gpu_scheduler *)param;
 	int r, count;
 
 	sched_setscheduler(current, SCHED_FIFO, &sparam);
 
 	while (!kthread_should_stop()) {
-		struct amd_sched_entity *entity = NULL;
-		struct amd_sched_fence *s_fence;
-		struct amd_sched_job *sched_job;
+		struct drm_sched_entity *entity = NULL;
+		struct drm_sched_fence *s_fence;
+		struct drm_sched_job *sched_job;
 		struct dma_fence *fence;
 
 		wait_event_interruptible(sched->wake_up_worker,
-					 (!amd_sched_blocked(sched) &&
-					  (entity = amd_sched_select_entity(sched))) ||
+					 (!drm_sched_blocked(sched) &&
+					  (entity = drm_sched_select_entity(sched))) ||
 					 kthread_should_stop());
 
 		if (!entity)
 			continue;
 
-		sched_job = amd_sched_entity_peek_job(entity);
+		sched_job = drm_sched_entity_peek_job(entity);
 		if (!sched_job)
 			continue;
 
 		s_fence = sched_job->s_fence;
 
 		atomic_inc(&sched->hw_rq_count);
-		amd_sched_job_begin(sched_job);
+		drm_sched_job_begin(sched_job);
 
 		fence = sched->ops->run_job(sched_job);
-		amd_sched_fence_scheduled(s_fence);
+		drm_sched_fence_scheduled(s_fence);
 
 		if (fence) {
 			s_fence->parent = dma_fence_get(fence);
 			r = dma_fence_add_callback(fence, &s_fence->cb,
-						   amd_sched_process_job);
+						   drm_sched_process_job);
 			if (r == -ENOENT)
-				amd_sched_process_job(fence, &s_fence->cb);
+				drm_sched_process_job(fence, &s_fence->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 			dma_fence_put(fence);
 		} else {
 			DRM_ERROR("Failed to run job!\n");
-			amd_sched_process_job(NULL, &s_fence->cb);
+			drm_sched_process_job(NULL, &s_fence->cb);
 		}
 
 		count = kfifo_out(&entity->job_queue, &sched_job,
@@ -673,8 +682,8 @@ static int amd_sched_main(void *param)
  *
  * Return 0 on success, otherwise error code.
 */
-int amd_sched_init(struct amd_gpu_scheduler *sched,
-		   const struct amd_sched_backend_ops *ops,
+int drm_sched_init(struct drm_gpu_scheduler *sched,
+		   const struct drm_sched_backend_ops *ops,
 		   unsigned hw_submission, long timeout, const char *name)
 {
 	int i;
@@ -682,8 +691,8 @@ int amd_sched_init(struct amd_gpu_scheduler *sched,
 	sched->hw_submission_limit = hw_submission;
 	sched->name = name;
 	sched->timeout = timeout;
-	for (i = AMD_SCHED_PRIORITY_MIN; i < AMD_SCHED_PRIORITY_MAX; i++)
-		amd_sched_rq_init(&sched->sched_rq[i]);
+	for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_MAX; i++)
+		drm_sched_rq_init(&sched->sched_rq[i]);
 
 	init_waitqueue_head(&sched->wake_up_worker);
 	init_waitqueue_head(&sched->job_scheduled);
@@ -693,7 +702,7 @@ int amd_sched_init(struct amd_gpu_scheduler *sched,
 	atomic64_set(&sched->job_id_count, 0);
 
 	/* Each scheduler will run on a separate kernel thread */
-	sched->thread = kthread_run(amd_sched_main, sched, sched->name);
+	sched->thread = kthread_run(drm_sched_main, sched, sched->name);
 	if (IS_ERR(sched->thread)) {
 		DRM_ERROR("Failed to create scheduler for %s.\n", name);
 		return PTR_ERR(sched->thread);
@@ -701,14 +710,16 @@ int amd_sched_init(struct amd_gpu_scheduler *sched,
 
 	return 0;
 }
+EXPORT_SYMBOL(drm_sched_init);
 
 /**
  * Destroy a gpu scheduler
  *
  * @sched	The pointer to the scheduler
  */
-void amd_sched_fini(struct amd_gpu_scheduler *sched)
+void drm_sched_fini(struct drm_gpu_scheduler *sched)
 {
 	if (sched->thread)
 		kthread_stop(sched->thread);
 }
+EXPORT_SYMBOL(drm_sched_fini);
diff --git a/drivers/gpu/drm/amd/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
similarity index 59%
rename from drivers/gpu/drm/amd/scheduler/sched_fence.c
rename to drivers/gpu/drm/scheduler/sched_fence.c
index 33f54d0a5c4f..f6f2955890c4 100644
--- a/drivers/gpu/drm/amd/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -19,57 +19,36 @@
  * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
  * OTHER DEALINGS IN THE SOFTWARE.
  *
- *
  */
+
 #include <linux/kthread.h>
 #include <linux/wait.h>
 #include <linux/sched.h>
 #include <drm/drmP.h>
-#include "gpu_scheduler.h"
+#include <drm/gpu_scheduler.h>
 
 static struct kmem_cache *sched_fence_slab;
 
-int amd_sched_fence_slab_init(void)
+int drm_sched_fence_slab_init(void)
 {
 	sched_fence_slab = kmem_cache_create(
-		"amd_sched_fence", sizeof(struct amd_sched_fence), 0,
+		"drm_sched_fence", sizeof(struct drm_sched_fence), 0,
 		SLAB_HWCACHE_ALIGN, NULL);
 	if (!sched_fence_slab)
 		return -ENOMEM;
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(drm_sched_fence_slab_init);
 
-void amd_sched_fence_slab_fini(void)
+void drm_sched_fence_slab_fini(void)
 {
 	rcu_barrier();
 	kmem_cache_destroy(sched_fence_slab);
 }
+EXPORT_SYMBOL_GPL(drm_sched_fence_slab_fini);
 
-struct amd_sched_fence *amd_sched_fence_create(struct amd_sched_entity *entity,
-					       void *owner)
-{
-	struct amd_sched_fence *fence = NULL;
-	unsigned seq;
-
-	fence = kmem_cache_zalloc(sched_fence_slab, GFP_KERNEL);
-	if (fence == NULL)
-		return NULL;
-
-	fence->owner = owner;
-	fence->sched = entity->sched;
-	spin_lock_init(&fence->lock);
-
-	seq = atomic_inc_return(&entity->fence_seq);
-	dma_fence_init(&fence->scheduled, &amd_sched_fence_ops_scheduled,
-		       &fence->lock, entity->fence_context, seq);
-	dma_fence_init(&fence->finished, &amd_sched_fence_ops_finished,
-		       &fence->lock, entity->fence_context + 1, seq);
-
-	return fence;
-}
-
-void amd_sched_fence_scheduled(struct amd_sched_fence *fence)
+void drm_sched_fence_scheduled(struct drm_sched_fence *fence)
 {
 	int ret = dma_fence_signal(&fence->scheduled);
 
@@ -81,7 +60,7 @@ void amd_sched_fence_scheduled(struct amd_sched_fence *fence)
 				"was already signaled\n");
 }
 
-void amd_sched_fence_finished(struct amd_sched_fence *fence)
+void drm_sched_fence_finished(struct drm_sched_fence *fence)
 {
 	int ret = dma_fence_signal(&fence->finished);
 
@@ -93,18 +72,18 @@ void amd_sched_fence_finished(struct amd_sched_fence *fence)
 				"was already signaled\n");
 }
 
-static const char *amd_sched_fence_get_driver_name(struct dma_fence *fence)
+static const char *drm_sched_fence_get_driver_name(struct dma_fence *fence)
 {
-	return "amd_sched";
+	return "drm_sched";
 }
 
-static const char *amd_sched_fence_get_timeline_name(struct dma_fence *f)
+static const char *drm_sched_fence_get_timeline_name(struct dma_fence *f)
 {
-	struct amd_sched_fence *fence = to_amd_sched_fence(f);
+	struct drm_sched_fence *fence = to_drm_sched_fence(f);
 	return (const char *)fence->sched->name;
 }
 
-static bool amd_sched_fence_enable_signaling(struct dma_fence *f)
+static bool drm_sched_fence_enable_signaling(struct dma_fence *f)
 {
 	return true;
 }
@@ -116,10 +95,10 @@ static bool amd_sched_fence_enable_signaling(struct dma_fence *f)
  *
  * Free up the fence memory after the RCU grace period.
  */
-static void amd_sched_fence_free(struct rcu_head *rcu)
+static void drm_sched_fence_free(struct rcu_head *rcu)
 {
 	struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
-	struct amd_sched_fence *fence = to_amd_sched_fence(f);
+	struct drm_sched_fence *fence = to_drm_sched_fence(f);
 
 	dma_fence_put(fence->parent);
 	kmem_cache_free(sched_fence_slab, fence);
@@ -133,11 +112,11 @@ static void amd_sched_fence_free(struct rcu_head *rcu)
  * This function is called when the reference count becomes zero.
  * It just RCU schedules freeing up the fence.
  */
-static void amd_sched_fence_release_scheduled(struct dma_fence *f)
+static void drm_sched_fence_release_scheduled(struct dma_fence *f)
 {
-	struct amd_sched_fence *fence = to_amd_sched_fence(f);
+	struct drm_sched_fence *fence = to_drm_sched_fence(f);
 
-	call_rcu(&fence->finished.rcu, amd_sched_fence_free);
+	call_rcu(&fence->finished.rcu, drm_sched_fence_free);
 }
 
 /**
@@ -147,27 +126,62 @@ static void amd_sched_fence_release_scheduled(struct dma_fence *f)
  *
  * Drop the extra reference from the scheduled fence to the base fence.
  */
-static void amd_sched_fence_release_finished(struct dma_fence *f)
+static void drm_sched_fence_release_finished(struct dma_fence *f)
 {
-	struct amd_sched_fence *fence = to_amd_sched_fence(f);
+	struct drm_sched_fence *fence = to_drm_sched_fence(f);
 
 	dma_fence_put(&fence->scheduled);
 }
 
-const struct dma_fence_ops amd_sched_fence_ops_scheduled = {
-	.get_driver_name = amd_sched_fence_get_driver_name,
-	.get_timeline_name = amd_sched_fence_get_timeline_name,
-	.enable_signaling = amd_sched_fence_enable_signaling,
+const struct dma_fence_ops drm_sched_fence_ops_scheduled = {
+	.get_driver_name = drm_sched_fence_get_driver_name,
+	.get_timeline_name = drm_sched_fence_get_timeline_name,
+	.enable_signaling = drm_sched_fence_enable_signaling,
 	.signaled = NULL,
 	.wait = dma_fence_default_wait,
-	.release = amd_sched_fence_release_scheduled,
+	.release = drm_sched_fence_release_scheduled,
 };
 
-const struct dma_fence_ops amd_sched_fence_ops_finished = {
-	.get_driver_name = amd_sched_fence_get_driver_name,
-	.get_timeline_name = amd_sched_fence_get_timeline_name,
-	.enable_signaling = amd_sched_fence_enable_signaling,
+const struct dma_fence_ops drm_sched_fence_ops_finished = {
+	.get_driver_name = drm_sched_fence_get_driver_name,
+	.get_timeline_name = drm_sched_fence_get_timeline_name,
+	.enable_signaling = drm_sched_fence_enable_signaling,
 	.signaled = NULL,
 	.wait = dma_fence_default_wait,
-	.release = amd_sched_fence_release_finished,
+	.release = drm_sched_fence_release_finished,
 };
+
+struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
+{
+	if (f->ops == &drm_sched_fence_ops_scheduled)
+		return container_of(f, struct drm_sched_fence, scheduled);
+
+	if (f->ops == &drm_sched_fence_ops_finished)
+		return container_of(f, struct drm_sched_fence, finished);
+
+	return NULL;
+}
+EXPORT_SYMBOL(to_drm_sched_fence);
+
+struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
+					       void *owner)
+{
+	struct drm_sched_fence *fence = NULL;
+	unsigned seq;
+
+	fence = kmem_cache_zalloc(sched_fence_slab, GFP_KERNEL);
+	if (fence == NULL)
+		return NULL;
+
+	fence->owner = owner;
+	fence->sched = entity->sched;
+	spin_lock_init(&fence->lock);
+
+	seq = atomic_inc_return(&entity->fence_seq);
+	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
+		       &fence->lock, entity->fence_context, seq);
+	dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
+		       &fence->lock, entity->fence_context + 1, seq);
+
+	return fence;
+}
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
new file mode 100644
index 000000000000..870ce9a693d3
--- /dev/null
+++ b/include/drm/gpu_scheduler.h
@@ -0,0 +1,174 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _DRM_GPU_SCHEDULER_H_
+#define _DRM_GPU_SCHEDULER_H_
+
+#include <linux/kfifo.h>
+#include <linux/dma-fence.h>
+
+struct drm_gpu_scheduler;
+struct drm_sched_rq;
+
+/**
+ * A scheduler entity is a wrapper around a job queue or a group
+ * of other entities. Entities take turns emitting jobs from their
+ * job queues to the corresponding hardware ring, based on the
+ * scheduling policy.
+ */
+struct drm_sched_entity {
+	struct list_head		list;
+	struct drm_sched_rq		*rq;
+	spinlock_t			rq_lock;
+	struct drm_gpu_scheduler	*sched;
+
+	spinlock_t			queue_lock;
+	struct kfifo			job_queue;
+
+	atomic_t			fence_seq;
+	uint64_t			fence_context;
+
+	struct dma_fence		*dependency;
+	struct dma_fence_cb		cb;
+};
+
+/**
+ * A run queue is a set of entities scheduling command submissions for
+ * one specific ring. It implements the scheduling policy that selects
+ * the next entity to emit commands from.
+ */
+struct drm_sched_rq {
+	spinlock_t			lock;
+	struct list_head		entities;
+	struct drm_sched_entity		*current_entity;
+};
+
+struct drm_sched_fence {
+	struct dma_fence		scheduled;
+	struct dma_fence		finished;
+	struct dma_fence_cb		cb;
+	struct dma_fence		*parent;
+	struct drm_gpu_scheduler	*sched;
+	spinlock_t			lock;
+	void				*owner;
+};
+
+struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
+
+struct drm_sched_job {
+	struct drm_gpu_scheduler	*sched;
+	struct drm_sched_entity		*s_entity;
+	struct drm_sched_fence		*s_fence;
+	struct dma_fence_cb		finish_cb;
+	struct work_struct		finish_work;
+	struct list_head		node;
+	struct delayed_work		work_tdr;
+	uint64_t			id;
+	atomic_t			karma;
+};
+
+static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
+					    int threshold)
+{
+	return (s_job && atomic_inc_return(&s_job->karma) > threshold);
+}
+
+/**
+ * Define the backend operations called by the scheduler;
+ * these functions should be implemented on the driver side.
+ */
+struct drm_sched_backend_ops {
+	struct dma_fence *(*dependency)(struct drm_sched_job *sched_job);
+	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
+	void (*timedout_job)(struct drm_sched_job *sched_job);
+	void (*free_job)(struct drm_sched_job *sched_job);
+};
+
+enum drm_sched_priority {
+	DRM_SCHED_PRIORITY_MIN,
+	DRM_SCHED_PRIORITY_LOW = DRM_SCHED_PRIORITY_MIN,
+	DRM_SCHED_PRIORITY_NORMAL,
+	DRM_SCHED_PRIORITY_HIGH_SW,
+	DRM_SCHED_PRIORITY_HIGH_HW,
+	DRM_SCHED_PRIORITY_KERNEL,
+	DRM_SCHED_PRIORITY_MAX,
+	DRM_SCHED_PRIORITY_INVALID = -1,
+	DRM_SCHED_PRIORITY_UNSET = -2
+};
+
+/**
+ * One scheduler is implemented for each hardware ring.
+ */
+struct drm_gpu_scheduler {
+	const struct drm_sched_backend_ops	*ops;
+	uint32_t			hw_submission_limit;
+	long				timeout;
+	const char			*name;
+	struct drm_sched_rq		sched_rq[DRM_SCHED_PRIORITY_MAX];
+	wait_queue_head_t		wake_up_worker;
+	wait_queue_head_t		job_scheduled;
+	atomic_t			hw_rq_count;
+	atomic64_t			job_id_count;
+	struct task_struct		*thread;
+	struct list_head		ring_mirror_list;
+	spinlock_t			job_list_lock;
+};
+
+int drm_sched_init(struct drm_gpu_scheduler *sched,
+		   const struct drm_sched_backend_ops *ops,
+		   uint32_t hw_submission, long timeout, const char *name);
+void drm_sched_fini(struct drm_gpu_scheduler *sched);
+
+int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
+			  struct drm_sched_entity *entity,
+			  struct drm_sched_rq *rq,
+			  uint32_t jobs);
+void drm_sched_entity_fini(struct drm_gpu_scheduler *sched,
+			   struct drm_sched_entity *entity);
+void drm_sched_entity_push_job(struct drm_sched_job *sched_job);
+void drm_sched_entity_set_rq(struct drm_sched_entity *entity,
+			     struct drm_sched_rq *rq);
+
+int drm_sched_fence_slab_init(void);
+void drm_sched_fence_slab_fini(void);
+
+struct drm_sched_fence *drm_sched_fence_create(
+	struct drm_sched_entity *s_entity, void *owner);
+void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
+void drm_sched_fence_finished(struct drm_sched_fence *fence);
+int drm_sched_job_init(struct drm_sched_job *job,
+		       struct drm_gpu_scheduler *sched,
+		       struct drm_sched_entity *entity,
+		       void *owner);
+void drm_sched_hw_job_reset(struct drm_gpu_scheduler *sched);
+void drm_sched_job_recovery(struct drm_gpu_scheduler *sched);
+bool drm_sched_dependency_optimized(struct dma_fence* fence,
+				    struct drm_sched_entity *entity);
+void drm_sched_job_kickout(struct drm_sched_job *s_job);
+
+static inline enum drm_sched_priority
+drm_sched_get_job_priority(struct drm_sched_job *job)
+{
+	return (job->s_entity->rq - job->sched->sched_rq);
+}
+#endif
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h b/include/drm/gpu_scheduler_trace.h
similarity index 83%
rename from drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h
rename to include/drm/gpu_scheduler_trace.h
index 283a0dc25e84..a745a9442c72 100644
--- a/drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h
+++ b/include/drm/gpu_scheduler_trace.h
@@ -9,14 +9,14 @@
 #include <drm/drmP.h>
 
 #undef TRACE_SYSTEM
-#define TRACE_SYSTEM gpu_sched
-#define TRACE_INCLUDE_FILE gpu_sched_trace
+#define TRACE_SYSTEM gpu_scheduler
+#define TRACE_INCLUDE_FILE gpu_scheduler_trace
 
-TRACE_EVENT(amd_sched_job,
-	    TP_PROTO(struct amd_sched_job *sched_job),
+TRACE_EVENT(drm_sched_job,
+	    TP_PROTO(struct drm_sched_job *sched_job),
 	    TP_ARGS(sched_job),
 	    TP_STRUCT__entry(
-			     __field(struct amd_sched_entity *, entity)
+			     __field(struct drm_sched_entity *, entity)
 			     __field(struct dma_fence *, fence)
 			     __field(const char *, name)
 			     __field(uint64_t, id)
@@ -40,8 +40,8 @@ TRACE_EVENT(amd_sched_job,
 		      __entry->job_count, __entry->hw_job_count)
 );
 
-TRACE_EVENT(amd_sched_process_job,
-	    TP_PROTO(struct amd_sched_fence *fence),
+TRACE_EVENT(drm_sched_process_job,
+	    TP_PROTO(struct drm_sched_fence *fence),
 	    TP_ARGS(fence),
 	    TP_STRUCT__entry(
 		    __field(struct dma_fence *, fence)
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 2/2] drm/sched: move fence slab handling to module init/exit
       [not found] ` <20171201152901.3626-1-l.stach-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
@ 2017-12-01 15:29   ` Lucas Stach
  2017-12-01 15:55   ` [PATCH 0/2] Move scheduler out of AMDGPU Christian König
  1 sibling, 0 replies; 10+ messages in thread
From: Lucas Stach @ 2017-12-01 15:29 UTC (permalink / raw)
  To: Christian König, Alex Deucher
  Cc: David Airlie, kernel-bIcnvbaLZ9MEGnE8C9+IrQ,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	patchwork-lst-bIcnvbaLZ9MEGnE8C9+IrQ

The fence slab setup is the only part of the scheduler that must not be
called from multiple drivers. Move it to module init/exit so it is done a
single time, when the scheduler module is loaded.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |  8 --------
 drivers/gpu/drm/scheduler/sched_fence.c | 12 ++++++++----
 include/drm/gpu_scheduler.h             |  3 ---
 3 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index b23c83c59725..18b7fce2fb27 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -906,10 +906,6 @@ static int __init amdgpu_init(void)
 	if (r)
 		goto error_fence;
 
-	r = drm_sched_fence_slab_init();
-	if (r)
-		goto error_sched;
-
 	if (vgacon_text_force()) {
 		DRM_ERROR("VGACON disables amdgpu kernel modesetting.\n");
 		return -EINVAL;
@@ -922,9 +918,6 @@ static int __init amdgpu_init(void)
 	/* let modprobe override vga console setting */
 	return pci_register_driver(pdriver);
 
-error_sched:
-	amdgpu_fence_slab_fini();
-
 error_fence:
 	amdgpu_sync_fini();
 
@@ -938,7 +931,6 @@ static void __exit amdgpu_exit(void)
 	pci_unregister_driver(pdriver);
 	amdgpu_unregister_atpx_handler();
 	amdgpu_sync_fini();
-	drm_sched_fence_slab_fini();
 	amdgpu_fence_slab_fini();
 }
 
diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index f6f2955890c4..69aab086b913 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -29,7 +29,7 @@
 
 static struct kmem_cache *sched_fence_slab;
 
-int drm_sched_fence_slab_init(void)
+static int __init drm_sched_fence_slab_init(void)
 {
 	sched_fence_slab = kmem_cache_create(
 		"drm_sched_fence", sizeof(struct drm_sched_fence), 0,
@@ -39,14 +39,12 @@ int drm_sched_fence_slab_init(void)
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(drm_sched_fence_slab_init);
 
-void drm_sched_fence_slab_fini(void)
+static void __exit drm_sched_fence_slab_fini(void)
 {
 	rcu_barrier();
 	kmem_cache_destroy(sched_fence_slab);
 }
-EXPORT_SYMBOL_GPL(drm_sched_fence_slab_fini);
 
 void drm_sched_fence_scheduled(struct drm_sched_fence *fence)
 {
@@ -185,3 +183,9 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
 
 	return fence;
 }
+
+module_init(drm_sched_fence_slab_init);
+module_exit(drm_sched_fence_slab_fini);
+
+MODULE_DESCRIPTION("DRM GPU scheduler");
+MODULE_LICENSE("GPL and additional rights");
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 870ce9a693d3..2e165093a789 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -149,9 +149,6 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job);
 void drm_sched_entity_set_rq(struct drm_sched_entity *entity,
 			     struct drm_sched_rq *rq);
 
-int drm_sched_fence_slab_init(void);
-void drm_sched_fence_slab_fini(void);
-
 struct drm_sched_fence *drm_sched_fence_create(
 	struct drm_sched_entity *s_entity, void *owner);
 void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
-- 
2.11.0
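For readers outside amdgpu: the hunk above replaces the exported drm_sched_fence_slab_init()/drm_sched_fence_slab_fini() pair with module_init()/module_exit() hooks, so the fence slab lives exactly as long as the scheduler module and no driver has to set it up or unwind it. A rough userspace sketch of that lifecycle, with plain calloc() standing in for the kmem_cache and hypothetical fence_* names (not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for struct drm_sched_fence; only the lifecycle matters here. */
struct fence { unsigned long seqno; };

static int cache_ready;  /* models whether kmem_cache_create() has run */

/* ~ drm_sched_fence_slab_init(), run once via module_init() */
static int fence_slab_init(void)
{
    cache_ready = 1;  /* real code: sched_fence_slab = kmem_cache_create(...) */
    return 0;
}

/* ~ drm_sched_fence_slab_fini(), run once via module_exit() */
static void fence_slab_fini(void)
{
    cache_ready = 0;  /* real code: rcu_barrier(); kmem_cache_destroy(...) */
}

/* Allocation is only valid between init and fini. */
static struct fence *fence_alloc(void)
{
    return cache_ready ? calloc(1, sizeof(struct fence)) : NULL;
}
```

The point of the patch is exactly this ownership change: amdgpu_init() no longer calls (or has to unwind) the scheduler's slab setup, which is why the error_sched label disappears.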

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH 0/2] Move scheduler out of AMDGPU
       [not found] ` <20171201152901.3626-1-l.stach-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
  2017-12-01 15:29   ` [PATCH 2/2] drm/sched: move fence slab handling to module init/exit Lucas Stach
@ 2017-12-01 15:55   ` Christian König
  2017-12-01 16:04     ` Lucas Stach
       [not found]     ` <7a884709-8717-cf45-152d-60bf8cb0f7e2-5C7GfCeVMHo@public.gmane.org>
  1 sibling, 2 replies; 10+ messages in thread
From: Christian König @ 2017-12-01 15:55 UTC (permalink / raw)
  To: Lucas Stach, Alex Deucher
  Cc: Grodzovsky, Andrey, David Airlie,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	patchwork-lst-bIcnvbaLZ9MEGnE8C9+IrQ,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	kernel-bIcnvbaLZ9MEGnE8C9+IrQ

On 01.12.2017 at 16:28, Lucas Stach wrote:
> Hi all,
>
> so this is the first step to make the marvelous AMDGPU scheduler useable
> for other drivers. I have a (mostly) working prototype of Etnaviv using
> the scheduler, but those patches need to keep baking for a while.
>
> I'm sending this out as I want to avoid rebasing this change too much
> and don't want to take people by surprise when the Etnaviv implementation
> surfaces. Also this might need some coordination between AMDGPU and
> Etnaviv, which might be good to get going now.
>
> Please speak up now if you have any objections or comments.

Looks good to me, but question is what is this based upon?

I strongly assume drm-next, so question is now if we have any patches 
inside amd branches we should apply before doing this.

CCing Andrey as well cause he has some tasks assigned around the 
scheduler as well.

Regards,
Christian.

>
> Regards,
> Lucas
>
> Lucas Stach (2):
>    drm: move amd_gpu_scheduler into common location
>    drm/sched: move fence slab handling to module init/exit
>
>   drivers/gpu/drm/Kconfig                            |   5 +
>   drivers/gpu/drm/Makefile                           |   1 +
>   drivers/gpu/drm/amd/amdgpu/Makefile                |   5 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h                |  16 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c             |   8 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c            |  38 +--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |  12 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |   8 -
>   drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c          |   4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c            |  22 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c           |  14 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h           |  12 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c          |  20 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h          |   2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c           |   6 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c            |   8 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h            |   4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |   8 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h            |   4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |   8 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h            |   2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c            |  14 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h            |   4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |  10 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |   4 +-
>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |   4 +-
>   drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c              |   8 +-
>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   8 +-
>   drivers/gpu/drm/amd/scheduler/gpu_scheduler.h      | 185 --------------
>   drivers/gpu/drm/scheduler/Makefile                 |   4 +
>   .../gpu/drm/{amd => }/scheduler/gpu_scheduler.c    | 281 +++++++++++----------
>   drivers/gpu/drm/{amd => }/scheduler/sched_fence.c  | 122 +++++----
>   include/drm/gpu_scheduler.h                        | 171 +++++++++++++
>   .../drm/gpu_scheduler_trace.h                      |  14 +-
>   34 files changed, 525 insertions(+), 511 deletions(-)
>   delete mode 100644 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
>   create mode 100644 drivers/gpu/drm/scheduler/Makefile
>   rename drivers/gpu/drm/{amd => }/scheduler/gpu_scheduler.c (64%)
>   rename drivers/gpu/drm/{amd => }/scheduler/sched_fence.c (58%)
>   create mode 100644 include/drm/gpu_scheduler.h
>   rename drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h => include/drm/gpu_scheduler_trace.h (83%)
>


* Re: [PATCH 0/2] Move scheduler out of AMDGPU
  2017-12-01 15:55   ` [PATCH 0/2] Move scheduler out of AMDGPU Christian König
@ 2017-12-01 16:04     ` Lucas Stach
       [not found]     ` <7a884709-8717-cf45-152d-60bf8cb0f7e2-5C7GfCeVMHo@public.gmane.org>
  1 sibling, 0 replies; 10+ messages in thread
From: Lucas Stach @ 2017-12-01 16:04 UTC (permalink / raw)
  To: Christian König, Alex Deucher
  Cc: David Airlie, amd-gfx, patchwork-lst, dri-devel, kernel

On Friday, 01.12.2017, 16:55 +0100, Christian König wrote:
> On 01.12.2017 at 16:28, Lucas Stach wrote:
> > Hi all,
> > 
> > so this is the first step to make the marvelous AMDGPU scheduler
> > useable
> > for other drivers. I have a (mostly) working prototype of Etnaviv
> > using
> > the scheduler, but those patches need to keep baking for a while.
> > 
> > I'm sending this out as I want to avoid rebasing this change too
> > much
> > and don't want to take people by surprise when the Etnaviv
> > implementation
> > surfaces. Also this might need some coordination between AMDGPU and
> > Etnaviv, which might be good to get going now.
> > 
> > Please speak up now if you have any objections or comments.
> 
> Looks good to me, but question is what is this based upon?
> 
> I strongly assume drm-next, so question is now if we have any
> patches 
> inside amd branches we should apply before doing this.

For now this is based on 4.15-rc1, where the only difference to drm-
next in the scheduler code is currently this:

------------------------------------>8------------------------------

diff --git a/drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h b/drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h
index 8bd38102b58e..283a0dc25e84 100644
--- a/drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h
+++ b/drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h
@@ -1,3 +1,4 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 #if !defined(_GPU_SCHED_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
 #define _GPU_SCHED_TRACE_H_
 
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
index e4d3b4ec4e92..92ec663fdada 100644
--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
@@ -188,7 +188,7 @@ static bool amd_sched_entity_is_ready(struct amd_sched_entity *entity)
        if (kfifo_is_empty(&entity->job_queue))
                return false;
 
-       if (ACCESS_ONCE(entity->dependency))
+       if (READ_ONCE(entity->dependency))
                return false;
 
        return true;

--------------------------------------->8------------------------------
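As an aside on that hunk: ACCESS_ONCE() was retired upstream in favor of READ_ONCE(), and the semantics at this call site are unchanged — both force the compiler to perform exactly one real load of entity->dependency rather than caching or re-reading it. A simplified userspace model (the kernel macro additionally handles non-scalar sizes, so treat this as a sketch):

```c
#include <assert.h>

/* Minimal model of the kernel's READ_ONCE(): the volatile cast makes the
 * compiler emit exactly one load of x at this point, so a field that
 * another thread may update concurrently is sampled once, not re-fetched
 * or torn by the optimizer. */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

struct entity { void *dependency; };

/* Mirrors the check in amd_sched_entity_is_ready() above:
 * an entity with a pending dependency is not ready to run. */
static int entity_is_ready(struct entity *e)
{
    if (READ_ONCE(e->dependency))
        return 0;
    return 1;
}
```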

I'm fine with rebasing this on whatever AMDGPU guys prefer, but this
requires a stable branch with all relevant patches included, so we can
use this as a synchronization point for the move.

> CCing Andrey as well cause he has some tasks assigned around the 
> scheduler as well.

Thanks,
Lucas
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* RE: [PATCH 0/2] Move scheduler out of AMDGPU
       [not found]     ` <7a884709-8717-cf45-152d-60bf8cb0f7e2-5C7GfCeVMHo@public.gmane.org>
@ 2017-12-02 12:14       ` Liu, Monk
       [not found]         ` <BLUPR12MB04491E836AFBE4759784405E843E0-7LeqcoF/hwpTIQvHjXdJlwdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
  2017-12-04 21:47       ` Alex Deucher
  1 sibling, 1 reply; 10+ messages in thread
From: Liu, Monk @ 2017-12-02 12:14 UTC (permalink / raw)
  To: Koenig, Christian, Lucas Stach, Deucher, Alexander
  Cc: Grodzovsky, Andrey, David Airlie,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	patchwork-lst-bIcnvbaLZ9MEGnE8C9+IrQ,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	kernel-bIcnvbaLZ9MEGnE8C9+IrQ

I'm wondering if GPU reset still works after this home move ...



-----Original Message-----
From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf Of Christian König
Sent: Friday, December 1, 2017 11:55 PM
To: Lucas Stach <l.stach@pengutronix.de>; Deucher, Alexander <Alexander.Deucher@amd.com>
Cc: Grodzovsky, Andrey <Andrey.Grodzovsky@amd.com>; David Airlie <airlied@linux.ie>; amd-gfx@lists.freedesktop.org; patchwork-lst@pengutronix.de; dri-devel@lists.freedesktop.org; kernel@pengutronix.de
Subject: Re: [PATCH 0/2] Move scheduler out of AMDGPU

On 01.12.2017 at 16:28, Lucas Stach wrote:
> Hi all,
>
> so this is the first step to make the marvelous AMDGPU scheduler 
> useable for other drivers. I have a (mostly) working prototype of 
> Etnaviv using the scheduler, but those patches need to keep baking for a while.
>
> I'm sending this out as I want to avoid rebasing this change too much 
> and don't want to take people by surprise when the Etnaviv 
> implementation surfaces. Also this might need some coordination 
> between AMDGPU and Etnaviv, which might be good to get going now.
>
> Please speak up now if you have any objections or comments.

Looks good to me, but question is what is this based upon?

I strongly assume drm-next, so question is now if we have any patches inside amd branches we should apply before doing this.

CCing Andrey as well cause he has some tasks assigned around the scheduler as well.

Regards,
Christian.

>
> Regards,
> Lucas
>
> Lucas Stach (2):
>    drm: move amd_gpu_scheduler into common location
>    drm/sched: move fence slab handling to module init/exit
>
>   drivers/gpu/drm/Kconfig                            |   5 +
>   drivers/gpu/drm/Makefile                           |   1 +
>   drivers/gpu/drm/amd/amdgpu/Makefile                |   5 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h                |  16 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c             |   8 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c            |  38 +--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |  12 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |   8 -
>   drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c          |   4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c            |  22 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c           |  14 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h           |  12 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c          |  20 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h          |   2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c           |   6 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c            |   8 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h            |   4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |   8 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h            |   4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |   8 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h            |   2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c            |  14 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h            |   4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |  10 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |   4 +-
>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |   4 +-
>   drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c              |   8 +-
>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   8 +-
>   drivers/gpu/drm/amd/scheduler/gpu_scheduler.h      | 185 --------------
>   drivers/gpu/drm/scheduler/Makefile                 |   4 +
>   .../gpu/drm/{amd => }/scheduler/gpu_scheduler.c    | 281 +++++++++++----------
>   drivers/gpu/drm/{amd => }/scheduler/sched_fence.c  | 122 +++++----
>   include/drm/gpu_scheduler.h                        | 171 +++++++++++++
>   .../drm/gpu_scheduler_trace.h                      |  14 +-
>   34 files changed, 525 insertions(+), 511 deletions(-)
>   delete mode 100644 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
>   create mode 100644 drivers/gpu/drm/scheduler/Makefile
>   rename drivers/gpu/drm/{amd => }/scheduler/gpu_scheduler.c (64%)
>   rename drivers/gpu/drm/{amd => }/scheduler/sched_fence.c (58%)
>   create mode 100644 include/drm/gpu_scheduler.h
>   rename drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h => include/drm/gpu_scheduler_trace.h (83%)
>


* Re: [PATCH 0/2] Move scheduler out of AMDGPU
       [not found]         ` <BLUPR12MB04491E836AFBE4759784405E843E0-7LeqcoF/hwpTIQvHjXdJlwdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
@ 2017-12-04  9:42           ` Lucas Stach
  0 siblings, 0 replies; 10+ messages in thread
From: Lucas Stach @ 2017-12-04  9:42 UTC (permalink / raw)
  To: Liu, Monk, Koenig, Christian, Deucher, Alexander
  Cc: Grodzovsky, Andrey, David Airlie,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	patchwork-lst-bIcnvbaLZ9MEGnE8C9+IrQ,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	kernel-bIcnvbaLZ9MEGnE8C9+IrQ

Hi,

Am Samstag, den 02.12.2017, 12:14 +0000 schrieb Liu, Monk:
> I'm wondering if GPU reset still works after this home move ...

Why wouldn't it continue to work? After all this is just a code move
that doesn't change anything about the inner workings of the scheduler.

Regards,
Lucas

> 
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf Of Christian König
> Sent: Friday, December 1, 2017 11:55 PM
> To: Lucas Stach <l.stach@pengutronix.de>; Deucher, Alexander <Alexander.Deucher@amd.com>
> Cc: Grodzovsky, Andrey <Andrey.Grodzovsky@amd.com>; David Airlie <airlied@linux.ie>; amd-gfx@lists.freedesktop.org; patchwork-lst@pengutronix.de; dri-devel@lists.freedesktop.org; kernel@pengutronix.de
> Subject: Re: [PATCH 0/2] Move scheduler out of AMDGPU
> 
> On 01.12.2017 at 16:28, Lucas Stach wrote:
> > Hi all,
> > 
> > so this is the first step to make the marvelous AMDGPU scheduler 
> > useable for other drivers. I have a (mostly) working prototype of 
> > Etnaviv using the scheduler, but those patches need to keep baking
> > for a while.
> > 
> > I'm sending this out as I want to avoid rebasing this change too
> > much 
> > and don't want to take people by surprise when the Etnaviv 
> > implementation surfaces. Also this might need some coordination 
> > between AMDGPU and Etnaviv, which might be good to get going now.
> > 
> > Please speak up now if you have any objections or comments.
> 
> Looks good to me, but question is what is this based upon?
> 
> I strongly assume drm-next, so question is now if we have any patches
> inside amd branches we should apply before doing this.
> 
> CCing Andrey as well cause he has some tasks assigned around the
> scheduler as well.
> 
> Regards,
> Christian.
> 
> > 
> > Regards,
> > Lucas
> > 
> > Lucas Stach (2):
> >    drm: move amd_gpu_scheduler into common location
> >    drm/sched: move fence slab handling to module init/exit
> > 
> >   drivers/gpu/drm/Kconfig                            |   5 +
> >   drivers/gpu/drm/Makefile                           |   1 +
> >   drivers/gpu/drm/amd/amdgpu/Makefile                |   5 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu.h                |  16 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c             |   8 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c            |  38 +--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |  12 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |   8 -
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c          |   4 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c            |  22 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c           |  14 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h           |  12 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c          |  20 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h          |   2 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c           |   6 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c            |   8 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h            |   4 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |   8 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h            |   4 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |   8 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h            |   2 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c            |  14 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h            |   4 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |  10 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |   4 +-
> >   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |   4 +-
> >   drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c              |   8 +-
> >   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   8 +-
> >   drivers/gpu/drm/amd/scheduler/gpu_scheduler.h      | 185 --------------
> >   drivers/gpu/drm/scheduler/Makefile                 |   4 +
> >   .../gpu/drm/{amd => }/scheduler/gpu_scheduler.c    | 281 +++++++++++----------
> >   drivers/gpu/drm/{amd => }/scheduler/sched_fence.c  | 122 +++++----
> >   include/drm/gpu_scheduler.h                        | 171 +++++++++++++
> >   .../drm/gpu_scheduler_trace.h                      |  14 +-
> >   34 files changed, 525 insertions(+), 511 deletions(-)
> >   delete mode 100644 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
> >   create mode 100644 drivers/gpu/drm/scheduler/Makefile
> >   rename drivers/gpu/drm/{amd => }/scheduler/gpu_scheduler.c (64%)
> >   rename drivers/gpu/drm/{amd => }/scheduler/sched_fence.c (58%)
> >   create mode 100644 include/drm/gpu_scheduler.h
> >   rename drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h => include/drm/gpu_scheduler_trace.h (83%)
> > 
> 
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

* Re: [PATCH 0/2] Move scheduler out of AMDGPU
       [not found]     ` <7a884709-8717-cf45-152d-60bf8cb0f7e2-5C7GfCeVMHo@public.gmane.org>
  2017-12-02 12:14       ` Liu, Monk
@ 2017-12-04 21:47       ` Alex Deucher
       [not found]         ` <CADnq5_OpH9eZ0iy5jRhY+_SNuUDi2yCL+XWHxa53t2UQVOcqGw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  1 sibling, 1 reply; 10+ messages in thread
From: Alex Deucher @ 2017-12-04 21:47 UTC (permalink / raw)
  To: Christian König
  Cc: David Airlie, Maling list - DRI developers,
	patchwork-lst-bIcnvbaLZ9MEGnE8C9+IrQ, amd-gfx list, Sascha Hauer,
	Alex Deucher, Lucas Stach

On Fri, Dec 1, 2017 at 10:55 AM, Christian König
<christian.koenig@amd.com> wrote:
> On 01.12.2017 at 16:28, Lucas Stach wrote:
>>
>> Hi all,
>>
>> so this is the first step to make the marvelous AMDGPU scheduler useable
>> for other drivers. I have a (mostly) working prototype of Etnaviv using
>> the scheduler, but those patches need to keep baking for a while.
>>
>> I'm sending this out as I want to avoid rebasing this change too much
>> and don't want to take people by surprise when the Etnaviv implementation
>> surfaces. Also this might need some coordination between AMDGPU and
>> Etnaviv, which might be good to get going now.
>>
>> Please speak up now if you have any objections or comments.
>
>
> Looks good to me, but question is what is this based upon?
>
> I strongly assume drm-next, so question is now if we have any patches inside
> amd branches we should apply before doing this.

We have a bunch of changes queued up which will go upstream for 4.16.
See amd-staging-drm-next:
https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-drm-next
which is a mirror of our main development branch or:
https://cgit.freedesktop.org/~agd5f/linux/log/?h=drm-next-4.16-wip
which is what is currently queued for 4.16.

Alex

>
> CCing Andrey as well cause he has some tasks assigned around the scheduler
> as well.
>
> Regards,
> Christian.
>
>
>>
>> Regards,
>> Lucas
>>
>> Lucas Stach (2):
>>    drm: move amd_gpu_scheduler into common location
>>    drm/sched: move fence slab handling to module init/exit
>>
>>   drivers/gpu/drm/Kconfig                            |   5 +
>>   drivers/gpu/drm/Makefile                           |   1 +
>>   drivers/gpu/drm/amd/amdgpu/Makefile                |   5 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h                |  16 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c             |   8 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c            |  38 +--
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |  12 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |   8 -
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c          |   4 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c            |  22 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c           |  14 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h           |  12 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c          |  20 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_sched.h          |   2 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c           |   6 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c            |   8 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h            |   4 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |   8 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h            |   4 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |   8 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h            |   2 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c            |  14 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h            |   4 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |  10 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |   4 +-
>>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |   4 +-
>>   drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c              |   8 +-
>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   8 +-
>>   drivers/gpu/drm/amd/scheduler/gpu_scheduler.h      | 185 --------------
>>   drivers/gpu/drm/scheduler/Makefile                 |   4 +
>>   .../gpu/drm/{amd => }/scheduler/gpu_scheduler.c    | 281 +++++++++++----------
>>   drivers/gpu/drm/{amd => }/scheduler/sched_fence.c  | 122 +++++----
>>   include/drm/gpu_scheduler.h                        | 171 +++++++++++++
>>   .../drm/gpu_scheduler_trace.h                      |  14 +-
>>   34 files changed, 525 insertions(+), 511 deletions(-)
>>   delete mode 100644 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
>>   create mode 100644 drivers/gpu/drm/scheduler/Makefile
>>   rename drivers/gpu/drm/{amd => }/scheduler/gpu_scheduler.c (64%)
>>   rename drivers/gpu/drm/{amd => }/scheduler/sched_fence.c (58%)
>>   create mode 100644 include/drm/gpu_scheduler.h
>>   rename drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h => include/drm/gpu_scheduler_trace.h (83%)
>>
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

* Re: [PATCH 0/2] Move scheduler out of AMDGPU
       [not found]         ` <CADnq5_OpH9eZ0iy5jRhY+_SNuUDi2yCL+XWHxa53t2UQVOcqGw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-12-05  9:18           ` Lucas Stach
  2017-12-05 22:31             ` Alex Deucher
  0 siblings, 1 reply; 10+ messages in thread
From: Lucas Stach @ 2017-12-05  9:18 UTC (permalink / raw)
  To: Alex Deucher, Christian König
  Cc: David Airlie, Maling list - DRI developers,
	patchwork-lst-bIcnvbaLZ9MEGnE8C9+IrQ, amd-gfx list, Sascha Hauer,
	Alex Deucher

Hi Alex,

On Monday, 04.12.2017, 16:47 -0500, Alex Deucher wrote:
> On Fri, Dec 1, 2017 at 10:55 AM, Christian König
> > <christian.koenig@amd.com> wrote:
> > On 01.12.2017 at 16:28, Lucas Stach wrote:
> > > 
> > > Hi all,
> > > 
> > > so this is the first step to make the marvelous AMDGPU scheduler useable
> > > for other drivers. I have a (mostly) working prototype of Etnaviv using
> > > the scheduler, but those patches need to keep baking for a while.
> > > 
> > > I'm sending this out as I want to avoid rebasing this change too much
> > > and don't want to take people by surprise when the Etnaviv implementation
> > > surfaces. Also this might need some coordination between AMDGPU and
> > > Etnaviv, which might be good to get going now.
> > > 
> > > Please speak up now if you have any objections or comments.
> > 
> > 
> > Looks good to me, but question is what is this based upon?
> > 
> > I strongly assume drm-next, so question is now if we have any patches inside
> > amd branches we should apply before doing this.
> 
> We have a bunch of changes queued up which will go upstream for 4.16.
> See amd-staging-drm-next:
> https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-drm-next
> which is a mirror of our main development branch or:
> https://cgit.freedesktop.org/~agd5f/linux/log/?h=drm-next-4.16-wip
> which is what is currently queued for 4.16.

Is this branch/tag stable?

How would you like to handle the merge? Should I send out patches for
you to apply and you get me a stable branch to pull into etnaviv, or
should I provide a stable branch based on the above to pull into both
amdgpu and etnaviv?

Regards,
Lucas

* Re: [PATCH 0/2] Move scheduler out of AMDGPU
  2017-12-05  9:18           ` Lucas Stach
@ 2017-12-05 22:31             ` Alex Deucher
  0 siblings, 0 replies; 10+ messages in thread
From: Alex Deucher @ 2017-12-05 22:31 UTC (permalink / raw)
  To: Lucas Stach
  Cc: David Airlie, Maling list - DRI developers, patchwork-lst,
	amd-gfx list, Sascha Hauer, Alex Deucher, Christian König

On Tue, Dec 5, 2017 at 4:18 AM, Lucas Stach <l.stach@pengutronix.de> wrote:
> Hi Alex,
>
> On Monday, 04.12.2017, 16:47 -0500, Alex Deucher wrote:
>> On Fri, Dec 1, 2017 at 10:55 AM, Christian König
>> > <christian.koenig@amd.com> wrote:
> > On 01.12.2017 at 16:28, Lucas Stach wrote:
>> > >
>> > > Hi all,
>> > >
>> > > so this is the first step to make the marvelous AMDGPU scheduler useable
>> > > for other drivers. I have a (mostly) working prototype of Etnaviv using
>> > > the scheduler, but those patches need to keep baking for a while.
>> > >
>> > > I'm sending this out as I want to avoid rebasing this change too much
>> > > and don't want to take people by surprise when the Etnaviv implementation
>> > > surfaces. Also this might need some coordination between AMDGPU and
>> > > Etnaviv, which might be good to get going now.
>> > >
>> > > Please speak up now if you have any objections or comments.
>> >
>> >
>> > Looks good to me, but question is what is this based upon?
>> >
>> > I strongly assume drm-next, so question is now if we have any patches inside
>> > amd branches we should apply before doing this.
>>
>> We have a bunch of changes queued up which will go upstream for 4.16.
>> See amd-staging-drm-next:
>> https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-drm-next
>> which is a mirror of our main development branch or:
>> https://cgit.freedesktop.org/~agd5f/linux/log/?h=drm-next-4.16-wip
>> which is what is currently queued for 4.16.
>
> Is this branch/tag stable?
>

amd-staging-drm-next rebases periodically.  The wip branches are not
stable.  I'll be sending out a pull request for 4.16 in the next day
or two and that branch (drm-next-4.16) will be stable.

> How would you like to handle the merge? Should I send out patches for
> you to apply and you get me a stable branch to pull into etnaviv, or
> should I provide a stable branch based on the above to pull into both
> amdgpu and etnaviv?

If you want to send patches against amd-staging-drm-next, I can pull
them into our system and include them in my next pull request after
we've tested them internally.  At that point, you can either pull my
stable branch or wait until it gets into Dave's drm-next branch.

Alex

end of thread, other threads:[~2017-12-05 22:31 UTC | newest]

Thread overview: 10+ messages
2017-12-01 15:28 [PATCH 0/2] Move scheduler out of AMDGPU Lucas Stach
2017-12-01 15:29 ` [PATCH 1/2] drm: move amd_gpu_scheduler into common location Lucas Stach
     [not found] ` <20171201152901.3626-1-l.stach-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
2017-12-01 15:29   ` [PATCH 2/2] drm/sched: move fence slab handling to module init/exit Lucas Stach
2017-12-01 15:55   ` [PATCH 0/2] Move scheduler out of AMDGPU Christian König
2017-12-01 16:04     ` Lucas Stach
     [not found]     ` <7a884709-8717-cf45-152d-60bf8cb0f7e2-5C7GfCeVMHo@public.gmane.org>
2017-12-02 12:14       ` Liu, Monk
     [not found]         ` <BLUPR12MB04491E836AFBE4759784405E843E0-7LeqcoF/hwpTIQvHjXdJlwdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
2017-12-04  9:42           ` Lucas Stach
2017-12-04 21:47       ` Alex Deucher
     [not found]         ` <CADnq5_OpH9eZ0iy5jRhY+_SNuUDi2yCL+XWHxa53t2UQVOcqGw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-12-05  9:18           ` Lucas Stach
2017-12-05 22:31             ` Alex Deucher
