* [PATCH 0/6] *** Dedicated vmid per process v2 ***
@ 2017-04-24  5:57 Chunming Zhou
       [not found] ` <1493013460-13344-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Chunming Zhou @ 2017-04-24  5:57 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou


The current kernel implementation grabs an idle VMID from the pool when emitting a job, which means:

    Back-to-back submissions from one process may use different VMIDs.
    Submissions to different queues from a single process may use different VMIDs.

This works well in most cases, but it breaks SQ thread trace capture.

The VMID of the submission that sets {SQTT}_BASE, which holds the address of the trace buffer, is stored in the shader engine.

If the profiling application has to use different VMIDs to submit IBs during its lifetime:

    Some traces are not captured, because the jobs are actually submitted with a different VMID.
    Parts of the captured trace may come from a different application that accidentally submitted jobs with the owner's VMID.

V2:
1. Address Christian's comments:
	a. Drop the context flag for tagging the process; add a vm ioctl instead.
	b. Change the order of the patches.
	c. Sync wait only when a VM flush is needed.

2. Address Alex's comment:
	bump the module version

Chunming Zhou (6):
  drm/amdgpu: add vm ioctl
  drm/amdgpu: add dedicated vmid field in vm struct
  drm/amdgpu: reserve vmid by vm ioctl
  drm/amdgpu: add limitation for dedicated vm number v2
  drm/amdgpu: implement grab dedicated vmid V2
  drm/amdgpu: bump module version for reserved vmid

 drivers/gpu/drm/amd/amdgpu/amdgpu.h        |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c    |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c    |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c     | 159 ++++++++++++++++++++++++++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h     |   5 +
 include/uapi/drm/amdgpu_drm.h              |  20 ++++
 7 files changed, 188 insertions(+), 2 deletions(-)

-- 
1.9.1



_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* [PATCH 1/6] drm/amdgpu: add vm ioctl
       [not found] ` <1493013460-13344-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24  5:57   ` Chunming Zhou
       [not found]     ` <1493013460-13344-2-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
  2017-04-24  5:57   ` [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct Chunming Zhou
                     ` (5 subsequent siblings)
  6 siblings, 1 reply; 29+ messages in thread
From: Chunming Zhou @ 2017-04-24  5:57 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

It will be used for reserving vmid.

Change-Id: Ib7169ea999690c8e82d0dcbccdd2d97760c0270a
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 16 ++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  1 +
 include/uapi/drm/amdgpu_drm.h           | 20 ++++++++++++++++++++
 4 files changed, 38 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index cad589a..7004e6c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -1051,6 +1051,7 @@ int amdgpu_get_vblank_timestamp_kms(struct drm_device *dev, unsigned int pipe,
 const struct drm_ioctl_desc amdgpu_ioctls_kms[] = {
 	DRM_IOCTL_DEF_DRV(AMDGPU_GEM_CREATE, amdgpu_gem_create_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(AMDGPU_CTX, amdgpu_ctx_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(AMDGPU_VM, amdgpu_vm_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(AMDGPU_BO_LIST, amdgpu_bo_list_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	/* KMS */
 	DRM_IOCTL_DEF_DRV(AMDGPU_GEM_MMAP, amdgpu_gem_mmap_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index f804d38..eb429c5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2358,3 +2358,19 @@ void amdgpu_vm_manager_fini(struct amdgpu_device *adev)
 		}
 	}
 }
+
+int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+{
+	union drm_amdgpu_vm *args = data;
+	struct amdgpu_device *adev = dev->dev_private;
+	struct amdgpu_fpriv *fpriv = filp->driver_priv;
+
+	switch (args->in.op) {
+	case AMDGPU_VM_OP_RESERVE_VMID:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 0f547c6..62dbace 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -247,5 +247,6 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
 void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
 		      struct amdgpu_bo_va *bo_va);
 void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint64_t vm_size);
+int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp);
 
 #endif
diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index 56b7a2f3..5ee639b 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -51,6 +51,7 @@
 #define DRM_AMDGPU_GEM_OP		0x10
 #define DRM_AMDGPU_GEM_USERPTR		0x11
 #define DRM_AMDGPU_WAIT_FENCES		0x12
+#define DRM_AMDGPU_VM			0x13
 
 /* hybrid specific ioctls */
 #define DRM_AMDGPU_SEM			0x5b
@@ -71,6 +72,7 @@
 #define DRM_IOCTL_AMDGPU_GEM_OP		DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_OP, struct drm_amdgpu_gem_op)
 #define DRM_IOCTL_AMDGPU_GEM_USERPTR	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_USERPTR, struct drm_amdgpu_gem_userptr)
 #define DRM_IOCTL_AMDGPU_WAIT_FENCES	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_WAIT_FENCES, union drm_amdgpu_wait_fences)
+#define DRM_IOCTL_AMDGPU_VM		DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_VM, union drm_amdgpu_vm)
 
 /* hybrid specific ioctls */
 #define DRM_IOCTL_AMDGPU_GEM_DGMA	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_DGMA, struct drm_amdgpu_gem_dgma)
@@ -212,6 +214,24 @@ struct drm_amdgpu_ctx_in {
 	union drm_amdgpu_ctx_out out;
 };
 
+/* vm ioctl */
+#define AMDGPU_VM_OP_RESERVE_VMID	1
+struct drm_amdgpu_vm_in {
+	/** AMDGPU_VM_OP_* */
+	__u32	op;
+	__u32	flags;
+};
+
+struct drm_amdgpu_vm_out {
+	/** For future use, no flags defined so far */
+	__u64	flags;
+};
+
+union drm_amdgpu_vm {
+	struct drm_amdgpu_vm_in in;
+	struct drm_amdgpu_vm_out out;
+};
+
 /* sem related */
 #define AMDGPU_SEM_OP_CREATE_SEM        1
 #define AMDGPU_SEM_OP_WAIT_SEM	        2
-- 
1.9.1



* [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct
       [not found] ` <1493013460-13344-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
  2017-04-24  5:57   ` [PATCH 1/6] drm/amdgpu: add vm ioctl Chunming Zhou
@ 2017-04-24  5:57   ` Chunming Zhou
       [not found]     ` <1493013460-13344-3-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
  2017-04-24  5:57   ` [PATCH 3/6] drm/amdgpu: reserve vmid by vm ioctl Chunming Zhou
                     ` (4 subsequent siblings)
  6 siblings, 1 reply; 29+ messages in thread
From: Chunming Zhou @ 2017-04-24  5:57 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: Id728e20366c8a1ae90d4e901dc80e136e2a613bb
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++++++++++++++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 ++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index eb429c5..acf9102 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2144,10 +2144,12 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 	unsigned ring_instance;
 	struct amdgpu_ring *ring;
 	struct amd_sched_rq *rq;
-	int r;
+	int r, i;
 
 	vm->va = RB_ROOT;
 	vm->client_id = atomic64_inc_return(&adev->vm_manager.client_counter);
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
+		vm->dedicated_vmid[i] = NULL;
 	spin_lock_init(&vm->status_lock);
 	INIT_LIST_HEAD(&vm->invalidated);
 	INIT_LIST_HEAD(&vm->cleared);
@@ -2250,6 +2252,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 {
 	struct amdgpu_bo_va_mapping *mapping, *tmp;
 	bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
+	int i;
 
 	if (vm->is_kfd_vm) {
 		struct amdgpu_vm_id_manager *id_mgr =
@@ -2292,6 +2295,18 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 
 	amdgpu_vm_free_levels(&vm->root);
 	fence_put(vm->last_dir_update);
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
+		struct amdgpu_vm_id_manager *id_mgr =
+			&adev->vm_manager.id_mgr[i];
+
+		mutex_lock(&id_mgr->lock);
+		if (vm->dedicated_vmid[i]) {
+			list_add(&vm->dedicated_vmid[i]->list,
+				 &id_mgr->ids_lru);
+			vm->dedicated_vmid[i] = NULL;
+		}
+		mutex_unlock(&id_mgr->lock);
+	}
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 62dbace..23981ee 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -122,6 +122,8 @@ struct amdgpu_vm {
 
 	/* client id */
 	u64                     client_id;
+	/* dedicated vmid */
+	struct amdgpu_vm_id	*dedicated_vmid[AMDGPU_MAX_VMHUBS];
 	/* each VM will map on CSA */
 	struct amdgpu_bo_va *csa_bo_va;
 
-- 
1.9.1



* [PATCH 3/6] drm/amdgpu: reserve vmid by vm ioctl
       [not found] ` <1493013460-13344-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
  2017-04-24  5:57   ` [PATCH 1/6] drm/amdgpu: add vm ioctl Chunming Zhou
  2017-04-24  5:57   ` [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct Chunming Zhou
@ 2017-04-24  5:57   ` Chunming Zhou
       [not found]     ` <1493013460-13344-4-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
  2017-04-24  5:57   ` [PATCH 4/6] drm/amdgpu: add limitation for dedicated vm number v2 Chunming Zhou
                     ` (3 subsequent siblings)
  6 siblings, 1 reply; 29+ messages in thread
From: Chunming Zhou @ 2017-04-24  5:57 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 56 ++++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index acf9102..5f4dcc9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -397,6 +397,17 @@ static bool amdgpu_vm_had_gpu_reset(struct amdgpu_device *adev,
 		atomic_read(&adev->gpu_reset_counter);
 }
 
+static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm)
+{
+	unsigned vmhub;
+
+	for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
+		if (!vm->dedicated_vmid[vmhub])
+			return false;
+	}
+	return true;
+}
+
 /**
  * amdgpu_vm_grab_id - allocate the next free VMID
  *
@@ -546,6 +557,45 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
 	return r;
 }
 
+static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
+					  struct amdgpu_vm *vm)
+{
+	struct amdgpu_vm_id_manager *id_mgr;
+	struct amdgpu_vm_id *idle;
+	unsigned vmhub;
+	int r;
+
+	for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
+		id_mgr = &adev->vm_manager.id_mgr[vmhub];
+
+		mutex_lock(&id_mgr->lock);
+		/* Select the first entry VMID */
+		idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id,
+					list);
+		list_del_init(&idle->list);
+		vm->dedicated_vmid[vmhub] = idle;
+		mutex_unlock(&id_mgr->lock);
+
+		r = amdgpu_sync_wait(&idle->active);
+		if (r)
+			goto err;
+	}
+
+	return 0;
+err:
+	for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
+		id_mgr = &adev->vm_manager.id_mgr[vmhub];
+
+		mutex_lock(&id_mgr->lock);
+		if (vm->dedicated_vmid[vmhub])
+			list_add(&vm->dedicated_vmid[vmhub]->list,
+				 &id_mgr->ids_lru);
+		vm->dedicated_vmid[vmhub] = NULL;
+		mutex_unlock(&id_mgr->lock);
+	}
+	return r;
+}
+
 static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
 {
 	struct amdgpu_device *adev = ring->adev;
@@ -2379,9 +2429,15 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 	union drm_amdgpu_vm *args = data;
 	struct amdgpu_device *adev = dev->dev_private;
 	struct amdgpu_fpriv *fpriv = filp->driver_priv;
+	int r;
 
 	switch (args->in.op) {
 	case AMDGPU_VM_OP_RESERVE_VMID:
+		if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm)) {
+			r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm);
+			if (r)
+				return r;
+		}
 		break;
 	default:
 		return -EINVAL;
-- 
1.9.1



* [PATCH 4/6] drm/amdgpu: add limitation for dedicated vm number v2
       [not found] ` <1493013460-13344-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
                     ` (2 preceding siblings ...)
  2017-04-24  5:57   ` [PATCH 3/6] drm/amdgpu: reserve vmid by vm ioctl Chunming Zhou
@ 2017-04-24  5:57   ` Chunming Zhou
       [not found]     ` <1493013460-13344-5-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
  2017-04-24  5:57   ` [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2 Chunming Zhou
                     ` (2 subsequent siblings)
  6 siblings, 1 reply; 29+ messages in thread
From: Chunming Zhou @ 2017-04-24  5:57 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

v2: move #define to amdgpu_vm.h

Change-Id: Ie5958cf6dbdc1c8278e61d9158483472d6f5c6e3
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h        | 1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c     | 9 +++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h     | 2 ++
 4 files changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 0831cd2..ba9d3d0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1591,6 +1591,7 @@ struct amdgpu_device {
 	struct amdgpu_dummy_page	dummy_page;
 	struct amdgpu_vm_manager	vm_manager;
 	struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
+	atomic_t			reserved_vmid;
 
 	/* memory management */
 	struct amdgpu_mman		mman;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index a175dfd..9993085 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1889,6 +1889,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	adev->vm_manager.vm_pte_num_rings = 0;
 	adev->gart.gart_funcs = NULL;
 	adev->fence_context = kcl_fence_context_alloc(AMDGPU_MAX_RINGS);
+	atomic_set(&adev->reserved_vmid, 0);
 
 	adev->smc_rreg = &amdgpu_invalid_rreg;
 	adev->smc_wreg = &amdgpu_invalid_wreg;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 5f4dcc9..f7113b9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -565,6 +565,10 @@ static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
 	unsigned vmhub;
 	int r;
 
+	if (atomic_read(&adev->reserved_vmid) >= AMDGPU_VM_MAX_RESERVED_VMID) {
+		DRM_ERROR("Over limitation of reserved vmid\n");
+		return -EINVAL;
+	}
 	for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
 		id_mgr = &adev->vm_manager.id_mgr[vmhub];
 
@@ -580,6 +584,7 @@ static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
 		if (r)
 			goto err;
 	}
+	atomic_inc(&adev->reserved_vmid);
 
 	return 0;
 err:
@@ -2302,6 +2307,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 {
 	struct amdgpu_bo_va_mapping *mapping, *tmp;
 	bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
+	bool dedicated = false;
 	int i;
 
 	if (vm->is_kfd_vm) {
@@ -2354,9 +2360,12 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 			list_add(&vm->dedicated_vmid[i]->list,
 				 &id_mgr->ids_lru);
 			vm->dedicated_vmid[i] = NULL;
+			dedicated = true;
 		}
 		mutex_unlock(&id_mgr->lock);
 	}
+	if (dedicated)
+		atomic_dec(&adev->reserved_vmid);
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 23981ee..2d3e6ed 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -83,6 +83,8 @@
 
 /* hardcode that limit for now */
 #define AMDGPU_VA_RESERVED_SIZE			(8 << 20)
+/* max vmids dedicated for process */
+#define AMDGPU_VM_MAX_RESERVED_VMID 2
 
 struct amdgpu_vm_pt {
 	struct amdgpu_bo	*bo;
-- 
1.9.1



* [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
       [not found] ` <1493013460-13344-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
                     ` (3 preceding siblings ...)
  2017-04-24  5:57   ` [PATCH 4/6] drm/amdgpu: add limitation for dedicated vm number v2 Chunming Zhou
@ 2017-04-24  5:57   ` Chunming Zhou
  2017-04-24  5:57   ` [PATCH 6/6] drm/amdgpu: bump module version for reserved vmid Chunming Zhou
  2017-04-25  9:07   ` [PATCH 0/6] *** Dedicated vmid per process v2 *** zhoucm1
  6 siblings, 0 replies; 29+ messages in thread
From: Chunming Zhou @ 2017-04-24  5:57 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

v2: sync wait only when a flush is needed

Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 ++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index f7113b9..2c0b453 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -408,6 +408,63 @@ static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm)
 	return true;
 }
 
+static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
+					 struct amdgpu_ring *ring,
+					 struct amdgpu_sync *sync,
+					 struct fence *fence,
+					 struct amdgpu_job *job)
+{
+	struct amdgpu_device *adev = ring->adev;
+	unsigned vmhub = ring->funcs->vmhub;
+	struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
+	struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
+	struct fence *updates = sync->last_vm_update;
+	int r = 0;
+	struct fence *flushed, *tmp;
+	bool needs_flush = false;
+
+	mutex_lock(&id_mgr->lock);
+	if (amdgpu_vm_had_gpu_reset(adev, id))
+		needs_flush = true;
+
+	flushed  = id->flushed_updates;
+	if (updates && (!flushed || updates->context != flushed->context ||
+			fence_is_later(updates, flushed)))
+		needs_flush = true;
+	if (needs_flush) {
+		tmp = amdgpu_sync_get_fence(&id->active);
+		if (tmp) {
+			r = amdgpu_sync_fence(adev, sync, tmp);
+			fence_put(tmp);
+			mutex_unlock(&id_mgr->lock);
+			return r;
+		}
+	}
+
+	/* Good we can use this VMID. Remember this submission as
+	* user of the VMID.
+	*/
+	r = amdgpu_sync_fence(ring->adev, &id->active, fence);
+	if (r)
+		goto out;
+
+	if (updates && (!flushed || updates->context != flushed->context ||
+			fence_is_later(updates, flushed))) {
+		fence_put(id->flushed_updates);
+		id->flushed_updates = fence_get(updates);
+	}
+	id->pd_gpu_addr = job->vm_pd_addr;
+	id->current_gpu_reset_count = atomic_read(&adev->gpu_reset_counter);
+	atomic64_set(&id->owner, vm->client_id);
+	job->vm_needs_flush = needs_flush;
+
+	job->vm_id = id - id_mgr->ids;
+	trace_amdgpu_vm_grab_id(vm, ring, job);
+out:
+	mutex_unlock(&id_mgr->lock);
+	return r;
+}
+
 /**
  * amdgpu_vm_grab_id - allocate the next free VMID
  *
@@ -432,6 +489,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
 	unsigned i;
 	int r = 0;
 
+	if (amdgpu_vm_dedicated_vmid_ready(vm))
+		return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
+						     fence, job);
+
 	fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_KERNEL);
 	if (!fences)
 		return -ENOMEM;
-- 
1.9.1



* [PATCH 6/6] drm/amdgpu: bump module version for reserved vmid
       [not found] ` <1493013460-13344-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
                     ` (4 preceding siblings ...)
  2017-04-24  5:57   ` [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2 Chunming Zhou
@ 2017-04-24  5:57   ` Chunming Zhou
  2017-04-25  9:07   ` [PATCH 0/6] *** Dedicated vmid per process v2 *** zhoucm1
  6 siblings, 0 replies; 29+ messages in thread
From: Chunming Zhou @ 2017-04-24  5:57 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: I1065e0430ed44f7ee6c29214b72e35a7343ea02b
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 55322b4..6799829 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -64,9 +64,10 @@
  * - 3.12.0 - Add query for double offchip LDS buffers
  * - 3.13.0 - Add PRT support
  * - 3.14.0 - Fix race in amdgpu_ctx_get_fence() and note new functionality
+ * - 3.15.0 - Add reserved vmid support
  */
 #define KMS_DRIVER_MAJOR	3
-#define KMS_DRIVER_MINOR	14
+#define KMS_DRIVER_MINOR	15
 #define KMS_DRIVER_PATCHLEVEL	0
 
 int amdgpu_vram_limit = 0;
-- 
1.9.1



* Re: [PATCH 0/6] *** Dedicated vmid per process v2 ***
       [not found] ` <1493013460-13344-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
                     ` (5 preceding siblings ...)
  2017-04-24  5:57   ` [PATCH 6/6] drm/amdgpu: bump module version for reserved vmid Chunming Zhou
@ 2017-04-25  9:07   ` zhoucm1
       [not found]     ` <58FF11E7.6040607-5C7GfCeVMHo@public.gmane.org>
  6 siblings, 1 reply; 29+ messages in thread
From: zhoucm1 @ 2017-04-25  9:07 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

ping... could anyone review this series?

Thanks,
David Zhou

On 2017-04-24 13:57, Chunming Zhou wrote:
> The current kernel implementation, which grabs the idle VMID from pool when emitting the job may:
>
>      The back-to-back submission from one process could use different VMID.
>      The submission to different queues from single process could use different VMID
>
> It works well in most case but cannot work for the SQ thread trace capture.
>
> The VMID for the submission that set the {SQTT}_BASE, which refers to the address of the trace buffer, is stored in shader engine.
>
> If the profiling application have to use different VMIDs to submit IBs in its life cycle:
>
>      Some trace is not captured since it actually uses different VMID to submit jobs.
>      Some part of captured trace may come from different application since they are accidentally uses the owner’s VMID to submit jobs.
>
> V2:
> 1. address Christian's comments:
> 	a. drop context flags for tag process, instead, add vm ioctl.
> 	b. change order of patches.
> 	c. sync waiting only when vm flush needs.
>
> 2. address Alex's comments;
> 	bump module version
>
> Chunming Zhou (6):
>    drm/amdgpu: add vm ioctl
>    drm/amdgpu: add dedicated vmid field in vm struct
>    drm/amdgpu: reserve vmid by vm ioctl
>    drm/amdgpu: add limitation for dedicated vm number v2
>    drm/amdgpu: implement grab dedicated vmid V2
>    drm/amdgpu: bump module verion for reserved vmid
>
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h        |   1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |   1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c    |   3 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c    |   1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c     | 159 ++++++++++++++++++++++++++++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h     |   5 +
>   include/uapi/drm/amdgpu_drm.h              |  20 ++++
>   7 files changed, 188 insertions(+), 2 deletions(-)
>



* Re: [PATCH 0/6] *** Dedicated vmid per process v2 ***
       [not found]     ` <58FF11E7.6040607-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-25 10:02       ` Christian König
  0 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2017-04-25 10:02 UTC (permalink / raw)
  To: zhoucm1, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On my TODO list, but behind about two or three other items I need to 
tackle first.

Christian.

On 2017-04-25 at 11:07, zhoucm1 wrote:
> ping... anyone can give the review?
>
> Thanks,
> David Zhou
>
> On 2017-04-24 13:57, Chunming Zhou wrote:
>> The current kernel implementation, which grabs the idle VMID from 
>> pool when emitting the job may:
>>
>>      The back-to-back submission from one process could use different 
>> VMID.
>>      The submission to different queues from single process could use 
>> different VMID
>>
>> It works well in most case but cannot work for the SQ thread trace 
>> capture.
>>
>> The VMID for the submission that set the {SQTT}_BASE, which refers to 
>> the address of the trace buffer, is stored in shader engine.
>>
>> If the profiling application have to use different VMIDs to submit 
>> IBs in its life cycle:
>>
>>      Some trace is not captured since it actually uses different VMID 
>> to submit jobs.
>>      Some part of captured trace may come from different application 
>> since they are accidentally uses the owner’s VMID to submit jobs.
>>
>> V2:
>> 1. address Christian's comments:
>>     a. drop context flags for tag process, instead, add vm ioctl.
>>     b. change order of patches.
>>     c. sync waiting only when vm flush needs.
>>
>> 2. address Alex's comments;
>>     bump module version
>>
>> Chunming Zhou (6):
>>    drm/amdgpu: add vm ioctl
>>    drm/amdgpu: add dedicated vmid field in vm struct
>>    drm/amdgpu: reserve vmid by vm ioctl
>>    drm/amdgpu: add limitation for dedicated vm number v2
>>    drm/amdgpu: implement grab dedicated vmid V2
>>    drm/amdgpu: bump module verion for reserved vmid
>>
>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h        |   1 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |   1 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c    |   3 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c    |   1 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c     | 159 
>> ++++++++++++++++++++++++++++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h     |   5 +
>>   include/uapi/drm/amdgpu_drm.h              |  20 ++++
>>   7 files changed, 188 insertions(+), 2 deletions(-)
>>
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx




* Re: [PATCH 1/6] drm/amdgpu: add vm ioctl
       [not found]     ` <1493013460-13344-2-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  7:09       ` Zhang, Jerry (Junwei)
       [not found]         ` <590047A5.1070807-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Zhang, Jerry (Junwei) @ 2017-04-26  7:09 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 04/24/2017 01:57 PM, Chunming Zhou wrote:
> It will be used for reserving vmid.
>
> Change-Id: Ib7169ea999690c8e82d0dcbccdd2d97760c0270a
> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |  1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 16 ++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  1 +
>   include/uapi/drm/amdgpu_drm.h           | 20 ++++++++++++++++++++
>   4 files changed, 38 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> index cad589a..7004e6c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> @@ -1051,6 +1051,7 @@ int amdgpu_get_vblank_timestamp_kms(struct drm_device *dev, unsigned int pipe,
>   const struct drm_ioctl_desc amdgpu_ioctls_kms[] = {
>   	DRM_IOCTL_DEF_DRV(AMDGPU_GEM_CREATE, amdgpu_gem_create_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
>   	DRM_IOCTL_DEF_DRV(AMDGPU_CTX, amdgpu_ctx_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF_DRV(AMDGPU_VM, amdgpu_vm_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
>   	DRM_IOCTL_DEF_DRV(AMDGPU_BO_LIST, amdgpu_bo_list_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
>   	/* KMS */
>   	DRM_IOCTL_DEF_DRV(AMDGPU_GEM_MMAP, amdgpu_gem_mmap_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index f804d38..eb429c5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2358,3 +2358,19 @@ void amdgpu_vm_manager_fini(struct amdgpu_device *adev)
>   		}
>   	}
>   }
> +
> +int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
> +{
> +	union drm_amdgpu_vm *args = data;
> +	struct amdgpu_device *adev = dev->dev_private;
> +	struct amdgpu_fpriv *fpriv = filp->driver_priv;
> +
> +	switch (args->in.op) {
> +	case AMDGPU_VM_OP_RESERVE_VMID:

What do you think about adding an op to release the reserved vmid as well?

Jerry
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index 0f547c6..62dbace 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -247,5 +247,6 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
>   void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
>   		      struct amdgpu_bo_va *bo_va);
>   void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint64_t vm_size);
> +int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp);
>
>   #endif
> diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
> index 56b7a2f3..5ee639b 100644
> --- a/include/uapi/drm/amdgpu_drm.h
> +++ b/include/uapi/drm/amdgpu_drm.h
> @@ -51,6 +51,7 @@
>   #define DRM_AMDGPU_GEM_OP		0x10
>   #define DRM_AMDGPU_GEM_USERPTR		0x11
>   #define DRM_AMDGPU_WAIT_FENCES		0x12
> +#define DRM_AMDGPU_VM			0x13
>
>   /* hybrid specific ioctls */
>   #define DRM_AMDGPU_SEM			0x5b
> @@ -71,6 +72,7 @@
>   #define DRM_IOCTL_AMDGPU_GEM_OP		DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_OP, struct drm_amdgpu_gem_op)
>   #define DRM_IOCTL_AMDGPU_GEM_USERPTR	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_USERPTR, struct drm_amdgpu_gem_userptr)
>   #define DRM_IOCTL_AMDGPU_WAIT_FENCES	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_WAIT_FENCES, union drm_amdgpu_wait_fences)
> +#define DRM_IOCTL_AMDGPU_VM		DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_VM, union drm_amdgpu_vm)
>
>   /* hybrid specific ioctls */
>   #define DRM_IOCTL_AMDGPU_GEM_DGMA	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_DGMA, struct drm_amdgpu_gem_dgma)
> @@ -212,6 +214,24 @@ struct drm_amdgpu_ctx_in {
>   	union drm_amdgpu_ctx_out out;
>   };
>
> +/* vm ioctl */
> +#define AMDGPU_VM_OP_RESERVE_VMID	1
> +struct drm_amdgpu_vm_in {
> +	/** AMDGPU_VM_OP_* */
> +	__u32	op;
> +	__u32	flags;
> +};
> +
> +struct drm_amdgpu_vm_out {
> +	/** For future use, no flags defined so far */
> +	__u64	flags;
> +};
> +
> +union drm_amdgpu_vm {
> +	struct drm_amdgpu_vm_in in;
> +	struct drm_amdgpu_vm_out out;
> +};
> +
>   /* sem related */
>   #define AMDGPU_SEM_OP_CREATE_SEM        1
>   #define AMDGPU_SEM_OP_WAIT_SEM	        2
>
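
Jerry's inline question above (adding an op to release the reserved VMID) could be addressed with a second op in the same ioctl. The following is only a standalone sketch of that dispatch shape; the `AMDGPU_VM_OP_UNRESERVE_VMID` name and the unreserve handler are hypothetical and not part of the posted patch:

```c
#include <assert.h>
#include <errno.h>

/* Op codes: RESERVE mirrors the posted patch, UNRESERVE is the
 * hypothetical release op Jerry asks about. */
#define AMDGPU_VM_OP_RESERVE_VMID   1
#define AMDGPU_VM_OP_UNRESERVE_VMID 2

struct vm_state {
	int reserved;	/* stand-in for vm->dedicated_vmid[] being non-NULL */
};

/* Simplified stand-in for amdgpu_vm_ioctl(): keep reserve/unreserve
 * symmetric so userspace can give the VMID back before the VM dies. */
static int vm_ioctl(struct vm_state *vm, unsigned int op)
{
	switch (op) {
	case AMDGPU_VM_OP_RESERVE_VMID:
		vm->reserved = 1;	/* real code: allocate the dedicated VMIDs */
		return 0;
	case AMDGPU_VM_OP_UNRESERVE_VMID:
		if (!vm->reserved)
			return -EINVAL;
		vm->reserved = 0;	/* real code: return the ID to id_mgr->ids_lru */
		return 0;
	default:
		return -EINVAL;
	}
}
```

With a release op, the cleanup currently done only in amdgpu_vm_fini() would also be reachable on demand from userspace.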
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* Re: [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct
       [not found]     ` <1493013460-13344-3-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  7:09       ` Zhang, Jerry (Junwei)
       [not found]         ` <590047B3.7050109-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Zhang, Jerry (Junwei) @ 2017-04-26  7:09 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 04/24/2017 01:57 PM, Chunming Zhou wrote:
> Change-Id: Id728e20366c8a1ae90d4e901dc80e136e2a613bb
> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++++++++++++++++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 ++
>   2 files changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index eb429c5..acf9102 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2144,10 +2144,12 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>   	unsigned ring_instance;
>   	struct amdgpu_ring *ring;
>   	struct amd_sched_rq *rq;
> -	int r;
> +	int r, i;
>
>   	vm->va = RB_ROOT;
>   	vm->client_id = atomic64_inc_return(&adev->vm_manager.client_counter);
> +	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
> +		vm->dedicated_vmid[i] = NULL;

Maybe it's better to give it a consistent name, such as resv_vmid or something along those lines.

Jerry
>   	spin_lock_init(&vm->status_lock);
>   	INIT_LIST_HEAD(&vm->invalidated);
>   	INIT_LIST_HEAD(&vm->cleared);
> @@ -2250,6 +2252,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>   {
>   	struct amdgpu_bo_va_mapping *mapping, *tmp;
>   	bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
> +	int i;
>
>   	if (vm->is_kfd_vm) {
>   		struct amdgpu_vm_id_manager *id_mgr =
> @@ -2292,6 +2295,18 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>
>   	amdgpu_vm_free_levels(&vm->root);
>   	fence_put(vm->last_dir_update);
> +	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
> +		struct amdgpu_vm_id_manager *id_mgr =
> +			&adev->vm_manager.id_mgr[i];
> +
> +		mutex_lock(&id_mgr->lock);
> +		if (vm->dedicated_vmid[i]) {
> +			list_add(&vm->dedicated_vmid[i]->list,
> +				 &id_mgr->ids_lru);
> +			vm->dedicated_vmid[i] = NULL;
> +		}
> +		mutex_unlock(&id_mgr->lock);
> +	}
>   }
>
>   /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index 62dbace..23981ee 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -122,6 +122,8 @@ struct amdgpu_vm {
>
>   	/* client id */
>   	u64                     client_id;
> +	/* dedicated vmid */
> +	struct amdgpu_vm_id	*dedicated_vmid[AMDGPU_MAX_VMHUBS];
>   	/* each VM will map on CSA */
>   	struct amdgpu_bo_va *csa_bo_va;
>
>

* Re: [PATCH 3/6] drm/amdgpu: reserve vmid by vm ioctl
       [not found]     ` <1493013460-13344-4-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  7:09       ` Zhang, Jerry (Junwei)
       [not found]         ` <590047C6.50005-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Zhang, Jerry (Junwei) @ 2017-04-26  7:09 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 04/24/2017 01:57 PM, Chunming Zhou wrote:
> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 56 ++++++++++++++++++++++++++++++++++
>   1 file changed, 56 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index acf9102..5f4dcc9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -397,6 +397,17 @@ static bool amdgpu_vm_had_gpu_reset(struct amdgpu_device *adev,
>   		atomic_read(&adev->gpu_reset_counter);
>   }
>
> +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm)
> +{
> +	unsigned vmhub;
> +
> +	for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
> +		if (!vm->dedicated_vmid[vmhub])
> +			return false;

Just a note:
For pre-gmc-v9 ASICs this will seemingly always allocate one extra dedicated/reserved
VMID for the second, unused vmhub; it will never actually be used anyway.

Jerry
> +	}
> +	return true;
> +}
> +
>   /**
>    * amdgpu_vm_grab_id - allocate the next free VMID
>    *
> @@ -546,6 +557,45 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>   	return r;
>   }
>
> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
> +					  struct amdgpu_vm *vm)
> +{
> +	struct amdgpu_vm_id_manager *id_mgr;
> +	struct amdgpu_vm_id *idle;
> +	unsigned vmhub;
> +	int r;
> +
> +	for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
> +		id_mgr = &adev->vm_manager.id_mgr[vmhub];
> +
> +		mutex_lock(&id_mgr->lock);
> +		/* Select the first entry VMID */
> +		idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id,
> +					list);
> +		list_del_init(&idle->list);
> +		vm->dedicated_vmid[vmhub] = idle;
> +		mutex_unlock(&id_mgr->lock);
> +
> +		r = amdgpu_sync_wait(&idle->active);
> +		if (r)
> +			goto err;
> +	}
> +
> +	return 0;
> +err:
> +	for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
> +		id_mgr = &adev->vm_manager.id_mgr[vmhub];
> +
> +		mutex_lock(&id_mgr->lock);
> +		if (vm->dedicated_vmid[vmhub])
> +			list_add(&vm->dedicated_vmid[vmhub]->list,
> +				 &id_mgr->ids_lru);
> +		vm->dedicated_vmid[vmhub] = NULL;
> +		mutex_unlock(&id_mgr->lock);
> +	}
> +	return r;
> +}
> +
>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>   {
>   	struct amdgpu_device *adev = ring->adev;
> @@ -2379,9 +2429,15 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>   	union drm_amdgpu_vm *args = data;
>   	struct amdgpu_device *adev = dev->dev_private;
>   	struct amdgpu_fpriv *fpriv = filp->driver_priv;
> +	int r;
>
>   	switch (args->in.op) {
>   	case AMDGPU_VM_OP_RESERVE_VMID:
> +		if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm)) {
> +			r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm);

Could you change the return value of this ready-check function? The usual
pattern is much easier to follow:
if (sth_ready())
     do_sth();


> +			if (r)
> +				return r;
> +		}
>   		break;
>   	default:
>   		return -EINVAL;
>
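
Jerry's suggestion above amounts to phrasing the check positively so the caller reads as "if ready, skip the allocation". A minimal sketch of that shape (function names follow the patch, but the refactor itself and the stand-in bodies are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

#define AMDGPU_MAX_VMHUBS 2

struct amdgpu_vm {
	void *dedicated_vmid[AMDGPU_MAX_VMHUBS];
};

/* Ready check phrased positively, as Jerry suggests: true only when
 * every hub already holds a dedicated VMID. */
static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm)
{
	unsigned vmhub;

	for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++)
		if (!vm->dedicated_vmid[vmhub])
			return false;
	return true;
}

/* Caller then follows the "if (sth_ready()) do_sth();" shape. */
static int reserve_vmid(struct amdgpu_vm *vm)
{
	unsigned i;

	if (amdgpu_vm_dedicated_vmid_ready(vm))
		return 0;		/* already reserved, nothing to do */
	/* real code: amdgpu_vm_alloc_dedicated_vmid(adev, vm) */
	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
		vm->dedicated_vmid[i] = (void *)1; /* stand-in for an allocated ID */
	return 0;
}
```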

* Re: [PATCH 4/6] drm/amdgpu: add limitation for dedicated vm number v2
       [not found]     ` <1493013460-13344-5-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  7:10       ` Zhang, Jerry (Junwei)
       [not found]         ` <590047D0.7080901-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Zhang, Jerry (Junwei) @ 2017-04-26  7:10 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 04/24/2017 01:57 PM, Chunming Zhou wrote:
> v2: move #define to amdgpu_vm.h
>
> Change-Id: Ie5958cf6dbdc1c8278e61d9158483472d6f5c6e3
> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h        | 1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c     | 9 +++++++++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h     | 2 ++
>   4 files changed, 13 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 0831cd2..ba9d3d0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -1591,6 +1591,7 @@ struct amdgpu_device {
>   	struct amdgpu_dummy_page	dummy_page;
>   	struct amdgpu_vm_manager	vm_manager;
>   	struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
> +	atomic_t			reserved_vmid;

Perhaps it would be more reasonable for this to belong to vm_manager.

Jerry

>
>   	/* memory management */
>   	struct amdgpu_mman		mman;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index a175dfd..9993085 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -1889,6 +1889,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>   	adev->vm_manager.vm_pte_num_rings = 0;
>   	adev->gart.gart_funcs = NULL;
>   	adev->fence_context = kcl_fence_context_alloc(AMDGPU_MAX_RINGS);
> +	atomic_set(&adev->reserved_vmid, 0);
>
>   	adev->smc_rreg = &amdgpu_invalid_rreg;
>   	adev->smc_wreg = &amdgpu_invalid_wreg;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 5f4dcc9..f7113b9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -565,6 +565,10 @@ static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>   	unsigned vmhub;
>   	int r;
>
> +	if (atomic_read(&adev->reserved_vmid) >= AMDGPU_VM_MAX_RESERVED_VMID) {
> +		DRM_ERROR("Over limitation of reserved vmid\n");
> +		return -EINVAL;
> +	}
>   	for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>   		id_mgr = &adev->vm_manager.id_mgr[vmhub];
>
> @@ -580,6 +584,7 @@ static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>   		if (r)
>   			goto err;
>   	}
> +	atomic_inc(&adev->reserved_vmid);
>
>   	return 0;
>   err:
> @@ -2302,6 +2307,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>   {
>   	struct amdgpu_bo_va_mapping *mapping, *tmp;
>   	bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
> +	bool dedicated = false;
>   	int i;
>
>   	if (vm->is_kfd_vm) {
> @@ -2354,9 +2360,12 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>   			list_add(&vm->dedicated_vmid[i]->list,
>   				 &id_mgr->ids_lru);
>   			vm->dedicated_vmid[i] = NULL;
> +			dedicated = true;
>   		}
>   		mutex_unlock(&id_mgr->lock);
>   	}
> +	if (dedicated)
> +		atomic_dec(&adev->reserved_vmid);
>   }
>
>   /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index 23981ee..2d3e6ed 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -83,6 +83,8 @@
>
>   /* hardcode that limit for now */
>   #define AMDGPU_VA_RESERVED_SIZE			(8 << 20)
> +/* max vmids dedicated for process */
> +#define AMDGPU_VM_MAX_RESERVED_VMID 2
>
>   struct amdgpu_vm_pt {
>   	struct amdgpu_bo	*bo;
>

* Re: [PATCH 1/6] drm/amdgpu: add vm ioctl
       [not found]         ` <590047A5.1070807-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  8:47           ` Christian König
  0 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2017-04-26  8:47 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei),
	Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Am 26.04.2017 um 09:09 schrieb Zhang, Jerry (Junwei):
> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>> It will be used for reserving vmid.
>>
>> Change-Id: Ib7169ea999690c8e82d0dcbccdd2d97760c0270a
>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |  1 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 16 ++++++++++++++++
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  1 +
>>   include/uapi/drm/amdgpu_drm.h           | 20 ++++++++++++++++++++
>>   4 files changed, 38 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
>> index cad589a..7004e6c 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
>> @@ -1051,6 +1051,7 @@ int amdgpu_get_vblank_timestamp_kms(struct 
>> drm_device *dev, unsigned int pipe,
>>   const struct drm_ioctl_desc amdgpu_ioctls_kms[] = {
>>       DRM_IOCTL_DEF_DRV(AMDGPU_GEM_CREATE, amdgpu_gem_create_ioctl, 
>> DRM_AUTH|DRM_RENDER_ALLOW),
>>       DRM_IOCTL_DEF_DRV(AMDGPU_CTX, amdgpu_ctx_ioctl, 
>> DRM_AUTH|DRM_RENDER_ALLOW),
>> +    DRM_IOCTL_DEF_DRV(AMDGPU_VM, amdgpu_vm_ioctl, 
>> DRM_AUTH|DRM_RENDER_ALLOW),
>>       DRM_IOCTL_DEF_DRV(AMDGPU_BO_LIST, amdgpu_bo_list_ioctl, 
>> DRM_AUTH|DRM_RENDER_ALLOW),
>>       /* KMS */
>>       DRM_IOCTL_DEF_DRV(AMDGPU_GEM_MMAP, amdgpu_gem_mmap_ioctl, 
>> DRM_AUTH|DRM_RENDER_ALLOW),
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index f804d38..eb429c5 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -2358,3 +2358,19 @@ void amdgpu_vm_manager_fini(struct 
>> amdgpu_device *adev)
>>           }
>>       }
>>   }
>> +
>> +int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct 
>> drm_file *filp)
>> +{
>> +    union drm_amdgpu_vm *args = data;
>> +    struct amdgpu_device *adev = dev->dev_private;
>> +    struct amdgpu_fpriv *fpriv = filp->driver_priv;
>> +
>> +    switch (args->in.op) {
>> +    case AMDGPU_VM_OP_RESERVE_VMID:
>
> How do you think to add an op to release the reserved vmid as well?

Yeah, that's a really good point.

Additional to Jerrys comment you either need to reorder the patches or 
return an error here as long as the implementation is missing.

Otherwise you just ignore the IOCTL until it is really implemented, and
that isn't really an option.

Christian.
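
Christian's point is that, until patch 3 wires up the actual reservation, the stub should fail loudly rather than silently succeed. A sketch of that interim shape (the choice of -ENOSYS here is an assumption; the driver might equally use -EINVAL):

```c
#include <assert.h>
#include <errno.h>

#define AMDGPU_VM_OP_RESERVE_VMID 1

/* Stub dispatch per Christian's comment: until the reservation is
 * actually implemented, report failure instead of pretending the
 * op succeeded. */
static int amdgpu_vm_ioctl_stub(unsigned int op)
{
	switch (op) {
	case AMDGPU_VM_OP_RESERVE_VMID:
		return -ENOSYS;	/* placeholder until a later patch implements it */
	default:
		return -EINVAL;
	}
}
```

Alternatively, reordering the series so the ioctl lands after the implementation avoids the stub entirely, which is what the cover letter's v2 changelog suggests was the intent.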

>
> Jerry
>> +        break;
>> +    default:
>> +        return -EINVAL;
>> +    }
>> +
>> +    return 0;
>> +}
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> index 0f547c6..62dbace 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> @@ -247,5 +247,6 @@ int amdgpu_vm_bo_clear_mappings(struct 
>> amdgpu_device *adev,
>>   void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
>>                 struct amdgpu_bo_va *bo_va);
>>   void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint64_t 
>> vm_size);
>> +int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct 
>> drm_file *filp);
>>
>>   #endif
>> diff --git a/include/uapi/drm/amdgpu_drm.h 
>> b/include/uapi/drm/amdgpu_drm.h
>> index 56b7a2f3..5ee639b 100644
>> --- a/include/uapi/drm/amdgpu_drm.h
>> +++ b/include/uapi/drm/amdgpu_drm.h
>> @@ -51,6 +51,7 @@
>>   #define DRM_AMDGPU_GEM_OP        0x10
>>   #define DRM_AMDGPU_GEM_USERPTR        0x11
>>   #define DRM_AMDGPU_WAIT_FENCES        0x12
>> +#define DRM_AMDGPU_VM            0x13
>>
>>   /* hybrid specific ioctls */
>>   #define DRM_AMDGPU_SEM            0x5b
>> @@ -71,6 +72,7 @@
>>   #define DRM_IOCTL_AMDGPU_GEM_OP DRM_IOWR(DRM_COMMAND_BASE + 
>> DRM_AMDGPU_GEM_OP, struct drm_amdgpu_gem_op)
>>   #define DRM_IOCTL_AMDGPU_GEM_USERPTR DRM_IOWR(DRM_COMMAND_BASE + 
>> DRM_AMDGPU_GEM_USERPTR, struct drm_amdgpu_gem_userptr)
>>   #define DRM_IOCTL_AMDGPU_WAIT_FENCES DRM_IOWR(DRM_COMMAND_BASE + 
>> DRM_AMDGPU_WAIT_FENCES, union drm_amdgpu_wait_fences)
>> +#define DRM_IOCTL_AMDGPU_VM        DRM_IOWR(DRM_COMMAND_BASE + 
>> DRM_AMDGPU_VM, union drm_amdgpu_vm)
>>
>>   /* hybrid specific ioctls */
>>   #define DRM_IOCTL_AMDGPU_GEM_DGMA    DRM_IOWR(DRM_COMMAND_BASE + 
>> DRM_AMDGPU_GEM_DGMA, struct drm_amdgpu_gem_dgma)
>> @@ -212,6 +214,24 @@ struct drm_amdgpu_ctx_in {
>>       union drm_amdgpu_ctx_out out;
>>   };
>>
>> +/* vm ioctl */
>> +#define AMDGPU_VM_OP_RESERVE_VMID    1
>> +struct drm_amdgpu_vm_in {
>> +    /** AMDGPU_VM_OP_* */
>> +    __u32    op;
>> +    __u32    flags;
>> +};
>> +
>> +struct drm_amdgpu_vm_out {
>> +    /** For future use, no flags defined so far */
>> +    __u64    flags;
>> +};
>> +
>> +union drm_amdgpu_vm {
>> +    struct drm_amdgpu_vm_in in;
>> +    struct drm_amdgpu_vm_out out;
>> +};
>> +
>>   /* sem related */
>>   #define AMDGPU_SEM_OP_CREATE_SEM        1
>>   #define AMDGPU_SEM_OP_WAIT_SEM            2
>>

* Re: [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct
       [not found]         ` <590047B3.7050109-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  8:49           ` Christian König
       [not found]             ` <291734a9-5306-f3e3-219c-c53cbf433358-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Christian König @ 2017-04-26  8:49 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei),
	Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Am 26.04.2017 um 09:09 schrieb Zhang, Jerry (Junwei):
> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>> Change-Id: Id728e20366c8a1ae90d4e901dc80e136e2a613bb
>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++++++++++++++++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 ++
>>   2 files changed, 18 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index eb429c5..acf9102 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -2144,10 +2144,12 @@ int amdgpu_vm_init(struct amdgpu_device 
>> *adev, struct amdgpu_vm *vm,
>>       unsigned ring_instance;
>>       struct amdgpu_ring *ring;
>>       struct amd_sched_rq *rq;
>> -    int r;
>> +    int r, i;
>>
>>       vm->va = RB_ROOT;
>>       vm->client_id = 
>> atomic64_inc_return(&adev->vm_manager.client_counter);
>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>> +        vm->dedicated_vmid[i] = NULL;
>
> Maybe it's better to give it a consistent name as resv_vmid, or 
> anything like that.

Yes, agree. And if I'm not completely mistaken, we should still only
apply that to the GFX hub on Vega10.

Christian.

>
> Jerry
>> spin_lock_init(&vm->status_lock);
>>       INIT_LIST_HEAD(&vm->invalidated);
>>       INIT_LIST_HEAD(&vm->cleared);
>> @@ -2250,6 +2252,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, 
>> struct amdgpu_vm *vm)
>>   {
>>       struct amdgpu_bo_va_mapping *mapping, *tmp;
>>       bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
>> +    int i;
>>
>>       if (vm->is_kfd_vm) {
>>           struct amdgpu_vm_id_manager *id_mgr =
>> @@ -2292,6 +2295,18 @@ void amdgpu_vm_fini(struct amdgpu_device 
>> *adev, struct amdgpu_vm *vm)
>>
>>       amdgpu_vm_free_levels(&vm->root);
>>       fence_put(vm->last_dir_update);
>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>> +        struct amdgpu_vm_id_manager *id_mgr =
>> +            &adev->vm_manager.id_mgr[i];
>> +
>> +        mutex_lock(&id_mgr->lock);
>> +        if (vm->dedicated_vmid[i]) {
>> +            list_add(&vm->dedicated_vmid[i]->list,
>> +                 &id_mgr->ids_lru);
>> +            vm->dedicated_vmid[i] = NULL;
>> +        }
>> +        mutex_unlock(&id_mgr->lock);
>> +    }
>>   }
>>
>>   /**
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> index 62dbace..23981ee 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> @@ -122,6 +122,8 @@ struct amdgpu_vm {
>>
>>       /* client id */
>>       u64                     client_id;
>> +    /* dedicated vmid */
>> +    struct amdgpu_vm_id    *dedicated_vmid[AMDGPU_MAX_VMHUBS];
>>       /* each VM will map on CSA */
>>       struct amdgpu_bo_va *csa_bo_va;
>>
>>

* Re: [PATCH 3/6] drm/amdgpu: reserve vmid by vm ioctl
       [not found]         ` <590047C6.50005-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  8:51           ` Christian König
       [not found]             ` <238b3d86-743a-4ca9-c008-92ae4e1ec79c-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Christian König @ 2017-04-26  8:51 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei),
	Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Am 26.04.2017 um 09:09 schrieb Zhang, Jerry (Junwei):
>
>
> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 56 
>> ++++++++++++++++++++++++++++++++++
>>   1 file changed, 56 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index acf9102..5f4dcc9 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -397,6 +397,17 @@ static bool amdgpu_vm_had_gpu_reset(struct 
>> amdgpu_device *adev,
>>           atomic_read(&adev->gpu_reset_counter);
>>   }
>>
>> +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm)
>> +{
>> +    unsigned vmhub;
>> +
>> +    for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>> +        if (!vm->dedicated_vmid[vmhub])
>> +            return false;
>
> Just note:
> Seemingly for pre-gmcv9, it will always allocate one more 
> dedicated/reserved vmid for the 2nd un-used vmhub.
> anyway it will not be used either.

It's even worse. IIRC we don't initialize the 2nd hub on pre-gmc-v9, so
this could end up operating on uninitialized data.

Christian.

>
> Jerry
>> +    }
>> +    return true;
>> +}
>> +
>>   /**
>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>    *
>> @@ -546,6 +557,45 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, 
>> struct amdgpu_ring *ring,
>>       return r;
>>   }
>>
>> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>> +                      struct amdgpu_vm *vm)
>> +{
>> +    struct amdgpu_vm_id_manager *id_mgr;
>> +    struct amdgpu_vm_id *idle;
>> +    unsigned vmhub;
>> +    int r;
>> +
>> +    for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>> +        id_mgr = &adev->vm_manager.id_mgr[vmhub];
>> +
>> +        mutex_lock(&id_mgr->lock);
>> +        /* Select the first entry VMID */
>> +        idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id,
>> +                    list);
>> +        list_del_init(&idle->list);
>> +        vm->dedicated_vmid[vmhub] = idle;
>> +        mutex_unlock(&id_mgr->lock);
>> +
>> +        r = amdgpu_sync_wait(&idle->active);
>> +        if (r)
>> +            goto err;
>> +    }
>> +
>> +    return 0;
>> +err:
>> +    for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>> +        id_mgr = &adev->vm_manager.id_mgr[vmhub];
>> +
>> +        mutex_lock(&id_mgr->lock);
>> +        if (vm->dedicated_vmid[vmhub])
>> + list_add(&vm->dedicated_vmid[vmhub]->list,
>> +                 &id_mgr->ids_lru);
>> +        vm->dedicated_vmid[vmhub] = NULL;
>> +        mutex_unlock(&id_mgr->lock);
>> +    }
>> +    return r;
>> +}
>> +
>>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring 
>> *ring)
>>   {
>>       struct amdgpu_device *adev = ring->adev;
>> @@ -2379,9 +2429,15 @@ int amdgpu_vm_ioctl(struct drm_device *dev, 
>> void *data, struct drm_file *filp)
>>       union drm_amdgpu_vm *args = data;
>>       struct amdgpu_device *adev = dev->dev_private;
>>       struct amdgpu_fpriv *fpriv = filp->driver_priv;
>> +    int r;
>>
>>       switch (args->in.op) {
>>       case AMDGPU_VM_OP_RESERVE_VMID:
>> +        if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm)) {
>> +            r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm);
>
> Could you change the return value in this ready checking func?
> Usually we can easily get to know this kind of things.
> if (sth_ready())
>     do_sth();
>
>
>> +            if (r)
>> +                return r;
>> +        }
>>           break;
>>       default:
>>           return -EINVAL;
>>

* Re: [PATCH 4/6] drm/amdgpu: add limitation for dedicated vm number v2
       [not found]         ` <590047D0.7080901-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  8:53           ` Christian König
  0 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2017-04-26  8:53 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei),
	Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Am 26.04.2017 um 09:10 schrieb Zhang, Jerry (Junwei):
> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>> v2: move #define to amdgpu_vm.h
>>
>> Change-Id: Ie5958cf6dbdc1c8278e61d9158483472d6f5c6e3
>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h        | 1 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c     | 9 +++++++++
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h     | 2 ++
>>   4 files changed, 13 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> index 0831cd2..ba9d3d0 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> @@ -1591,6 +1591,7 @@ struct amdgpu_device {
>>       struct amdgpu_dummy_page    dummy_page;
>>       struct amdgpu_vm_manager    vm_manager;
>>       struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
>> +    atomic_t            reserved_vmid;
>
> Perhaps it's more reasonable to belongs to vm_manager.

Yes, agree.

>
> Jerry
>
>>
>>       /* memory management */
>>       struct amdgpu_mman        mman;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> index a175dfd..9993085 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> @@ -1889,6 +1889,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>>       adev->vm_manager.vm_pte_num_rings = 0;
>>       adev->gart.gart_funcs = NULL;
>>       adev->fence_context = kcl_fence_context_alloc(AMDGPU_MAX_RINGS);
>> +    atomic_set(&adev->reserved_vmid, 0);
>>
>>       adev->smc_rreg = &amdgpu_invalid_rreg;
>>       adev->smc_wreg = &amdgpu_invalid_wreg;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index 5f4dcc9..f7113b9 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -565,6 +565,10 @@ static int amdgpu_vm_alloc_dedicated_vmid(struct 
>> amdgpu_device *adev,
>>       unsigned vmhub;
>>       int r;
>>
>> +    if (atomic_read(&adev->reserved_vmid) >= 
>> AMDGPU_VM_MAX_RESERVED_VMID) {
>> +        DRM_ERROR("Over limitation of reserved vmid\n");
>> +        return -EINVAL;
>> +    }
>>       for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>>           id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>
>> @@ -580,6 +584,7 @@ static int amdgpu_vm_alloc_dedicated_vmid(struct 
>> amdgpu_device *adev,
>>           if (r)
>>               goto err;
>>       }
>> +    atomic_inc(&adev->reserved_vmid);

That's not atomic at all; two threads could try to reserve a VMID at the
same time and race.

You need to use atomic_inc_return for this before allocating the IDs and 
properly clean up if allocating them fails.
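
The race Christian describes can be illustrated with C11 atomics: a plain read followed by a later increment leaves a window where two threads both pass the limit check. Reserving the slot with a single fetch-and-increment (the userspace analogue of the kernel's atomic_inc_return) closes it. This is a standalone illustration under that assumption, not the driver code itself:

```c
#include <assert.h>
#include <stdatomic.h>

#define AMDGPU_VM_MAX_RESERVED_VMID 1

static atomic_int reserved_vmid;	/* zero-initialized at file scope */

/* Race-free shape per Christian's comment: claim the slot with one
 * atomic increment up front, and roll back if that pushed us over the
 * limit (or, in the real code, if the later ID allocation fails). */
static int reserve_slot(void)
{
	if (atomic_fetch_add(&reserved_vmid, 1) + 1 > AMDGPU_VM_MAX_RESERVED_VMID) {
		atomic_fetch_sub(&reserved_vmid, 1);	/* over the limit, roll back */
		return -1;
	}
	return 0;	/* slot held; real code would now allocate the VMIDs */
}

static void release_slot(void)
{
	atomic_fetch_sub(&reserved_vmid, 1);
}
```

The check-then-increment version in the patch would instead let two concurrent callers both observe a count below the limit and both proceed.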

>>
>>       return 0;
>>   err:
>> @@ -2302,6 +2307,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, 
>> struct amdgpu_vm *vm)
>>   {
>>       struct amdgpu_bo_va_mapping *mapping, *tmp;
>>       bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
>> +    bool dedicated = false;
>>       int i;
>>
>>       if (vm->is_kfd_vm) {
>> @@ -2354,9 +2360,12 @@ void amdgpu_vm_fini(struct amdgpu_device 
>> *adev, struct amdgpu_vm *vm)
>>               list_add(&vm->dedicated_vmid[i]->list,
>>                    &id_mgr->ids_lru);
>>               vm->dedicated_vmid[i] = NULL;
>> +            dedicated = true;
>>           }
>>           mutex_unlock(&id_mgr->lock);
>>       }
>> +    if (dedicated)
>> +        atomic_dec(&adev->reserved_vmid);
>>   }
>>
>>   /**
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> index 23981ee..2d3e6ed 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> @@ -83,6 +83,8 @@
>>
>>   /* hardcode that limit for now */
>>   #define AMDGPU_VA_RESERVED_SIZE            (8 << 20)
>> +/* max vmids dedicated for process */
>> +#define AMDGPU_VM_MAX_RESERVED_VMID 2

This should be 1 instead of 2, or did I miss something?

Regards,
Christian.

>>
>>   struct amdgpu_vm_pt {
>>       struct amdgpu_bo    *bo;
>>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* Re: [PATCH 3/6] drm/amdgpu: reserve vmid by vm ioctl
       [not found]             ` <238b3d86-743a-4ca9-c008-92ae4e1ec79c-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
@ 2017-04-26  9:04               ` Zhang, Jerry (Junwei)
       [not found]                 ` <5900629A.20701-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Zhang, Jerry (Junwei) @ 2017-04-26  9:04 UTC (permalink / raw)
  To: Christian König, Chunming Zhou,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 04/26/2017 04:51 PM, Christian König wrote:
> Am 26.04.2017 um 09:09 schrieb Zhang, Jerry (Junwei):
>>
>>
>> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>>> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 56
>>> ++++++++++++++++++++++++++++++++++
>>>   1 file changed, 56 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index acf9102..5f4dcc9 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -397,6 +397,17 @@ static bool amdgpu_vm_had_gpu_reset(struct
>>> amdgpu_device *adev,
>>>           atomic_read(&adev->gpu_reset_counter);
>>>   }
>>>
>>> +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm)
>>> +{
>>> +    unsigned vmhub;
>>> +
>>> +    for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>>> +        if (!vm->dedicated_vmid[vmhub])
>>> +            return false;
>>
>> Just note:
>> Seemingly for pre-gmcv9, it will always allocate one more dedicated/reserved
>> vmid for the 2nd un-used vmhub.
>> anyway it will not be used either.
>
> It's even worse. IIRC we don't initialize the 2nd hub on pre-gmcv9. So that
> could work with uninitialized data.

I see AMDGPU_MAX_VMHUBS is defined as 2 for all ASICs.
So the 2nd dedicated_vmid here will always be NULL, returning false.
Right?

Additionally, in amdgpu_vm_manager_init() the id_mgr[i] is initialized in a loop 
over AMDGPU_MAX_VMHUBS as well.
Thus I thought both vmhubs are initialized.
If I am missing anything, please correct me.
Thanks.

Jerry

>
> Christian.
>
>>
>> Jerry
>>> +    }
>>> +    return true;
>>> +}
>>> +
>>>   /**
>>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>>    *
>>> @@ -546,6 +557,45 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct
>>> amdgpu_ring *ring,
>>>       return r;
>>>   }
>>>
>>> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>>> +                      struct amdgpu_vm *vm)
>>> +{
>>> +    struct amdgpu_vm_id_manager *id_mgr;
>>> +    struct amdgpu_vm_id *idle;
>>> +    unsigned vmhub;
>>> +    int r;
>>> +
>>> +    for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>>> +        id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>> +
>>> +        mutex_lock(&id_mgr->lock);
>>> +        /* Select the first entry VMID */
>>> +        idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id,
>>> +                    list);
>>> +        list_del_init(&idle->list);
>>> +        vm->dedicated_vmid[vmhub] = idle;
>>> +        mutex_unlock(&id_mgr->lock);
>>> +
>>> +        r = amdgpu_sync_wait(&idle->active);
>>> +        if (r)
>>> +            goto err;
>>> +    }
>>> +
>>> +    return 0;
>>> +err:
>>> +    for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>>> +        id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>> +
>>> +        mutex_lock(&id_mgr->lock);
>>> +        if (vm->dedicated_vmid[vmhub])
>>> +            list_add(&vm->dedicated_vmid[vmhub]->list,
>>> +                 &id_mgr->ids_lru);
>>> +        vm->dedicated_vmid[vmhub] = NULL;
>>> +        mutex_unlock(&id_mgr->lock);
>>> +    }
>>> +    return r;
>>> +}
>>> +
>>>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>>>   {
>>>       struct amdgpu_device *adev = ring->adev;
>>> @@ -2379,9 +2429,15 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void
>>> *data, struct drm_file *filp)
>>>       union drm_amdgpu_vm *args = data;
>>>       struct amdgpu_device *adev = dev->dev_private;
>>>       struct amdgpu_fpriv *fpriv = filp->driver_priv;
>>> +    int r;
>>>
>>>       switch (args->in.op) {
>>>       case AMDGPU_VM_OP_RESERVE_VMID:
>>> +        if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm)) {
>>> +            r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm);
>>
>> Could you change the return value in this ready checking func?
>> Usually we can easily get to know this kind of things.
>> if (sth_ready())
>>     do_sth();
>>
>>
>>> +            if (r)
>>> +                return r;
>>> +        }
>>>           break;
>>>       default:
>>>           return -EINVAL;
>>>
>
>
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* Re: [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct
       [not found]             ` <291734a9-5306-f3e3-219c-c53cbf433358-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
@ 2017-04-26  9:05               ` zhoucm1
       [not found]                 ` <590062C5.6020402-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: zhoucm1 @ 2017-04-26  9:05 UTC (permalink / raw)
  To: Christian König, Zhang, Jerry (Junwei),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2017年04月26日 16:49, Christian König wrote:
> Am 26.04.2017 um 09:09 schrieb Zhang, Jerry (Junwei):
>> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>>> Change-Id: Id728e20366c8a1ae90d4e901dc80e136e2a613bb
>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++++++++++++++++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 ++
>>>   2 files changed, 18 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index eb429c5..acf9102 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -2144,10 +2144,12 @@ int amdgpu_vm_init(struct amdgpu_device 
>>> *adev, struct amdgpu_vm *vm,
>>>       unsigned ring_instance;
>>>       struct amdgpu_ring *ring;
>>>       struct amd_sched_rq *rq;
>>> -    int r;
>>> +    int r, i;
>>>
>>>       vm->va = RB_ROOT;
>>>       vm->client_id = 
>>> atomic64_inc_return(&adev->vm_manager.client_counter);
>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>>> +        vm->dedicated_vmid[i] = NULL;
>>
>> Maybe it's better to give it a consistent name as resv_vmid, or 
>> anything like that.
>
> Yes, agree.
I think the reserved VMID is dedicated to this VM; I don't see why this 
name wouldn't make sense.

> And if I'm not completely mistaken we still should only apply that to 
> the GFX hub on Vega10.
David Mao required the MMHUB as well. IIRC, there is no need to argue 
more on this.

Regards,
David Zhou
>
> Christian.
>
>>
>> Jerry
>>> spin_lock_init(&vm->status_lock);
>>>       INIT_LIST_HEAD(&vm->invalidated);
>>>       INIT_LIST_HEAD(&vm->cleared);
>>> @@ -2250,6 +2252,7 @@ void amdgpu_vm_fini(struct amdgpu_device 
>>> *adev, struct amdgpu_vm *vm)
>>>   {
>>>       struct amdgpu_bo_va_mapping *mapping, *tmp;
>>>       bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
>>> +    int i;
>>>
>>>       if (vm->is_kfd_vm) {
>>>           struct amdgpu_vm_id_manager *id_mgr =
>>> @@ -2292,6 +2295,18 @@ void amdgpu_vm_fini(struct amdgpu_device 
>>> *adev, struct amdgpu_vm *vm)
>>>
>>>       amdgpu_vm_free_levels(&vm->root);
>>>       fence_put(vm->last_dir_update);
>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>>> +        struct amdgpu_vm_id_manager *id_mgr =
>>> +            &adev->vm_manager.id_mgr[i];
>>> +
>>> +        mutex_lock(&id_mgr->lock);
>>> +        if (vm->dedicated_vmid[i]) {
>>> +            list_add(&vm->dedicated_vmid[i]->list,
>>> +                 &id_mgr->ids_lru);
>>> +            vm->dedicated_vmid[i] = NULL;
>>> +        }
>>> +        mutex_unlock(&id_mgr->lock);
>>> +    }
>>>   }
>>>
>>>   /**
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> index 62dbace..23981ee 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> @@ -122,6 +122,8 @@ struct amdgpu_vm {
>>>
>>>       /* client id */
>>>       u64                     client_id;
>>> +    /* dedicated vmid */
>>> +    struct amdgpu_vm_id *dedicated_vmid[AMDGPU_MAX_VMHUBS];
>>>       /* each VM will map on CSA */
>>>       struct amdgpu_bo_va *csa_bo_va;
>>>
>>>
>
>



* Re: [PATCH 3/6] drm/amdgpu: reserve vmid by vm ioctl
       [not found]                 ` <5900629A.20701-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  9:08                   ` Christian König
  0 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2017-04-26  9:08 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei),
	Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Am 26.04.2017 um 11:04 schrieb Zhang, Jerry (Junwei):
> On 04/26/2017 04:51 PM, Christian König wrote:
>> Am 26.04.2017 um 09:09 schrieb Zhang, Jerry (Junwei):
>>>
>>>
>>> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>>>> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
>>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>>> ---
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 56
>>>> ++++++++++++++++++++++++++++++++++
>>>>   1 file changed, 56 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> index acf9102..5f4dcc9 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> @@ -397,6 +397,17 @@ static bool amdgpu_vm_had_gpu_reset(struct
>>>> amdgpu_device *adev,
>>>>           atomic_read(&adev->gpu_reset_counter);
>>>>   }
>>>>
>>>> +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm)
>>>> +{
>>>> +    unsigned vmhub;
>>>> +
>>>> +    for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>>>> +        if (!vm->dedicated_vmid[vmhub])
>>>> +            return false;
>>>
>>> Just note:
>>> Seemingly for pre-gmcv9, it will always allocate one more 
>>> dedicated/reserved
>>> vmid for the 2nd un-used vmhub.
>>> anyway it will not be used either.
>>
>> It's even worse. IIRC we don't initialize the 2nd hub on pre-gmcv9. 
>> So that
>> could work with uninitialized data.
>
> I saw AMDGPU_MAX_VMHUBS is defined 2 for all ASICs.
> So the 2nd dedicated_vmid here will be 0 always, returning false.
> Right?
>
> Additionally, in amdgpu_vm_manager_init() the id_mgr[i] is initialized 
> in loop of AMDGPU_MAX_VMHUBS as well.
> Thus I though both vmhub are initialized.
> Any missing, please correct me.
> Thanks.

Indeed, you are right, we do always initialize all hubs here, so that 
should work, but it is still a bit awkward.

Christian.

>
> Jerry
>
>>
>> Christian.
>>
>>>
>>> Jerry
>>>> +    }
>>>> +    return true;
>>>> +}
>>>> +
>>>>   /**
>>>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>>>    *
>>>> @@ -546,6 +557,45 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, 
>>>> struct
>>>> amdgpu_ring *ring,
>>>>       return r;
>>>>   }
>>>>
>>>> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>>>> +                      struct amdgpu_vm *vm)
>>>> +{
>>>> +    struct amdgpu_vm_id_manager *id_mgr;
>>>> +    struct amdgpu_vm_id *idle;
>>>> +    unsigned vmhub;
>>>> +    int r;
>>>> +
>>>> +    for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>>>> +        id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>>> +
>>>> +        mutex_lock(&id_mgr->lock);
>>>> +        /* Select the first entry VMID */
>>>> +        idle = list_first_entry(&id_mgr->ids_lru, struct 
>>>> amdgpu_vm_id,
>>>> +                    list);
>>>> +        list_del_init(&idle->list);
>>>> +        vm->dedicated_vmid[vmhub] = idle;
>>>> +        mutex_unlock(&id_mgr->lock);
>>>> +
>>>> +        r = amdgpu_sync_wait(&idle->active);
>>>> +        if (r)
>>>> +            goto err;
>>>> +    }
>>>> +
>>>> +    return 0;
>>>> +err:
>>>> +    for (vmhub = 0; vmhub < AMDGPU_MAX_VMHUBS; vmhub++) {
>>>> +        id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>>> +
>>>> +        mutex_lock(&id_mgr->lock);
>>>> +        if (vm->dedicated_vmid[vmhub])
>>>> +            list_add(&vm->dedicated_vmid[vmhub]->list,
>>>> +                 &id_mgr->ids_lru);
>>>> +        vm->dedicated_vmid[vmhub] = NULL;
>>>> +        mutex_unlock(&id_mgr->lock);
>>>> +    }
>>>> +    return r;
>>>> +}
>>>> +
>>>>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring 
>>>> *ring)
>>>>   {
>>>>       struct amdgpu_device *adev = ring->adev;
>>>> @@ -2379,9 +2429,15 @@ int amdgpu_vm_ioctl(struct drm_device *dev, 
>>>> void
>>>> *data, struct drm_file *filp)
>>>>       union drm_amdgpu_vm *args = data;
>>>>       struct amdgpu_device *adev = dev->dev_private;
>>>>       struct amdgpu_fpriv *fpriv = filp->driver_priv;
>>>> +    int r;
>>>>
>>>>       switch (args->in.op) {
>>>>       case AMDGPU_VM_OP_RESERVE_VMID:
>>>> +        if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm)) {
>>>> +            r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm);
>>>
>>> Could you change the return value in this ready checking func?
>>> Usually we can easily get to know this kind of things.
>>> if (sth_ready())
>>>     do_sth();
>>>
>>>
>>>> +            if (r)
>>>> +                return r;
>>>> +        }
>>>>           break;
>>>>       default:
>>>>           return -EINVAL;
>>>>
>>
>>



* Re: [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct
       [not found]                 ` <590062C5.6020402-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  9:10                   ` Christian König
       [not found]                     ` <05c4fcec-9dae-24ac-4a5d-26e0fd5bc148-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Christian König @ 2017-04-26  9:10 UTC (permalink / raw)
  To: zhoucm1, Zhang, Jerry (Junwei), amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Am 26.04.2017 um 11:05 schrieb zhoucm1:
>
>
> On 2017年04月26日 16:49, Christian König wrote:
>> Am 26.04.2017 um 09:09 schrieb Zhang, Jerry (Junwei):
>>> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>>>> Change-Id: Id728e20366c8a1ae90d4e901dc80e136e2a613bb
>>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>>> ---
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++++++++++++++++-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 ++
>>>>   2 files changed, 18 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> index eb429c5..acf9102 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> @@ -2144,10 +2144,12 @@ int amdgpu_vm_init(struct amdgpu_device 
>>>> *adev, struct amdgpu_vm *vm,
>>>>       unsigned ring_instance;
>>>>       struct amdgpu_ring *ring;
>>>>       struct amd_sched_rq *rq;
>>>> -    int r;
>>>> +    int r, i;
>>>>
>>>>       vm->va = RB_ROOT;
>>>>       vm->client_id = 
>>>> atomic64_inc_return(&adev->vm_manager.client_counter);
>>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>>>> +        vm->dedicated_vmid[i] = NULL;
>>>
>>> Maybe it's better to give it a consistent name as resv_vmid, or 
>>> anything like that.
>>
>> Yes, agree.
> I think the reserved vmid is dedicated to this vm, I don't know where 
> this name doesn't make sense.
>
>> And if I'm not completely mistaken we still should only apply that to 
>> the GFX hub on Vega10.
> David Mao required mmhub as well. IIRC, we don't have necessary to 
> argue more on this.

I think we still have. There is no technical reason why we should use 
the reserved/dedicated VM for other engines than the one involved in the 
SQ trace.

So I would say we should limit this to GFX and Compute jobs and only 
allocate the dedicated VMID for those.
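Christian's proposed restriction boils down to a per-ring filter along these lines (a sketch only; the enum mirrors amdgpu's ring types, and the helper name is made up for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for amdgpu's ring-type enum. */
enum ring_type {
	RING_TYPE_GFX,
	RING_TYPE_COMPUTE,
	RING_TYPE_SDMA,
	RING_TYPE_UVD,
	RING_TYPE_VCE,
};

/* Only GFX and Compute rings execute shaders, so only their submissions
 * can appear in the SQ thread trace; every other engine can keep using
 * the normal VMID LRU and needs no dedicated ID. */
static bool ring_needs_dedicated_vmid(enum ring_type type)
{
	return type == RING_TYPE_GFX || type == RING_TYPE_COMPUTE;
}
```

With a filter like this, amdgpu_vm_grab_id() would fall back to the regular LRU path for SDMA/UVD/VCE even when the process has reserved a VMID.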

Regards,
Christian.

>
> Regards,
> David Zhou
>>
>> Christian.
>>
>>>
>>> Jerry
>>>> spin_lock_init(&vm->status_lock);
>>>>       INIT_LIST_HEAD(&vm->invalidated);
>>>>       INIT_LIST_HEAD(&vm->cleared);
>>>> @@ -2250,6 +2252,7 @@ void amdgpu_vm_fini(struct amdgpu_device 
>>>> *adev, struct amdgpu_vm *vm)
>>>>   {
>>>>       struct amdgpu_bo_va_mapping *mapping, *tmp;
>>>>       bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
>>>> +    int i;
>>>>
>>>>       if (vm->is_kfd_vm) {
>>>>           struct amdgpu_vm_id_manager *id_mgr =
>>>> @@ -2292,6 +2295,18 @@ void amdgpu_vm_fini(struct amdgpu_device 
>>>> *adev, struct amdgpu_vm *vm)
>>>>
>>>>       amdgpu_vm_free_levels(&vm->root);
>>>>       fence_put(vm->last_dir_update);
>>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>>>> +        struct amdgpu_vm_id_manager *id_mgr =
>>>> +            &adev->vm_manager.id_mgr[i];
>>>> +
>>>> +        mutex_lock(&id_mgr->lock);
>>>> +        if (vm->dedicated_vmid[i]) {
>>>> +            list_add(&vm->dedicated_vmid[i]->list,
>>>> +                 &id_mgr->ids_lru);
>>>> +            vm->dedicated_vmid[i] = NULL;
>>>> +        }
>>>> +        mutex_unlock(&id_mgr->lock);
>>>> +    }
>>>>   }
>>>>
>>>>   /**
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>> index 62dbace..23981ee 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>> @@ -122,6 +122,8 @@ struct amdgpu_vm {
>>>>
>>>>       /* client id */
>>>>       u64                     client_id;
>>>> +    /* dedicated vmid */
>>>> +    struct amdgpu_vm_id *dedicated_vmid[AMDGPU_MAX_VMHUBS];
>>>>       /* each VM will map on CSA */
>>>>       struct amdgpu_bo_va *csa_bo_va;
>>>>
>>>>
>>
>>
>




* Re: [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct
       [not found]                     ` <05c4fcec-9dae-24ac-4a5d-26e0fd5bc148-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
@ 2017-04-26  9:14                       ` zhoucm1
       [not found]                         ` <59006508.7080907-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: zhoucm1 @ 2017-04-26  9:14 UTC (permalink / raw)
  To: Christian König, Zhang, Jerry (Junwei),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2017年04月26日 17:10, Christian König wrote:
> Am 26.04.2017 um 11:05 schrieb zhoucm1:
>>
>>
>> On 2017年04月26日 16:49, Christian König wrote:
>>> Am 26.04.2017 um 09:09 schrieb Zhang, Jerry (Junwei):
>>>> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>>>>> Change-Id: Id728e20366c8a1ae90d4e901dc80e136e2a613bb
>>>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>>>> ---
>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++++++++++++++++-
>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 ++
>>>>>   2 files changed, 18 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> index eb429c5..acf9102 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> @@ -2144,10 +2144,12 @@ int amdgpu_vm_init(struct amdgpu_device 
>>>>> *adev, struct amdgpu_vm *vm,
>>>>>       unsigned ring_instance;
>>>>>       struct amdgpu_ring *ring;
>>>>>       struct amd_sched_rq *rq;
>>>>> -    int r;
>>>>> +    int r, i;
>>>>>
>>>>>       vm->va = RB_ROOT;
>>>>>       vm->client_id = 
>>>>> atomic64_inc_return(&adev->vm_manager.client_counter);
>>>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>>>>> +        vm->dedicated_vmid[i] = NULL;
>>>>
>>>> Maybe it's better to give it a consistent name as resv_vmid, or 
>>>> anything like that.
>>>
>>> Yes, agree.
>> I think the reserved vmid is dedicated to this vm, I don't know where 
>> this name doesn't make sense.
>>
>>> And if I'm not completely mistaken we still should only apply that 
>>> to the GFX hub on Vega10.
>> David Mao required mmhub as well. IIRC, we don't have necessary to 
>> argue more on this.
>
> I think we still have. There is no technical reason why we should use 
> the reserved/dedicated VM for other engines than the one involved in 
> the SQ trace.
How about removing the SQ trace reason? Is it OK if a process just wants 
to do an experiment for some other purpose? :)

Regards,
David Zhou
>
> So I would say we should limit this to GFX and Compute jobs and only 
> allocate the dedicated VMID for those.
>
> Regards,
> Christian.
>
>>
>> Regards,
>> David Zhou
>>>
>>> Christian.
>>>
>>>>
>>>> Jerry
>>>>> spin_lock_init(&vm->status_lock);
>>>>>       INIT_LIST_HEAD(&vm->invalidated);
>>>>>       INIT_LIST_HEAD(&vm->cleared);
>>>>> @@ -2250,6 +2252,7 @@ void amdgpu_vm_fini(struct amdgpu_device 
>>>>> *adev, struct amdgpu_vm *vm)
>>>>>   {
>>>>>       struct amdgpu_bo_va_mapping *mapping, *tmp;
>>>>>       bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
>>>>> +    int i;
>>>>>
>>>>>       if (vm->is_kfd_vm) {
>>>>>           struct amdgpu_vm_id_manager *id_mgr =
>>>>> @@ -2292,6 +2295,18 @@ void amdgpu_vm_fini(struct amdgpu_device 
>>>>> *adev, struct amdgpu_vm *vm)
>>>>>
>>>>>       amdgpu_vm_free_levels(&vm->root);
>>>>>       fence_put(vm->last_dir_update);
>>>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>>>>> +        struct amdgpu_vm_id_manager *id_mgr =
>>>>> +            &adev->vm_manager.id_mgr[i];
>>>>> +
>>>>> +        mutex_lock(&id_mgr->lock);
>>>>> +        if (vm->dedicated_vmid[i]) {
>>>>> +            list_add(&vm->dedicated_vmid[i]->list,
>>>>> +                 &id_mgr->ids_lru);
>>>>> +            vm->dedicated_vmid[i] = NULL;
>>>>> +        }
>>>>> +        mutex_unlock(&id_mgr->lock);
>>>>> +    }
>>>>>   }
>>>>>
>>>>>   /**
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>> index 62dbace..23981ee 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>> @@ -122,6 +122,8 @@ struct amdgpu_vm {
>>>>>
>>>>>       /* client id */
>>>>>       u64                     client_id;
>>>>> +    /* dedicated vmid */
>>>>> +    struct amdgpu_vm_id *dedicated_vmid[AMDGPU_MAX_VMHUBS];
>>>>>       /* each VM will map on CSA */
>>>>>       struct amdgpu_bo_va *csa_bo_va;
>>>>>
>>>>>
>>>
>>>
>>
>
>



* Re: [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct
       [not found]                         ` <59006508.7080907-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26  9:45                           ` Christian König
       [not found]                             ` <f41de8f1-11d9-a727-c34f-d218b08f9047-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Christian König @ 2017-04-26  9:45 UTC (permalink / raw)
  To: zhoucm1, Zhang, Jerry (Junwei), amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Am 26.04.2017 um 11:14 schrieb zhoucm1:
>
>
> On 2017年04月26日 17:10, Christian König wrote:
>> Am 26.04.2017 um 11:05 schrieb zhoucm1:
>>>
>>>
>>> On 2017年04月26日 16:49, Christian König wrote:
>>>> Am 26.04.2017 um 09:09 schrieb Zhang, Jerry (Junwei):
>>>>> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>>>>>> Change-Id: Id728e20366c8a1ae90d4e901dc80e136e2a613bb
>>>>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>>>>> ---
>>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++++++++++++++++-
>>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 ++
>>>>>>   2 files changed, 18 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>> index eb429c5..acf9102 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>> @@ -2144,10 +2144,12 @@ int amdgpu_vm_init(struct amdgpu_device 
>>>>>> *adev, struct amdgpu_vm *vm,
>>>>>>       unsigned ring_instance;
>>>>>>       struct amdgpu_ring *ring;
>>>>>>       struct amd_sched_rq *rq;
>>>>>> -    int r;
>>>>>> +    int r, i;
>>>>>>
>>>>>>       vm->va = RB_ROOT;
>>>>>>       vm->client_id = 
>>>>>> atomic64_inc_return(&adev->vm_manager.client_counter);
>>>>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>>>>>> +        vm->dedicated_vmid[i] = NULL;
>>>>>
>>>>> Maybe it's better to give it a consistent name as resv_vmid, or 
>>>>> anything like that.
>>>>
>>>> Yes, agree.
>>> I think the reserved vmid is dedicated to this vm, I don't know 
>>> where this name doesn't make sense.
>>>
>>>> And if I'm not completely mistaken we still should only apply that 
>>>> to the GFX hub on Vega10.
>>> David Mao required mmhub as well. IIRC, we don't have necessary to 
>>> argue more on this.
>>
>> I think we still have. There is no technical reason why we should use 
>> the reserved/dedicated VM for other engines than the one involved in 
>> the SQ trace.
> How about removing SQ trace reason? is it ok just that one process 
> wants to do experiment for other purpose? :)

In this case I would clearly NAK the whole approach.

Regards,
Christian.

>
> Regards,
> David Zhou
>>
>> So I would say we should limit this to GFX and Compute jobs and only 
>> allocate the dedicated VMID for those.
>>
>> Regards,
>> Christian.
>>
>>>
>>> Regards,
>>> David Zhou
>>>>
>>>> Christian.
>>>>
>>>>>
>>>>> Jerry
>>>>>> spin_lock_init(&vm->status_lock);
>>>>>>       INIT_LIST_HEAD(&vm->invalidated);
>>>>>>       INIT_LIST_HEAD(&vm->cleared);
>>>>>> @@ -2250,6 +2252,7 @@ void amdgpu_vm_fini(struct amdgpu_device 
>>>>>> *adev, struct amdgpu_vm *vm)
>>>>>>   {
>>>>>>       struct amdgpu_bo_va_mapping *mapping, *tmp;
>>>>>>       bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
>>>>>> +    int i;
>>>>>>
>>>>>>       if (vm->is_kfd_vm) {
>>>>>>           struct amdgpu_vm_id_manager *id_mgr =
>>>>>> @@ -2292,6 +2295,18 @@ void amdgpu_vm_fini(struct amdgpu_device 
>>>>>> *adev, struct amdgpu_vm *vm)
>>>>>>
>>>>>>       amdgpu_vm_free_levels(&vm->root);
>>>>>>       fence_put(vm->last_dir_update);
>>>>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>>>>>> +        struct amdgpu_vm_id_manager *id_mgr =
>>>>>> +            &adev->vm_manager.id_mgr[i];
>>>>>> +
>>>>>> +        mutex_lock(&id_mgr->lock);
>>>>>> +        if (vm->dedicated_vmid[i]) {
>>>>>> +            list_add(&vm->dedicated_vmid[i]->list,
>>>>>> +                 &id_mgr->ids_lru);
>>>>>> +            vm->dedicated_vmid[i] = NULL;
>>>>>> +        }
>>>>>> +        mutex_unlock(&id_mgr->lock);
>>>>>> +    }
>>>>>>   }
>>>>>>
>>>>>>   /**
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
>>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>>> index 62dbace..23981ee 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>>> @@ -122,6 +122,8 @@ struct amdgpu_vm {
>>>>>>
>>>>>>       /* client id */
>>>>>>       u64                     client_id;
>>>>>> +    /* dedicated vmid */
>>>>>> +    struct amdgpu_vm_id *dedicated_vmid[AMDGPU_MAX_VMHUBS];
>>>>>>       /* each VM will map on CSA */
>>>>>>       struct amdgpu_bo_va *csa_bo_va;
>>>>>>
>>>>>>
>>>>
>>>>
>>>
>>
>>
>



* Re: [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct
       [not found]                             ` <f41de8f1-11d9-a727-c34f-d218b08f9047-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
@ 2017-04-26  9:46                               ` zhoucm1
  0 siblings, 0 replies; 29+ messages in thread
From: zhoucm1 @ 2017-04-26  9:46 UTC (permalink / raw)
  To: Christian König, Zhang, Jerry (Junwei),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2017年04月26日 17:45, Christian König wrote:
> Am 26.04.2017 um 11:14 schrieb zhoucm1:
>>
>>
>> On 2017年04月26日 17:10, Christian König wrote:
>>> Am 26.04.2017 um 11:05 schrieb zhoucm1:
>>>>
>>>>
>>>> On 2017年04月26日 16:49, Christian König wrote:
>>>>> Am 26.04.2017 um 09:09 schrieb Zhang, Jerry (Junwei):
>>>>>> On 04/24/2017 01:57 PM, Chunming Zhou wrote:
>>>>>>> Change-Id: Id728e20366c8a1ae90d4e901dc80e136e2a613bb
>>>>>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>>>>>> ---
>>>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++++++++++++++++-
>>>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 ++
>>>>>>>   2 files changed, 18 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>>>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>>> index eb429c5..acf9102 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>>> @@ -2144,10 +2144,12 @@ int amdgpu_vm_init(struct amdgpu_device 
>>>>>>> *adev, struct amdgpu_vm *vm,
>>>>>>>       unsigned ring_instance;
>>>>>>>       struct amdgpu_ring *ring;
>>>>>>>       struct amd_sched_rq *rq;
>>>>>>> -    int r;
>>>>>>> +    int r, i;
>>>>>>>
>>>>>>>       vm->va = RB_ROOT;
>>>>>>>       vm->client_id = 
>>>>>>> atomic64_inc_return(&adev->vm_manager.client_counter);
>>>>>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>>>>>>> +        vm->dedicated_vmid[i] = NULL;
>>>>>>
>>>>>> Maybe it's better to give it a consistent name as resv_vmid, or 
>>>>>> anything like that.
>>>>>
>>>>> Yes, agree.
>>>> I think the reserved vmid is dedicated to this vm, I don't know 
>>>> where this name doesn't make sense.
>>>>
>>>>> And if I'm not completely mistaken we still should only apply that 
>>>>> to the GFX hub on Vega10.
>>>> David Mao required mmhub as well. IIRC, we don't have necessary to 
>>>> argue more on this.
>>>
>>> I think we still do. There is no technical reason why we should 
>>> use the reserved/dedicated VMID for engines other than the one 
>>> involved in the SQ trace.
>> How about dropping the SQ trace rationale? Is it OK if one process 
>> just wants a reserved VMID to experiment with for some other purpose? :)
>
> In this case I would clearly NAK the whole approach.

That makes it very difficult for me to mediate between you and David Mao.

Regards,
David Zhou
>
> Regards,
> Christian.
>
>>
>> Regards,
>> David Zhou
>>>
>>> So I would say we should limit this to GFX and Compute jobs and only 
>>> allocate the dedicated VMID for those.
>>>
>>> Regards,
>>> Christian.
>>>
>>>>
>>>> Regards,
>>>> David Zhou
>>>>>
>>>>> Christian.
>>>>>
>>>>>>
>>>>>> Jerry
>>>>>>> spin_lock_init(&vm->status_lock);
>>>>>>>       INIT_LIST_HEAD(&vm->invalidated);
>>>>>>>       INIT_LIST_HEAD(&vm->cleared);
>>>>>>> @@ -2250,6 +2252,7 @@ void amdgpu_vm_fini(struct amdgpu_device 
>>>>>>> *adev, struct amdgpu_vm *vm)
>>>>>>>   {
>>>>>>>       struct amdgpu_bo_va_mapping *mapping, *tmp;
>>>>>>>       bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
>>>>>>> +    int i;
>>>>>>>
>>>>>>>       if (vm->is_kfd_vm) {
>>>>>>>           struct amdgpu_vm_id_manager *id_mgr =
>>>>>>> @@ -2292,6 +2295,18 @@ void amdgpu_vm_fini(struct amdgpu_device 
>>>>>>> *adev, struct amdgpu_vm *vm)
>>>>>>>
>>>>>>>       amdgpu_vm_free_levels(&vm->root);
>>>>>>>       fence_put(vm->last_dir_update);
>>>>>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>>>>>>> +        struct amdgpu_vm_id_manager *id_mgr =
>>>>>>> +            &adev->vm_manager.id_mgr[i];
>>>>>>> +
>>>>>>> +        mutex_lock(&id_mgr->lock);
>>>>>>> +        if (vm->dedicated_vmid[i]) {
>>>>>>> + list_add(&vm->dedicated_vmid[i]->list,
>>>>>>> +                 &id_mgr->ids_lru);
>>>>>>> +            vm->dedicated_vmid[i] = NULL;
>>>>>>> +        }
>>>>>>> +        mutex_unlock(&id_mgr->lock);
>>>>>>> +    }
>>>>>>>   }
>>>>>>>
>>>>>>>   /**
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
>>>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>>>> index 62dbace..23981ee 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>>>> @@ -122,6 +122,8 @@ struct amdgpu_vm {
>>>>>>>
>>>>>>>       /* client id */
>>>>>>>       u64                     client_id;
>>>>>>> +    /* dedicated vmid */
>>>>>>> +    struct amdgpu_vm_id *dedicated_vmid[AMDGPU_MAX_VMHUBS];
>>>>>>>       /* each VM will map on CSA */
>>>>>>>       struct amdgpu_bo_va *csa_bo_va;
>>>>>>>
>>>>>>>
>>>>>> _______________________________________________
>>>>>> amd-gfx mailing list
>>>>>> amd-gfx@lists.freedesktop.org
>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>



* Re: [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
       [not found]             ` <590176A2.70304-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-27  9:14               ` Christian König
  0 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2017-04-27  9:14 UTC (permalink / raw)
  To: zhoucm1, Zhang, Jerry (Junwei), amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 2017-04-27 06:42, zhoucm1 wrote:
>
>
> On 2017-04-27 10:52, Zhang, Jerry (Junwei) wrote:
>> On 04/26/2017 07:10 PM, Chunming Zhou wrote:
>>> v2: wait on sync only when a flush is needed
>>>
>>> Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 
>>> ++++++++++++++++++++++++++++++++++
>>>   1 file changed, 61 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index 214ac50..bce7701 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -402,6 +402,63 @@ static bool 
>>> amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>>>       return !!vm->dedicated_vmid[vmhub];
>>>   }
>>>
>>> +static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
>>> +                     struct amdgpu_ring *ring,
>>> +                     struct amdgpu_sync *sync,
>>> +                     struct fence *fence,
>>> +                     struct amdgpu_job *job)
>>> +{
>>> +    struct amdgpu_device *adev = ring->adev;
>>> +    unsigned vmhub = ring->funcs->vmhub;
>>> +    struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
>>> +    struct amdgpu_vm_id_manager *id_mgr = 
>>> &adev->vm_manager.id_mgr[vmhub];
>>> +    struct fence *updates = sync->last_vm_update;
>>> +    int r = 0;
>>> +    struct fence *flushed, *tmp;
>>> +    bool needs_flush = false;
>>> +
>>> +    mutex_lock(&id_mgr->lock);
>>> +    if (amdgpu_vm_had_gpu_reset(adev, id))
>>> +        needs_flush = true;
>>> +
>>> +    flushed  = id->flushed_updates;
>>> +    if (updates && (!flushed || updates->context != 
>>> flushed->context ||
>>> +            fence_is_later(updates, flushed)))
>>> +        needs_flush = true;
>>
>> Just question:
>> Do we need to consider concurrent flush for Vega10 like grab id func?
> Christian has pointed out that older ASICs have a hardware bug.

That is fixed on Vega10; on that hardware concurrent flushing works fine.

It's just that Tonga/Fiji have a real problem with it, and for 
CIK/Polaris it's a more subtle one that is hard to trigger.

Regards,
Christian.

>
> Regards,
> David Zhou
>>
>> Jerry
>>
>>> +    if (needs_flush) {
>>> +        tmp = amdgpu_sync_get_fence(&id->active);
>>> +        if (tmp) {
>>> +            r = amdgpu_sync_fence(adev, sync, tmp);
>>> +            fence_put(tmp);
>>> +            mutex_unlock(&id_mgr->lock);
>>> +            return r;
>>> +        }
>>> +    }
>>> +
>>> +    /* Good we can use this VMID. Remember this submission as
>>> +    * user of the VMID.
>>> +    */
>>> +    r = amdgpu_sync_fence(ring->adev, &id->active, fence);
>>> +    if (r)
>>> +        goto out;
>>> +
>>> +    if (updates && (!flushed || updates->context != 
>>> flushed->context ||
>>> +            fence_is_later(updates, flushed))) {
>>> +        fence_put(id->flushed_updates);
>>> +        id->flushed_updates = fence_get(updates);
>>> +    }
>>> +    id->pd_gpu_addr = job->vm_pd_addr;
>>> +    id->current_gpu_reset_count = 
>>> atomic_read(&adev->gpu_reset_counter);
>>> +    atomic64_set(&id->owner, vm->client_id);
>>> +    job->vm_needs_flush = needs_flush;
>>> +
>>> +    job->vm_id = id - id_mgr->ids;
>>> +    trace_amdgpu_vm_grab_id(vm, ring, job);
>>> +out:
>>> +    mutex_unlock(&id_mgr->lock);
>>> +    return r;
>>> +}
>>> +
>>>   /**
>>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>>    *
>>> @@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, 
>>> struct amdgpu_ring *ring,
>>>       unsigned i;
>>>       int r = 0;
>>>
>>> +    if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
>>> +        return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
>>> +                             fence, job);
>>> +
>>>       fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, 
>>> GFP_KERNEL);
>>>       if (!fences)
>>>           return -ENOMEM;
>>>
>




* Re: [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
       [not found]         ` <59015CD8.3000303-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-27  4:42           ` zhoucm1
       [not found]             ` <590176A2.70304-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: zhoucm1 @ 2017-04-27  4:42 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei), amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2017-04-27 10:52, Zhang, Jerry (Junwei) wrote:
> On 04/26/2017 07:10 PM, Chunming Zhou wrote:
>> v2: wait on sync only when a flush is needed
>>
>> Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 
>> ++++++++++++++++++++++++++++++++++
>>   1 file changed, 61 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index 214ac50..bce7701 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -402,6 +402,63 @@ static bool 
>> amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>>       return !!vm->dedicated_vmid[vmhub];
>>   }
>>
>> +static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
>> +                     struct amdgpu_ring *ring,
>> +                     struct amdgpu_sync *sync,
>> +                     struct fence *fence,
>> +                     struct amdgpu_job *job)
>> +{
>> +    struct amdgpu_device *adev = ring->adev;
>> +    unsigned vmhub = ring->funcs->vmhub;
>> +    struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
>> +    struct amdgpu_vm_id_manager *id_mgr = 
>> &adev->vm_manager.id_mgr[vmhub];
>> +    struct fence *updates = sync->last_vm_update;
>> +    int r = 0;
>> +    struct fence *flushed, *tmp;
>> +    bool needs_flush = false;
>> +
>> +    mutex_lock(&id_mgr->lock);
>> +    if (amdgpu_vm_had_gpu_reset(adev, id))
>> +        needs_flush = true;
>> +
>> +    flushed  = id->flushed_updates;
>> +    if (updates && (!flushed || updates->context != flushed->context ||
>> +            fence_is_later(updates, flushed)))
>> +        needs_flush = true;
>
> Just question:
> Do we need to consider concurrent flush for Vega10 like grab id func?
Christian has pointed out that older ASICs have a hardware bug.

Regards,
David Zhou
>
> Jerry
>
>> +    if (needs_flush) {
>> +        tmp = amdgpu_sync_get_fence(&id->active);
>> +        if (tmp) {
>> +            r = amdgpu_sync_fence(adev, sync, tmp);
>> +            fence_put(tmp);
>> +            mutex_unlock(&id_mgr->lock);
>> +            return r;
>> +        }
>> +    }
>> +
>> +    /* Good we can use this VMID. Remember this submission as
>> +    * user of the VMID.
>> +    */
>> +    r = amdgpu_sync_fence(ring->adev, &id->active, fence);
>> +    if (r)
>> +        goto out;
>> +
>> +    if (updates && (!flushed || updates->context != flushed->context ||
>> +            fence_is_later(updates, flushed))) {
>> +        fence_put(id->flushed_updates);
>> +        id->flushed_updates = fence_get(updates);
>> +    }
>> +    id->pd_gpu_addr = job->vm_pd_addr;
>> +    id->current_gpu_reset_count = 
>> atomic_read(&adev->gpu_reset_counter);
>> +    atomic64_set(&id->owner, vm->client_id);
>> +    job->vm_needs_flush = needs_flush;
>> +
>> +    job->vm_id = id - id_mgr->ids;
>> +    trace_amdgpu_vm_grab_id(vm, ring, job);
>> +out:
>> +    mutex_unlock(&id_mgr->lock);
>> +    return r;
>> +}
>> +
>>   /**
>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>    *
>> @@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, 
>> struct amdgpu_ring *ring,
>>       unsigned i;
>>       int r = 0;
>>
>> +    if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
>> +        return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
>> +                             fence, job);
>> +
>>       fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, 
>> GFP_KERNEL);
>>       if (!fences)
>>           return -ENOMEM;
>>



* Re: [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
       [not found]     ` <1493205039-3721-6-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
  2017-04-26 12:57       ` Christian König
@ 2017-04-27  2:52       ` Zhang, Jerry (Junwei)
       [not found]         ` <59015CD8.3000303-5C7GfCeVMHo@public.gmane.org>
  1 sibling, 1 reply; 29+ messages in thread
From: Zhang, Jerry (Junwei) @ 2017-04-27  2:52 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 04/26/2017 07:10 PM, Chunming Zhou wrote:
> v2: wait on sync only when a flush is needed
>
> Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 ++++++++++++++++++++++++++++++++++
>   1 file changed, 61 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 214ac50..bce7701 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -402,6 +402,63 @@ static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>   	return !!vm->dedicated_vmid[vmhub];
>   }
>
> +static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
> +					 struct amdgpu_ring *ring,
> +					 struct amdgpu_sync *sync,
> +					 struct fence *fence,
> +					 struct amdgpu_job *job)
> +{
> +	struct amdgpu_device *adev = ring->adev;
> +	unsigned vmhub = ring->funcs->vmhub;
> +	struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
> +	struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
> +	struct fence *updates = sync->last_vm_update;
> +	int r = 0;
> +	struct fence *flushed, *tmp;
> +	bool needs_flush = false;
> +
> +	mutex_lock(&id_mgr->lock);
> +	if (amdgpu_vm_had_gpu_reset(adev, id))
> +		needs_flush = true;
> +
> +	flushed  = id->flushed_updates;
> +	if (updates && (!flushed || updates->context != flushed->context ||
> +			fence_is_later(updates, flushed)))
> +		needs_flush = true;

Just question:
Do we need to consider concurrent flush for Vega10 like grab id func?

Jerry

> +	if (needs_flush) {
> +		tmp = amdgpu_sync_get_fence(&id->active);
> +		if (tmp) {
> +			r = amdgpu_sync_fence(adev, sync, tmp);
> +			fence_put(tmp);
> +			mutex_unlock(&id_mgr->lock);
> +			return r;
> +		}
> +	}
> +
> +	/* Good we can use this VMID. Remember this submission as
> +	* user of the VMID.
> +	*/
> +	r = amdgpu_sync_fence(ring->adev, &id->active, fence);
> +	if (r)
> +		goto out;
> +
> +	if (updates && (!flushed || updates->context != flushed->context ||
> +			fence_is_later(updates, flushed))) {
> +		fence_put(id->flushed_updates);
> +		id->flushed_updates = fence_get(updates);
> +	}
> +	id->pd_gpu_addr = job->vm_pd_addr;
> +	id->current_gpu_reset_count = atomic_read(&adev->gpu_reset_counter);
> +	atomic64_set(&id->owner, vm->client_id);
> +	job->vm_needs_flush = needs_flush;
> +
> +	job->vm_id = id - id_mgr->ids;
> +	trace_amdgpu_vm_grab_id(vm, ring, job);
> +out:
> +	mutex_unlock(&id_mgr->lock);
> +	return r;
> +}
> +
>   /**
>    * amdgpu_vm_grab_id - allocate the next free VMID
>    *
> @@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>   	unsigned i;
>   	int r = 0;
>
> +	if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
> +		return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
> +						     fence, job);
> +
>   	fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_KERNEL);
>   	if (!fences)
>   		return -ENOMEM;
>


* Re: [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
       [not found]     ` <1493205039-3721-6-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26 12:57       ` Christian König
  2017-04-27  2:52       ` Zhang, Jerry (Junwei)
  1 sibling, 0 replies; 29+ messages in thread
From: Christian König @ 2017-04-26 12:57 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 2017-04-26 13:10, Chunming Zhou wrote:
> v2: wait on sync only when a flush is needed
>
> Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 ++++++++++++++++++++++++++++++++++
>   1 file changed, 61 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 214ac50..bce7701 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -402,6 +402,63 @@ static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>   	return !!vm->dedicated_vmid[vmhub];
>   }
>   
> +static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
> +					 struct amdgpu_ring *ring,
> +					 struct amdgpu_sync *sync,
> +					 struct fence *fence,
> +					 struct amdgpu_job *job)
> +{
> +	struct amdgpu_device *adev = ring->adev;
> +	unsigned vmhub = ring->funcs->vmhub;
> +	struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
> +	struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
> +	struct fence *updates = sync->last_vm_update;
> +	int r = 0;
> +	struct fence *flushed, *tmp;
> +	bool needs_flush = false;
> +
> +	mutex_lock(&id_mgr->lock);
> +	if (amdgpu_vm_had_gpu_reset(adev, id))
> +		needs_flush = true;
> +
> +	flushed  = id->flushed_updates;
> +	if (updates && (!flushed || updates->context != flushed->context ||
> +			fence_is_later(updates, flushed)))
> +		needs_flush = true;
> +	if (needs_flush) {
> +		tmp = amdgpu_sync_get_fence(&id->active);
> +		if (tmp) {
> +			r = amdgpu_sync_fence(adev, sync, tmp);
> +			fence_put(tmp);
> +			mutex_unlock(&id_mgr->lock);
> +			return r;
> +		}
> +	}
> +
> +	/* Good we can use this VMID. Remember this submission as
> +	* user of the VMID.
> +	*/
> +	r = amdgpu_sync_fence(ring->adev, &id->active, fence);
> +	if (r)
> +		goto out;
> +
> +	if (updates && (!flushed || updates->context != flushed->context ||
> +			fence_is_later(updates, flushed))) {
> +		fence_put(id->flushed_updates);
> +		id->flushed_updates = fence_get(updates);
> +	}
> +	id->pd_gpu_addr = job->vm_pd_addr;
> +	id->current_gpu_reset_count = atomic_read(&adev->gpu_reset_counter);
> +	atomic64_set(&id->owner, vm->client_id);
> +	job->vm_needs_flush = needs_flush;
> +
> +	job->vm_id = id - id_mgr->ids;
> +	trace_amdgpu_vm_grab_id(vm, ring, job);
> +out:
> +	mutex_unlock(&id_mgr->lock);
> +	return r;
> +}
> +
>   /**
>    * amdgpu_vm_grab_id - allocate the next free VMID
>    *
> @@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>   	unsigned i;
>   	int r = 0;
>   
> +	if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
> +		return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
> +						     fence, job);

That is racy as well. The reserved VMID could be unreserved while we try 
to grab it, leading to a NULL pointer dereference in 
amdgpu_vm_grab_dedicated_vmid().

Christian.

> +
>   	fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_KERNEL);
>   	if (!fences)
>   		return -ENOMEM;
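The unreserve race Christian describes can be sketched outside the kernel: the fix is to re-check the dedicated-VMID pointer after taking the same lock the unreserve path holds, rather than testing it beforehand. A minimal user-space illustration with pthread stand-ins (the names and types here are hypothetical, not the actual amdgpu code):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical stand-in for struct amdgpu_vm_id; not the real driver type. */
struct vmid { int num; };

static pthread_mutex_t id_mgr_lock = PTHREAD_MUTEX_INITIALIZER;

/* Cleared to NULL by the unreserve path, under id_mgr_lock. */
static struct vmid *dedicated_vmid;

/* Safe pattern: take the lock first, then re-check the pointer, so an
 * unreserve racing with us cannot leave us dereferencing NULL. */
static int grab_dedicated_vmid(int *out_num)
{
	int r = -1;

	pthread_mutex_lock(&id_mgr_lock);
	if (dedicated_vmid) {
		*out_num = dedicated_vmid->num;
		r = 0;
	}
	pthread_mutex_unlock(&id_mgr_lock);
	return r; /* caller falls back to the normal LRU grab on -1 */
}
```

The patch instead calls amdgpu_vm_dedicated_vmid_ready() before taking id_mgr->lock, which is exactly the window Christian is pointing at.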




* [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
       [not found] ` <1493205039-3721-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-26 11:10   ` Chunming Zhou
       [not found]     ` <1493205039-3721-6-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Chunming Zhou @ 2017-04-26 11:10 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

v2: wait on sync only when a flush is needed

Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 ++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 214ac50..bce7701 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -402,6 +402,63 @@ static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
 	return !!vm->dedicated_vmid[vmhub];
 }
 
+static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
+					 struct amdgpu_ring *ring,
+					 struct amdgpu_sync *sync,
+					 struct fence *fence,
+					 struct amdgpu_job *job)
+{
+	struct amdgpu_device *adev = ring->adev;
+	unsigned vmhub = ring->funcs->vmhub;
+	struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
+	struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
+	struct fence *updates = sync->last_vm_update;
+	int r = 0;
+	struct fence *flushed, *tmp;
+	bool needs_flush = false;
+
+	mutex_lock(&id_mgr->lock);
+	if (amdgpu_vm_had_gpu_reset(adev, id))
+		needs_flush = true;
+
+	flushed  = id->flushed_updates;
+	if (updates && (!flushed || updates->context != flushed->context ||
+			fence_is_later(updates, flushed)))
+		needs_flush = true;
+	if (needs_flush) {
+		tmp = amdgpu_sync_get_fence(&id->active);
+		if (tmp) {
+			r = amdgpu_sync_fence(adev, sync, tmp);
+			fence_put(tmp);
+			mutex_unlock(&id_mgr->lock);
+			return r;
+		}
+	}
+
+	/* Good we can use this VMID. Remember this submission as
+	* user of the VMID.
+	*/
+	r = amdgpu_sync_fence(ring->adev, &id->active, fence);
+	if (r)
+		goto out;
+
+	if (updates && (!flushed || updates->context != flushed->context ||
+			fence_is_later(updates, flushed))) {
+		fence_put(id->flushed_updates);
+		id->flushed_updates = fence_get(updates);
+	}
+	id->pd_gpu_addr = job->vm_pd_addr;
+	id->current_gpu_reset_count = atomic_read(&adev->gpu_reset_counter);
+	atomic64_set(&id->owner, vm->client_id);
+	job->vm_needs_flush = needs_flush;
+
+	job->vm_id = id - id_mgr->ids;
+	trace_amdgpu_vm_grab_id(vm, ring, job);
+out:
+	mutex_unlock(&id_mgr->lock);
+	return r;
+}
+
 /**
  * amdgpu_vm_grab_id - allocate the next free VMID
  *
@@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
 	unsigned i;
 	int r = 0;
 
+	if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
+		return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
+						     fence, job);
+
 	fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_KERNEL);
 	if (!fences)
 		return -ENOMEM;
-- 
1.9.1
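The flush decision in amdgpu_vm_grab_dedicated_vmid() above reduces to a predicate over two fences: flush when there are page-table updates the VMID has not yet seen. A toy user-space model of that predicate (the fence struct here is hypothetical, just a context id plus sequence number, not the kernel's struct fence):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy fence: a context id plus a per-context sequence number. */
struct fence {
	uint64_t context;
	uint64_t seqno;
};

/* Ordering is only meaningful within one fence context. */
static bool fence_is_later(const struct fence *a, const struct fence *b)
{
	return a->seqno > b->seqno;
}

/* Mirrors the patch's condition: a flush is needed when updates exist and
 * either nothing was flushed yet, the last-flushed fence belongs to a
 * different context, or the updates are later than what was flushed. */
static bool needs_vm_flush(const struct fence *updates,
			   const struct fence *flushed)
{
	return updates && (!flushed ||
			   updates->context != flushed->context ||
			   fence_is_later(updates, flushed));
}
```

In the patch, when this predicate is true the job additionally synchronizes to the VMID's active fences before flushing.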



end of thread (newest message: 2017-04-27  9:14 UTC)

Thread overview: 29+ messages
2017-04-24  5:57 [PATCH 0/6] *** Dedicated vmid per process v2 *** Chunming Zhou
     [not found] ` <1493013460-13344-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2017-04-24  5:57   ` [PATCH 1/6] drm/amdgpu: add vm ioctl Chunming Zhou
     [not found]     ` <1493013460-13344-2-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2017-04-26  7:09       ` Zhang, Jerry (Junwei)
     [not found]         ` <590047A5.1070807-5C7GfCeVMHo@public.gmane.org>
2017-04-26  8:47           ` Christian König
2017-04-24  5:57   ` [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct Chunming Zhou
     [not found]     ` <1493013460-13344-3-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2017-04-26  7:09       ` Zhang, Jerry (Junwei)
     [not found]         ` <590047B3.7050109-5C7GfCeVMHo@public.gmane.org>
2017-04-26  8:49           ` Christian König
     [not found]             ` <291734a9-5306-f3e3-219c-c53cbf433358-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2017-04-26  9:05               ` zhoucm1
     [not found]                 ` <590062C5.6020402-5C7GfCeVMHo@public.gmane.org>
2017-04-26  9:10                   ` Christian König
     [not found]                     ` <05c4fcec-9dae-24ac-4a5d-26e0fd5bc148-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2017-04-26  9:14                       ` zhoucm1
     [not found]                         ` <59006508.7080907-5C7GfCeVMHo@public.gmane.org>
2017-04-26  9:45                           ` Christian König
     [not found]                             ` <f41de8f1-11d9-a727-c34f-d218b08f9047-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2017-04-26  9:46                               ` zhoucm1
2017-04-24  5:57   ` [PATCH 3/6] drm/amdgpu: reserve vmid by vm ioctl Chunming Zhou
     [not found]     ` <1493013460-13344-4-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2017-04-26  7:09       ` Zhang, Jerry (Junwei)
     [not found]         ` <590047C6.50005-5C7GfCeVMHo@public.gmane.org>
2017-04-26  8:51           ` Christian König
     [not found]             ` <238b3d86-743a-4ca9-c008-92ae4e1ec79c-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2017-04-26  9:04               ` Zhang, Jerry (Junwei)
     [not found]                 ` <5900629A.20701-5C7GfCeVMHo@public.gmane.org>
2017-04-26  9:08                   ` Christian König
2017-04-24  5:57   ` [PATCH 4/6] drm/amdgpu: add limitation for dedicated vm number v2 Chunming Zhou
     [not found]     ` <1493013460-13344-5-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2017-04-26  7:10       ` Zhang, Jerry (Junwei)
     [not found]         ` <590047D0.7080901-5C7GfCeVMHo@public.gmane.org>
2017-04-26  8:53           ` Christian König
2017-04-24  5:57   ` [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2 Chunming Zhou
2017-04-24  5:57   ` [PATCH 6/6] drm/amdgpu: bump module version for reserved vmid Chunming Zhou
2017-04-25  9:07   ` [PATCH 0/6] *** Dedicated vmid per process v2 *** zhoucm1
     [not found]     ` <58FF11E7.6040607-5C7GfCeVMHo@public.gmane.org>
2017-04-25 10:02       ` Christian König
2017-04-26 11:10 [PATCH 0/6 v3] *** Dedicated vmid per process v3 *** Chunming Zhou
     [not found] ` <1493205039-3721-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2017-04-26 11:10   ` [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2 Chunming Zhou
     [not found]     ` <1493205039-3721-6-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2017-04-26 12:57       ` Christian König
2017-04-27  2:52       ` Zhang, Jerry (Junwei)
     [not found]         ` <59015CD8.3000303-5C7GfCeVMHo@public.gmane.org>
2017-04-27  4:42           ` zhoucm1
     [not found]             ` <590176A2.70304-5C7GfCeVMHo@public.gmane.org>
2017-04-27  9:14               ` Christian König
