* [PATCH 0/6 v3] *** Dedicated vmid per process v3 ***
From: Chunming Zhou @ 2017-04-26 11:10 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou


The current kernel implementation grabs an idle VMID from the pool when emitting a job, which means:

    Back-to-back submissions from one process may use different VMIDs.
    Submissions to different queues from a single process may use different VMIDs.

This works well in most cases, but it cannot work for SQ thread trace (SQTT) capture.

The VMID of the submission that sets SQTT_BASE, which refers to the address of the trace buffer, is stored in the shader engine.

If the profiling application has to use different VMIDs to submit IBs during its life cycle:

    Some of the trace is not captured, since the jobs were actually submitted with a different VMID.
    Part of the captured trace may come from a different application that accidentally submitted jobs with the owner's VMID.

V2:
1. address Christian's comments:
	a. drop the context flag for tagging the process; instead, add a vm ioctl.
	b. change the order of the patches.
	c. only do the sync wait when a vm flush is needed.

2. address Alex's comments:
	bump the module version

V3:
  address Jerry's and Christian's comments,
  and only reserve the gfxhub vmid
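
For reference, a userspace profiler would drive the new interface roughly
as follows (just a sketch against the uapi added in patch 1: the
amdgpu_vmid_reserve() helper name is made up, include paths are
simplified, and libdrm's drmCommandWriteRead() does the ioctl plumbing):

	#include <stdbool.h>
	#include <string.h>
	#include <xf86drm.h>
	#include <amdgpu_drm.h>

	/* Reserve (or unreserve) a dedicated vmid for this DRM fd. */
	static int amdgpu_vmid_reserve(int fd, bool reserve)
	{
		union drm_amdgpu_vm args;

		memset(&args, 0, sizeof(args));
		args.in.op = reserve ? AMDGPU_VM_OP_RESERVE_VMID :
				       AMDGPU_VM_OP_UNRESERVE_VMID;

		return drmCommandWriteRead(fd, DRM_AMDGPU_VM,
					   &args, sizeof(args));
	}

The profiler would call amdgpu_vmid_reserve(fd, true) once before
starting an SQTT capture and amdgpu_vmid_reserve(fd, false) when the
capture is done, so that every submission in between keeps the same vmid.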


Chunming Zhou (6):
  drm/amdgpu: add vm ioctl
  drm/amdgpu: add dedicated vmid field in vm struct
  drm/amdgpu: reserve/unreserve vmid by vm ioctl v3
  drm/amdgpu: add limitation for dedicated vm number v3
  drm/amdgpu: implement grab dedicated vmid V2
  drm/amdgpu: bump module version for reserved vmid

 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 148 +++++++++++++++++++++++++++++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |   6 ++
 include/uapi/drm/amdgpu_drm.h           |  22 +++++
 5 files changed, 178 insertions(+), 2 deletions(-)

-- 
1.9.1



* [PATCH 1/6] drm/amdgpu: add vm ioctl
From: Chunming Zhou @ 2017-04-26 11:10 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

It will be used for reserving a vmid.

Change-Id: Ib7169ea999690c8e82d0dcbccdd2d97760c0270a
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 18 ++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  1 +
 include/uapi/drm/amdgpu_drm.h           | 22 ++++++++++++++++++++++
 4 files changed, 42 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index 052e6a5..8ea33c8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -1062,6 +1062,7 @@ int amdgpu_get_vblank_timestamp_kms(struct drm_device *dev, unsigned int pipe,
 const struct drm_ioctl_desc amdgpu_ioctls_kms[] = {
 	DRM_IOCTL_DEF_DRV(AMDGPU_GEM_CREATE, amdgpu_gem_create_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(AMDGPU_CTX, amdgpu_ctx_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(AMDGPU_VM, amdgpu_vm_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(AMDGPU_BO_LIST, amdgpu_bo_list_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	/* KMS */
 	DRM_IOCTL_DEF_DRV(AMDGPU_GEM_MMAP, amdgpu_gem_mmap_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index a8cd23b..8698177 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2379,3 +2379,21 @@ void amdgpu_vm_manager_fini(struct amdgpu_device *adev)
 		}
 	}
 }
+
+int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+{
+	union drm_amdgpu_vm *args = data;
+	struct amdgpu_device *adev = dev->dev_private;
+	struct amdgpu_fpriv *fpriv = filp->driver_priv;
+
+	switch (args->in.op) {
+	case AMDGPU_VM_OP_RESERVE_VMID:
+	case AMDGPU_VM_OP_UNRESERVE_VMID:
+		return -EINVAL;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 8eba2d3..16e0aaa 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -248,5 +248,6 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
 void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
 		      struct amdgpu_bo_va *bo_va);
 void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint64_t vm_size);
+int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp);
 
 #endif
diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index b98b371..1c95775 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -51,6 +51,7 @@
 #define DRM_AMDGPU_GEM_OP		0x10
 #define DRM_AMDGPU_GEM_USERPTR		0x11
 #define DRM_AMDGPU_WAIT_FENCES		0x12
+#define DRM_AMDGPU_VM			0x13
 
 /* hybrid specific ioctls */
 #define DRM_AMDGPU_SEM			0x5b
@@ -71,6 +72,7 @@
 #define DRM_IOCTL_AMDGPU_GEM_OP		DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_OP, struct drm_amdgpu_gem_op)
 #define DRM_IOCTL_AMDGPU_GEM_USERPTR	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_USERPTR, struct drm_amdgpu_gem_userptr)
 #define DRM_IOCTL_AMDGPU_WAIT_FENCES	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_WAIT_FENCES, union drm_amdgpu_wait_fences)
+#define DRM_IOCTL_AMDGPU_VM		DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_VM, union drm_amdgpu_vm)
 
 /* hybrid specific ioctls */
 #define DRM_IOCTL_AMDGPU_GEM_DGMA	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_DGMA, struct drm_amdgpu_gem_dgma)
@@ -212,6 +214,26 @@ struct drm_amdgpu_ctx_in {
 	union drm_amdgpu_ctx_out out;
 };
 
+/* vm ioctl */
+#define AMDGPU_VM_OP_RESERVE_VMID	1
+#define AMDGPU_VM_OP_UNRESERVE_VMID	2
+
+struct drm_amdgpu_vm_in {
+	/** AMDGPU_VM_OP_* */
+	__u32	op;
+	__u32	flags;
+};
+
+struct drm_amdgpu_vm_out {
+	/** For future use, no flags defined so far */
+	__u64	flags;
+};
+
+union drm_amdgpu_vm {
+	struct drm_amdgpu_vm_in in;
+	struct drm_amdgpu_vm_out out;
+};
+
 /* sem related */
 #define AMDGPU_SEM_OP_CREATE_SEM        1
 #define AMDGPU_SEM_OP_WAIT_SEM	        2
-- 
1.9.1


* [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct
From: Chunming Zhou @ 2017-04-26 11:10 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: Id728e20366c8a1ae90d4e901dc80e136e2a613bb
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++++++++++++++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 ++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 8698177..ec87d64 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2163,10 +2163,12 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 	unsigned ring_instance;
 	struct amdgpu_ring *ring;
 	struct amd_sched_rq *rq;
-	int r;
+	int r, i;
 
 	vm->va = RB_ROOT;
 	vm->client_id = atomic64_inc_return(&adev->vm_manager.client_counter);
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
+		vm->dedicated_vmid[i] = NULL;
 	spin_lock_init(&vm->status_lock);
 	INIT_LIST_HEAD(&vm->invalidated);
 	INIT_LIST_HEAD(&vm->cleared);
@@ -2270,6 +2272,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 {
 	struct amdgpu_bo_va_mapping *mapping, *tmp;
 	bool prt_fini_needed = !!adev->gart.gart_funcs->set_prt;
+	int i;
 
 	if (vm->is_kfd_vm) {
 		struct amdgpu_vm_id_manager *id_mgr =
@@ -2313,6 +2316,18 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 
 	amdgpu_vm_free_levels(&vm->root);
 	fence_put(vm->last_dir_update);
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
+		struct amdgpu_vm_id_manager *id_mgr =
+			&adev->vm_manager.id_mgr[i];
+
+		mutex_lock(&id_mgr->lock);
+		if (vm->dedicated_vmid[i]) {
+			list_add(&vm->dedicated_vmid[i]->list,
+				 &id_mgr->ids_lru);
+			vm->dedicated_vmid[i] = NULL;
+		}
+		mutex_unlock(&id_mgr->lock);
+	}
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 16e0aaa..97ad67d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -123,6 +123,8 @@ struct amdgpu_vm {
 
 	/* client id */
 	u64                     client_id;
+	/* dedicated vmid */
+	struct amdgpu_vm_id	*dedicated_vmid[AMDGPU_MAX_VMHUBS];
 	/* each VM will map on CSA */
 	struct amdgpu_bo_va *csa_bo_va;
 
-- 
1.9.1


* [PATCH 3/6] drm/amdgpu: reserve/unreserve vmid by vm ioctl v3
From: Chunming Zhou @ 2017-04-26 11:10 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

add reserve/unreserve vmid functions.
v3:
only reserve vmid from gfxhub

Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 70 +++++++++++++++++++++++++++-------
 1 file changed, 57 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index ec87d64..a783e8e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -397,6 +397,11 @@ static bool amdgpu_vm_had_gpu_reset(struct amdgpu_device *adev,
 		atomic_read(&adev->gpu_reset_counter);
 }
 
+static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
+{
+	return !!vm->dedicated_vmid[vmhub];
+}
+
 /**
  * amdgpu_vm_grab_id - allocate the next free VMID
  *
@@ -546,6 +551,47 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
 	return r;
 }
 
+static void amdgpu_vm_free_dedicated_vmid(struct amdgpu_device *adev,
+					  struct amdgpu_vm *vm,
+					  unsigned vmhub)
+{
+	struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
+
+	mutex_lock(&id_mgr->lock);
+	if (vm->dedicated_vmid[vmhub]) {
+		list_add(&vm->dedicated_vmid[vmhub]->list,
+			&id_mgr->ids_lru);
+		vm->dedicated_vmid[vmhub] = NULL;
+	}
+	mutex_unlock(&id_mgr->lock);
+}
+
+static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
+					  struct amdgpu_vm *vm,
+					  unsigned vmhub)
+{
+	struct amdgpu_vm_id_manager *id_mgr;
+	struct amdgpu_vm_id *idle;
+	int r;
+
+	id_mgr = &adev->vm_manager.id_mgr[vmhub];
+	mutex_lock(&id_mgr->lock);
+	/* Select the first entry VMID */
+	idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, list);
+	list_del_init(&idle->list);
+	vm->dedicated_vmid[vmhub] = idle;
+	mutex_unlock(&id_mgr->lock);
+
+	r = amdgpu_sync_wait(&idle->active);
+	if (r)
+		goto err;
+
+	return 0;
+err:
+	amdgpu_vm_free_dedicated_vmid(adev, vm, vmhub);
+	return r;
+}
+
 static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
 {
 	struct amdgpu_device *adev = ring->adev;
@@ -2316,18 +2362,8 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 
 	amdgpu_vm_free_levels(&vm->root);
 	fence_put(vm->last_dir_update);
-	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
-		struct amdgpu_vm_id_manager *id_mgr =
-			&adev->vm_manager.id_mgr[i];
-
-		mutex_lock(&id_mgr->lock);
-		if (vm->dedicated_vmid[i]) {
-			list_add(&vm->dedicated_vmid[i]->list,
-				 &id_mgr->ids_lru);
-			vm->dedicated_vmid[i] = NULL;
-		}
-		mutex_unlock(&id_mgr->lock);
-	}
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
+		amdgpu_vm_free_dedicated_vmid(adev, vm, i);
 }
 
 /**
@@ -2400,11 +2436,19 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 	union drm_amdgpu_vm *args = data;
 	struct amdgpu_device *adev = dev->dev_private;
 	struct amdgpu_fpriv *fpriv = filp->driver_priv;
+	int r;
 
 	switch (args->in.op) {
 	case AMDGPU_VM_OP_RESERVE_VMID:
+		/* current, we only have requirement to reserve vmid from gfxhub */
+		if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm, 0)) {
+			r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm, 0);
+			if (r)
+				return r;
+		}
+		break;
 	case AMDGPU_VM_OP_UNRESERVE_VMID:
-		return -EINVAL;
+		amdgpu_vm_free_dedicated_vmid(adev, &fpriv->vm, 0);
 		break;
 	default:
 		return -EINVAL;
-- 
1.9.1


* [PATCH 4/6] drm/amdgpu: add limitation for dedicated vm number v3
From: Chunming Zhou @ 2017-04-26 11:10 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

v2: move #define to amdgpu_vm.h
v3: move reserved vmid counter to id_manager,
and increase counter before allocating vmid

Change-Id: Ie5958cf6dbdc1c8278e61d9158483472d6f5c6e3
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 8 ++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 +++
 2 files changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index a783e8e..214ac50 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -562,6 +562,7 @@ static void amdgpu_vm_free_dedicated_vmid(struct amdgpu_device *adev,
 		list_add(&vm->dedicated_vmid[vmhub]->list,
 			&id_mgr->ids_lru);
 		vm->dedicated_vmid[vmhub] = NULL;
+		atomic_dec(&id_mgr->reserved_vmid);
 	}
 	mutex_unlock(&id_mgr->lock);
 }
@@ -575,6 +576,12 @@ static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
 	int r;
 
 	id_mgr = &adev->vm_manager.id_mgr[vmhub];
+	if (atomic_inc_return(&id_mgr->reserved_vmid) >
+	    AMDGPU_VM_MAX_RESERVED_VMID) {
+		DRM_ERROR("Over limitation of reserved vmid\n");
+		atomic_dec(&id_mgr->reserved_vmid);
+		return -EINVAL;
+	}
 	mutex_lock(&id_mgr->lock);
 	/* Select the first entry VMID */
 	idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, list);
@@ -2383,6 +2390,7 @@ void amdgpu_vm_manager_init(struct amdgpu_device *adev)
 
 		mutex_init(&id_mgr->lock);
 		INIT_LIST_HEAD(&id_mgr->ids_lru);
+		atomic_set(&id_mgr->reserved_vmid, 0);
 
 		/* skip over VMID 0, since it is the system VM */
 		for (j = 1; j < id_mgr->num_ids; ++j) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 97ad67d..067c262 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -84,6 +84,8 @@
 
 /* hardcode that limit for now */
 #define AMDGPU_VA_RESERVED_SIZE			(8 << 20)
+/* max vmids dedicated for process */
+#define AMDGPU_VM_MAX_RESERVED_VMID	1
 
 struct amdgpu_vm_pt {
 	struct amdgpu_bo	*bo;
@@ -157,6 +159,7 @@ struct amdgpu_vm_id_manager {
 	unsigned		num_ids;
 	struct list_head	ids_lru;
 	struct amdgpu_vm_id	ids[AMDGPU_NUM_VM];
+	atomic_t		reserved_vmid;
 };
 
 struct amdgpu_vm_manager {
-- 
1.9.1


* [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
From: Chunming Zhou @ 2017-04-26 11:10 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

v2: only do the sync wait when a flush is needed

Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 ++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 214ac50..bce7701 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -402,6 +402,63 @@ static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
 	return !!vm->dedicated_vmid[vmhub];
 }
 
+static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
+					 struct amdgpu_ring *ring,
+					 struct amdgpu_sync *sync,
+					 struct fence *fence,
+					 struct amdgpu_job *job)
+{
+	struct amdgpu_device *adev = ring->adev;
+	unsigned vmhub = ring->funcs->vmhub;
+	struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
+	struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
+	struct fence *updates = sync->last_vm_update;
+	int r = 0;
+	struct fence *flushed, *tmp;
+	bool needs_flush = false;
+
+	mutex_lock(&id_mgr->lock);
+	if (amdgpu_vm_had_gpu_reset(adev, id))
+		needs_flush = true;
+
+	flushed  = id->flushed_updates;
+	if (updates && (!flushed || updates->context != flushed->context ||
+			fence_is_later(updates, flushed)))
+		needs_flush = true;
+	if (needs_flush) {
+		tmp = amdgpu_sync_get_fence(&id->active);
+		if (tmp) {
+			r = amdgpu_sync_fence(adev, sync, tmp);
+			fence_put(tmp);
+			mutex_unlock(&id_mgr->lock);
+			return r;
+		}
+	}
+
+	/* Good we can use this VMID. Remember this submission as
+	* user of the VMID.
+	*/
+	r = amdgpu_sync_fence(ring->adev, &id->active, fence);
+	if (r)
+		goto out;
+
+	if (updates && (!flushed || updates->context != flushed->context ||
+			fence_is_later(updates, flushed))) {
+		fence_put(id->flushed_updates);
+		id->flushed_updates = fence_get(updates);
+	}
+	id->pd_gpu_addr = job->vm_pd_addr;
+	id->current_gpu_reset_count = atomic_read(&adev->gpu_reset_counter);
+	atomic64_set(&id->owner, vm->client_id);
+	job->vm_needs_flush = needs_flush;
+
+	job->vm_id = id - id_mgr->ids;
+	trace_amdgpu_vm_grab_id(vm, ring, job);
+out:
+	mutex_unlock(&id_mgr->lock);
+	return r;
+}
+
 /**
  * amdgpu_vm_grab_id - allocate the next free VMID
  *
@@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
 	unsigned i;
 	int r = 0;
 
+	if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
+		return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
+						     fence, job);
+
 	fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_KERNEL);
 	if (!fences)
 		return -ENOMEM;
-- 
1.9.1


* [PATCH 6/6] drm/amdgpu: bump module version for reserved vmid
From: Chunming Zhou @ 2017-04-26 11:10 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: I1065e0430ed44f7ee6c29214b72e35a7343ea02b
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 55322b4..6799829 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -64,9 +64,10 @@
  * - 3.12.0 - Add query for double offchip LDS buffers
  * - 3.13.0 - Add PRT support
  * - 3.14.0 - Fix race in amdgpu_ctx_get_fence() and note new functionality
+ * - 3.15.0 - Add reserved vmid support
  */
 #define KMS_DRIVER_MAJOR	3
-#define KMS_DRIVER_MINOR	14
+#define KMS_DRIVER_MINOR	15
 #define KMS_DRIVER_PATCHLEVEL	0
 
 int amdgpu_vram_limit = 0;
-- 
1.9.1


* Re: [PATCH 3/6] drm/amdgpu: reserve/unreserve vmid by vm ioctl v3
From: Christian König @ 2017-04-26 12:26 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 26.04.2017 13:10, Chunming Zhou wrote:
> add reserve/unreserve vmid functions.
> v3:
> only reserve vmid from gfxhub
>
> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 70 +++++++++++++++++++++++++++-------
>   1 file changed, 57 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index ec87d64..a783e8e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -397,6 +397,11 @@ static bool amdgpu_vm_had_gpu_reset(struct amdgpu_device *adev,
>   		atomic_read(&adev->gpu_reset_counter);
>   }
>   
> +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
> +{
> +	return !!vm->dedicated_vmid[vmhub];
> +}
> +
>   /**
>    * amdgpu_vm_grab_id - allocate the next free VMID
>    *
> @@ -546,6 +551,47 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>   	return r;
>   }
>   
> +static void amdgpu_vm_free_dedicated_vmid(struct amdgpu_device *adev,
> +					  struct amdgpu_vm *vm,
> +					  unsigned vmhub)
> +{
> +	struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
> +
> +	mutex_lock(&id_mgr->lock);
> +	if (vm->dedicated_vmid[vmhub]) {
> +		list_add(&vm->dedicated_vmid[vmhub]->list,
> +			&id_mgr->ids_lru);
> +		vm->dedicated_vmid[vmhub] = NULL;
> +	}
> +	mutex_unlock(&id_mgr->lock);
> +}
> +
> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
> +					  struct amdgpu_vm *vm,
> +					  unsigned vmhub)
> +{
> +	struct amdgpu_vm_id_manager *id_mgr;
> +	struct amdgpu_vm_id *idle;
> +	int r;
> +
> +	id_mgr = &adev->vm_manager.id_mgr[vmhub];
> +	mutex_lock(&id_mgr->lock);
> +	/* Select the first entry VMID */
> +	idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, list);
> +	list_del_init(&idle->list);
> +	vm->dedicated_vmid[vmhub] = idle;

We need a check here if there isn't an ID already reserved for the VM.

Additional to that I would still rename the field to reserved_vmid, that 
describes better what it is actually doing.

> +	mutex_unlock(&id_mgr->lock);
> +
> +	r = amdgpu_sync_wait(&idle->active);
> +	if (r)
> +		goto err;

This is racy. A command submission could happen while we wait for the id 
to become idle.

The easiest way to solve this is to hold the lock while waiting for the 
ID to become available.

> +
> +	return 0;
> +err:
> +	amdgpu_vm_free_dedicated_vmid(adev, vm, vmhub);
> +	return r;
> +}
> +
>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>   {
>   	struct amdgpu_device *adev = ring->adev;
> @@ -2316,18 +2362,8 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>   
>   	amdgpu_vm_free_levels(&vm->root);
>   	fence_put(vm->last_dir_update);
> -	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
> -		struct amdgpu_vm_id_manager *id_mgr =
> -			&adev->vm_manager.id_mgr[i];
> -
> -		mutex_lock(&id_mgr->lock);
> -		if (vm->dedicated_vmid[i]) {
> -			list_add(&vm->dedicated_vmid[i]->list,
> -				 &id_mgr->ids_lru);
> -			vm->dedicated_vmid[i] = NULL;
> -		}
> -		mutex_unlock(&id_mgr->lock);
> -	}
> +	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
> +		amdgpu_vm_free_dedicated_vmid(adev, vm, i);
>   }
>   
>   /**
> @@ -2400,11 +2436,19 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>   	union drm_amdgpu_vm *args = data;
>   	struct amdgpu_device *adev = dev->dev_private;
>   	struct amdgpu_fpriv *fpriv = filp->driver_priv;
> +	int r;
>   
>   	switch (args->in.op) {
>   	case AMDGPU_VM_OP_RESERVE_VMID:
> +		/* current, we only have requirement to reserve vmid from gfxhub */
> +		if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm, 0)) {
> +			r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm, 0);

This is racy as well.

Two threads could try to reserve a VMID at the same time. You need to 
integrate the check if it's already allocated into 
amdgpu_vm_alloc_dedicated_vmid().
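
Putting those points together, the allocation path could look roughly
like this (just a sketch of the idea, not a tested patch): keep holding
id_mgr->lock over both the already-reserved check and the wait for the
ID to become idle:

	static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
						  struct amdgpu_vm *vm,
						  unsigned vmhub)
	{
		struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
		struct amdgpu_vm_id *idle;
		int r = 0;

		mutex_lock(&id_mgr->lock);
		/* integrated check: a second thread simply sees the ID
		 * that the first one already reserved */
		if (vm->dedicated_vmid[vmhub])
			goto unlock;

		/* Select the first entry VMID */
		idle = list_first_entry(&id_mgr->ids_lru,
					struct amdgpu_vm_id, list);
		list_del_init(&idle->list);

		/* still holding the lock, so no command submission can
		 * grab the ID while we wait for it to become idle */
		r = amdgpu_sync_wait(&idle->active);
		if (r) {
			list_add(&idle->list, &id_mgr->ids_lru);
			goto unlock;
		}
		vm->dedicated_vmid[vmhub] = idle;

	unlock:
		mutex_unlock(&id_mgr->lock);
		return r;
	}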

Regards,
Christian.

> +			if (r)
> +				return r;
> +		}
> +		break;
>   	case AMDGPU_VM_OP_UNRESERVE_VMID:
> -		return -EINVAL;
> +		amdgpu_vm_free_dedicated_vmid(adev, &fpriv->vm, 0);
>   		break;
>   	default:
>   		return -EINVAL;



* Re: [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
From: Christian König @ 2017-04-26 12:57 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 26.04.2017 13:10, Chunming Zhou wrote:
> v2: only do the sync wait when a flush is needed
>
> Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 ++++++++++++++++++++++++++++++++++
>   1 file changed, 61 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 214ac50..bce7701 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -402,6 +402,63 @@ static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>   	return !!vm->dedicated_vmid[vmhub];
>   }
>   
> +static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
> +					 struct amdgpu_ring *ring,
> +					 struct amdgpu_sync *sync,
> +					 struct fence *fence,
> +					 struct amdgpu_job *job)
> +{
> +	struct amdgpu_device *adev = ring->adev;
> +	unsigned vmhub = ring->funcs->vmhub;
> +	struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
> +	struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
> +	struct fence *updates = sync->last_vm_update;
> +	int r = 0;
> +	struct fence *flushed, *tmp;
> +	bool needs_flush = false;
> +
> +	mutex_lock(&id_mgr->lock);
> +	if (amdgpu_vm_had_gpu_reset(adev, id))
> +		needs_flush = true;
> +
> +	flushed  = id->flushed_updates;
> +	if (updates && (!flushed || updates->context != flushed->context ||
> +			fence_is_later(updates, flushed)))
> +		needs_flush = true;
> +	if (needs_flush) {
> +		tmp = amdgpu_sync_get_fence(&id->active);
> +		if (tmp) {
> +			r = amdgpu_sync_fence(adev, sync, tmp);
> +			fence_put(tmp);
> +			mutex_unlock(&id_mgr->lock);
> +			return r;
> +		}
> +	}
> +
> +	/* Good we can use this VMID. Remember this submission as
> +	* user of the VMID.
> +	*/
> +	r = amdgpu_sync_fence(ring->adev, &id->active, fence);
> +	if (r)
> +		goto out;
> +
> +	if (updates && (!flushed || updates->context != flushed->context ||
> +			fence_is_later(updates, flushed))) {
> +		fence_put(id->flushed_updates);
> +		id->flushed_updates = fence_get(updates);
> +	}
> +	id->pd_gpu_addr = job->vm_pd_addr;
> +	id->current_gpu_reset_count = atomic_read(&adev->gpu_reset_counter);
> +	atomic64_set(&id->owner, vm->client_id);
> +	job->vm_needs_flush = needs_flush;
> +
> +	job->vm_id = id - id_mgr->ids;
> +	trace_amdgpu_vm_grab_id(vm, ring, job);
> +out:
> +	mutex_unlock(&id_mgr->lock);
> +	return r;
> +}
> +
>   /**
>    * amdgpu_vm_grab_id - allocate the next free VMID
>    *
> @@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>   	unsigned i;
>   	int r = 0;
>   
> +	if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
> +		return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
> +						     fence, job);

That is racy as well. The reserved VMID could be unreserved while we try
to grab it, leading to a NULL pointer dereference in
amdgpu_vm_grab_dedicated_vmid().
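
One way to close that hole (again just a sketch of the idea): make the
decision under id_mgr->lock in amdgpu_vm_grab_id() instead of testing
the pointer before taking the lock:

	mutex_lock(&id_mgr->lock);
	if (vm->dedicated_vmid[vmhub]) {
		/* the reserved ID can no longer vanish under us here */
		r = amdgpu_vm_grab_dedicated_vmid_locked(vm, ring, sync,
							 fence, job);
		mutex_unlock(&id_mgr->lock);
		return r;
	}
	mutex_unlock(&id_mgr->lock);
	/* otherwise fall through to the normal LRU path */

where amdgpu_vm_grab_dedicated_vmid_locked() would be a variant of the
function above that expects the caller to hold the lock (that name is
made up, it doesn't exist in the patch).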

Christian.

> +
>   	fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_KERNEL);
>   	if (!fences)
>   		return -ENOMEM;



* Re: [PATCH 3/6] drm/amdgpu: reserve/unreserve vmid by vm ioctl v3
From: zhoucm1 @ 2017-04-27  2:13 UTC (permalink / raw)
  To: Christian König, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2017-04-26 20:26, Christian König wrote:
> On 26.04.2017 13:10, Chunming Zhou wrote:
>> add reserve/unreserve vmid functions.
>> v3:
>> only reserve vmid from gfxhub
>>
>> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 70 
>> +++++++++++++++++++++++++++-------
>>   1 file changed, 57 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index ec87d64..a783e8e 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -397,6 +397,11 @@ static bool amdgpu_vm_had_gpu_reset(struct 
>> amdgpu_device *adev,
>>           atomic_read(&adev->gpu_reset_counter);
>>   }
>>   +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, 
>> unsigned vmhub)
>> +{
>> +    return !!vm->dedicated_vmid[vmhub];
>> +}
>> +
>>   /**
>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>    *
>> @@ -546,6 +551,47 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, 
>> struct amdgpu_ring *ring,
>>       return r;
>>   }
>>   +static void amdgpu_vm_free_dedicated_vmid(struct amdgpu_device *adev,
>> +                      struct amdgpu_vm *vm,
>> +                      unsigned vmhub)
>> +{
>> +    struct amdgpu_vm_id_manager *id_mgr = 
>> &adev->vm_manager.id_mgr[vmhub];
>> +
>> +    mutex_lock(&id_mgr->lock);
>> +    if (vm->dedicated_vmid[vmhub]) {
>> +        list_add(&vm->dedicated_vmid[vmhub]->list,
>> +            &id_mgr->ids_lru);
>> +        vm->dedicated_vmid[vmhub] = NULL;
>> +    }
>> +    mutex_unlock(&id_mgr->lock);
>> +}
>> +
>> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>> +                      struct amdgpu_vm *vm,
>> +                      unsigned vmhub)
>> +{
>> +    struct amdgpu_vm_id_manager *id_mgr;
>> +    struct amdgpu_vm_id *idle;
>> +    int r;
>> +
>> +    id_mgr = &adev->vm_manager.id_mgr[vmhub];
>> +    mutex_lock(&id_mgr->lock);
>> +    /* Select the first entry VMID */
>> +    idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, 
>> list);
>> +    list_del_init(&idle->list);
>> +    vm->dedicated_vmid[vmhub] = idle;
>
> We need a check here if there isn't an ID already reserved for the VM.
>
> Additional to that I would still rename the field to reserved_vmid, 
> that describes better what it is actually doing.
The vmid is global and is reserved from the global LRU, so it makes sense
to use the name 'reserved_vmid' there, like in patch #4.
The reserved vmid is then dedicated to one vm, so the field in the vm is
'dedicated_vmid', as here; that is my reason for the naming.

Anyway, the alternative is OK with me although I prefer mine. I hope we
can deliver this solution by today; many RGP issues depend on it.

Thanks,
David Zhou
>
>> +    mutex_unlock(&id_mgr->lock);
>> +
>> +    r = amdgpu_sync_wait(&idle->active);
>> +    if (r)
>> +        goto err;
>
> This is racy. A command submission could happen while we wait for the 
> id to become idle.
>
> The easiest way to solve this is to hold the lock while waiting for 
> the ID to become available.
>
>> +
>> +    return 0;
>> +err:
>> +    amdgpu_vm_free_dedicated_vmid(adev, vm, vmhub);
>> +    return r;
>> +}
>> +
>>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring 
>> *ring)
>>   {
>>       struct amdgpu_device *adev = ring->adev;
>> @@ -2316,18 +2362,8 @@ void amdgpu_vm_fini(struct amdgpu_device 
>> *adev, struct amdgpu_vm *vm)
>>         amdgpu_vm_free_levels(&vm->root);
>>       fence_put(vm->last_dir_update);
>> -    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>> -        struct amdgpu_vm_id_manager *id_mgr =
>> -            &adev->vm_manager.id_mgr[i];
>> -
>> -        mutex_lock(&id_mgr->lock);
>> -        if (vm->dedicated_vmid[i]) {
>> -            list_add(&vm->dedicated_vmid[i]->list,
>> -                 &id_mgr->ids_lru);
>> -            vm->dedicated_vmid[i] = NULL;
>> -        }
>> -        mutex_unlock(&id_mgr->lock);
>> -    }
>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>> +        amdgpu_vm_free_dedicated_vmid(adev, vm, i);
>>   }
>>     /**
>> @@ -2400,11 +2436,19 @@ int amdgpu_vm_ioctl(struct drm_device *dev, 
>> void *data, struct drm_file *filp)
>>       union drm_amdgpu_vm *args = data;
>>       struct amdgpu_device *adev = dev->dev_private;
>>       struct amdgpu_fpriv *fpriv = filp->driver_priv;
>> +    int r;
>>         switch (args->in.op) {
>>       case AMDGPU_VM_OP_RESERVE_VMID:
>> +        /* current, we only have requirement to reserve vmid from 
>> gfxhub */
>> +        if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm, 0)) {
>> +            r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm, 0);
>
> This is racy as well.
>
> Two threads could try to reserve a VMID at the same time. You need to 
> integrate the check if it's already allocated into 
> amdgpu_vm_alloc_dedicated_vmid().
>
> Regards,
> Christian.
>
>> +            if (r)
>> +                return r;
>> +        }
>> +        break;
>>       case AMDGPU_VM_OP_UNRESERVE_VMID:
>> -        return -EINVAL;
>> +        amdgpu_vm_free_dedicated_vmid(adev, &fpriv->vm, 0);
>>           break;
>>       default:
>>           return -EINVAL;
>
>


* Re: [PATCH 3/6] drm/amdgpu: reserve/unreserve vmid by vm ioctl v3
From: Zhang, Jerry (Junwei) @ 2017-04-27  2:18 UTC (permalink / raw)
  To: Christian König, Chunming Zhou,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 04/26/2017 08:26 PM, Christian König wrote:
> On 26.04.2017 13:10, Chunming Zhou wrote:
>> add reserve/unreserve vmid functions.
>> v3:
>> only reserve vmid from gfxhub
>>
>> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 70 +++++++++++++++++++++++++++-------
>>   1 file changed, 57 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index ec87d64..a783e8e 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -397,6 +397,11 @@ static bool amdgpu_vm_had_gpu_reset(struct amdgpu_device
>> *adev,
>>           atomic_read(&adev->gpu_reset_counter);
>>   }
>> +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned
>> vmhub)
>> +{
>> +    return !!vm->dedicated_vmid[vmhub];
>> +}
>> +
>>   /**
>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>    *
>> @@ -546,6 +551,47 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct
>> amdgpu_ring *ring,
>>       return r;
>>   }
>> +static void amdgpu_vm_free_dedicated_vmid(struct amdgpu_device *adev,
>> +                      struct amdgpu_vm *vm,
>> +                      unsigned vmhub)
>> +{
>> +    struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
>> +
>> +    mutex_lock(&id_mgr->lock);
>> +    if (vm->dedicated_vmid[vmhub]) {
>> +        list_add(&vm->dedicated_vmid[vmhub]->list,
>> +            &id_mgr->ids_lru);
>> +        vm->dedicated_vmid[vmhub] = NULL;
>> +    }
>> +    mutex_unlock(&id_mgr->lock);
>> +}
>> +
>> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>> +                      struct amdgpu_vm *vm,
>> +                      unsigned vmhub)
>> +{
>> +    struct amdgpu_vm_id_manager *id_mgr;
>> +    struct amdgpu_vm_id *idle;
>> +    int r;
>> +
>> +    id_mgr = &adev->vm_manager.id_mgr[vmhub];
>> +    mutex_lock(&id_mgr->lock);
>> +    /* Select the first entry VMID */
>> +    idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, list);
>> +    list_del_init(&idle->list);
>> +    vm->dedicated_vmid[vmhub] = idle;
>
> We need a check here if there isn't an ID already reserved for the VM.
>
> Additional to that I would still rename the field to reserved_vmid, that
> describes better what it is actually doing.
>
>> +    mutex_unlock(&id_mgr->lock);
>> +
>> +    r = amdgpu_sync_wait(&idle->active);
>> +    if (r)
>> +        goto err;
>
> This is racy. A command submission could happen while we wait for the id to
> become idle.
>
> The easiest way to solve this is to hold the lock while waiting for the ID to
> become available.
>
>> +
>> +    return 0;
>> +err:
>> +    amdgpu_vm_free_dedicated_vmid(adev, vm, vmhub);
>> +    return r;
>> +}
>> +
>>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>>   {
>>       struct amdgpu_device *adev = ring->adev;
>> @@ -2316,18 +2362,8 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct
>> amdgpu_vm *vm)
>>       amdgpu_vm_free_levels(&vm->root);
>>       fence_put(vm->last_dir_update);
>> -    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>> -        struct amdgpu_vm_id_manager *id_mgr =
>> -            &adev->vm_manager.id_mgr[i];
>> -
>> -        mutex_lock(&id_mgr->lock);
>> -        if (vm->dedicated_vmid[i]) {
>> -            list_add(&vm->dedicated_vmid[i]->list,
>> -                 &id_mgr->ids_lru);
>> -            vm->dedicated_vmid[i] = NULL;
>> -        }
>> -        mutex_unlock(&id_mgr->lock);
>> -    }
>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>> +        amdgpu_vm_free_dedicated_vmid(adev, vm, i);
>>   }
>>   /**
>> @@ -2400,11 +2436,19 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void
>> *data, struct drm_file *filp)
>>       union drm_amdgpu_vm *args = data;
>>       struct amdgpu_device *adev = dev->dev_private;
>>       struct amdgpu_fpriv *fpriv = filp->driver_priv;
>> +    int r;
>>       switch (args->in.op) {
>>       case AMDGPU_VM_OP_RESERVE_VMID:
>> +        /* current, we only have requirement to reserve vmid from gfxhub */
>> +        if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm, 0)) {
>> +            r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm, 0);
>
> This is racy as well.
>
> Two threads could try to reserve a VMID at the same time. You need to integrate
> the check if it's already allocated into amdgpu_vm_alloc_dedicated_vmid().

Apart from that, a small tip:
AMDGPU_GFXHUB instead of "0" looks more self-explanatory.
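
E.g. something like (assuming the existing AMDGPU_GFXHUB define):

	r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm, AMDGPU_GFXHUB);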

Jerry

>
> Regards,
> Christian.
>
>> +            if (r)
>> +                return r;
>> +        }
>> +        break;
>>       case AMDGPU_VM_OP_UNRESERVE_VMID:
>> -        return -EINVAL;
>> +        amdgpu_vm_free_dedicated_vmid(adev, &fpriv->vm, 0);
>>           break;
>>       default:
>>           return -EINVAL;
>
>

* Re: [PATCH 3/6] drm/amdgpu: reserve/unreserve vmid by vm ioctl v3
From: Zhang, Jerry (Junwei) @ 2017-04-27  2:24 UTC (permalink / raw)
  To: zhoucm1, Christian König, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 04/27/2017 10:13 AM, zhoucm1 wrote:
>
>
> On 2017-04-26 20:26, Christian König wrote:
>> On 26.04.2017 13:10, Chunming Zhou wrote:
>>> add reserve/unreserve vmid functions.
>>> v3:
>>> only reserve vmid from gfxhub
>>>
>>> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 70
>>> +++++++++++++++++++++++++++-------
>>>   1 file changed, 57 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index ec87d64..a783e8e 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -397,6 +397,11 @@ static bool amdgpu_vm_had_gpu_reset(struct
>>> amdgpu_device *adev,
>>>           atomic_read(&adev->gpu_reset_counter);
>>>   }
>>>   +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned
>>> vmhub)
>>> +{
>>> +    return !!vm->dedicated_vmid[vmhub];
>>> +}
>>> +
>>>   /**
>>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>>    *
>>> @@ -546,6 +551,47 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct
>>> amdgpu_ring *ring,
>>>       return r;
>>>   }
>>>   +static void amdgpu_vm_free_dedicated_vmid(struct amdgpu_device *adev,
>>> +                      struct amdgpu_vm *vm,
>>> +                      unsigned vmhub)
>>> +{
>>> +    struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>> +
>>> +    mutex_lock(&id_mgr->lock);
>>> +    if (vm->dedicated_vmid[vmhub]) {
>>> +        list_add(&vm->dedicated_vmid[vmhub]->list,
>>> +            &id_mgr->ids_lru);
>>> +        vm->dedicated_vmid[vmhub] = NULL;
>>> +    }
>>> +    mutex_unlock(&id_mgr->lock);
>>> +}
>>> +
>>> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>>> +                      struct amdgpu_vm *vm,
>>> +                      unsigned vmhub)
>>> +{
>>> +    struct amdgpu_vm_id_manager *id_mgr;
>>> +    struct amdgpu_vm_id *idle;
>>> +    int r;
>>> +
>>> +    id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>> +    mutex_lock(&id_mgr->lock);
>>> +    /* Select the first entry VMID */
>>> +    idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, list);
>>> +    list_del_init(&idle->list);
>>> +    vm->dedicated_vmid[vmhub] = idle;
>>
>> We need a check here if there isn't an ID already reserved for the VM.
>>
>> Additional to that I would still rename the field to reserved_vmid, that
>> describes better what it is actually doing.
> The vmid is global and is reserved from the global LRU, so it makes sense
> to use the name 'reserved_vmid' there, like in patch #4.
> The reserved vmid is then dedicated to one vm, so the field in the vm is
> 'dedicated_vmid', as here; that is my reason for the naming.

Just the same thing looked at from different views.

IMHO, it's a vmid reserved from the global group, so name it "reserved_vmid".
And in patch #4, where it constrains the number, possibly name it
"reserved_vmid_seq" or "reserved_vmid_ref".

Any of them looks reasonable, select the one you like. :)

Jerry

>
> Anyway, the alternative is OK with me although I prefer mine. I hope we
> can deliver this solution by today; many RGP issues depend on it.
>
> Thanks,
> David Zhou
>>
>>> +    mutex_unlock(&id_mgr->lock);
>>> +
>>> +    r = amdgpu_sync_wait(&idle->active);
>>> +    if (r)
>>> +        goto err;
>>
>> This is racy. A command submission could happen while we wait for the id to
>> become idle.
>>
>> The easiest way to solve this is to hold the lock while waiting for the ID to
>> become available.
>>
>>> +
>>> +    return 0;
>>> +err:
>>> +    amdgpu_vm_free_dedicated_vmid(adev, vm, vmhub);
>>> +    return r;
>>> +}
>>> +
>>>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>>>   {
>>>       struct amdgpu_device *adev = ring->adev;
>>> @@ -2316,18 +2362,8 @@ void amdgpu_vm_fini(struct amdgpu_device *adev,
>>> struct amdgpu_vm *vm)
>>>         amdgpu_vm_free_levels(&vm->root);
>>>       fence_put(vm->last_dir_update);
>>> -    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>>> -        struct amdgpu_vm_id_manager *id_mgr =
>>> -            &adev->vm_manager.id_mgr[i];
>>> -
>>> -        mutex_lock(&id_mgr->lock);
>>> -        if (vm->dedicated_vmid[i]) {
>>> -            list_add(&vm->dedicated_vmid[i]->list,
>>> -                 &id_mgr->ids_lru);
>>> -            vm->dedicated_vmid[i] = NULL;
>>> -        }
>>> -        mutex_unlock(&id_mgr->lock);
>>> -    }
>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>>> +        amdgpu_vm_free_dedicated_vmid(adev, vm, i);
>>>   }
>>>     /**
>>> @@ -2400,11 +2436,19 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void
>>> *data, struct drm_file *filp)
>>>       union drm_amdgpu_vm *args = data;
>>>       struct amdgpu_device *adev = dev->dev_private;
>>>       struct amdgpu_fpriv *fpriv = filp->driver_priv;
>>> +    int r;
>>>         switch (args->in.op) {
>>>       case AMDGPU_VM_OP_RESERVE_VMID:
>>> +        /* current, we only have requirement to reserve vmid from gfxhub */
>>> +        if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm, 0)) {
>>> +            r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm, 0);
>>
>> This is racy as well.
>>
>> Two threads could try to reserve a VMID at the same time. You need to
>> integrate the check if it's already allocated into
>> amdgpu_vm_alloc_dedicated_vmid().
>>
>> Regards,
>> Christian.
>>
>>> +            if (r)
>>> +                return r;
>>> +        }
>>> +        break;
>>>       case AMDGPU_VM_OP_UNRESERVE_VMID:
>>> -        return -EINVAL;
>>> +        amdgpu_vm_free_dedicated_vmid(adev, &fpriv->vm, 0);
>>>           break;
>>>       default:
>>>           return -EINVAL;
>>
>>
>

* Re: [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
From: Zhang, Jerry (Junwei) @ 2017-04-27  2:52 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 04/26/2017 07:10 PM, Chunming Zhou wrote:
> v2: only do the sync wait when a flush is needed
>
> Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 ++++++++++++++++++++++++++++++++++
>   1 file changed, 61 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 214ac50..bce7701 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -402,6 +402,63 @@ static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>   	return !!vm->dedicated_vmid[vmhub];
>   }
>
> +static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
> +					 struct amdgpu_ring *ring,
> +					 struct amdgpu_sync *sync,
> +					 struct fence *fence,
> +					 struct amdgpu_job *job)
> +{
> +	struct amdgpu_device *adev = ring->adev;
> +	unsigned vmhub = ring->funcs->vmhub;
> +	struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
> +	struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
> +	struct fence *updates = sync->last_vm_update;
> +	int r = 0;
> +	struct fence *flushed, *tmp;
> +	bool needs_flush = false;
> +
> +	mutex_lock(&id_mgr->lock);
> +	if (amdgpu_vm_had_gpu_reset(adev, id))
> +		needs_flush = true;
> +
> +	flushed  = id->flushed_updates;
> +	if (updates && (!flushed || updates->context != flushed->context ||
> +			fence_is_later(updates, flushed)))
> +		needs_flush = true;

Just a question:
Do we need to consider the concurrent flush for Vega10 here, like the grab id function does?

Jerry

> +	if (needs_flush) {
> +		tmp = amdgpu_sync_get_fence(&id->active);
> +		if (tmp) {
> +			r = amdgpu_sync_fence(adev, sync, tmp);
> +			fence_put(tmp);
> +			mutex_unlock(&id_mgr->lock);
> +			return r;
> +		}
> +	}
> +
> +	/* Good we can use this VMID. Remember this submission as
> +	* user of the VMID.
> +	*/
> +	r = amdgpu_sync_fence(ring->adev, &id->active, fence);
> +	if (r)
> +		goto out;
> +
> +	if (updates && (!flushed || updates->context != flushed->context ||
> +			fence_is_later(updates, flushed))) {
> +		fence_put(id->flushed_updates);
> +		id->flushed_updates = fence_get(updates);
> +	}
> +	id->pd_gpu_addr = job->vm_pd_addr;
> +	id->current_gpu_reset_count = atomic_read(&adev->gpu_reset_counter);
> +	atomic64_set(&id->owner, vm->client_id);
> +	job->vm_needs_flush = needs_flush;
> +
> +	job->vm_id = id - id_mgr->ids;
> +	trace_amdgpu_vm_grab_id(vm, ring, job);
> +out:
> +	mutex_unlock(&id_mgr->lock);
> +	return r;
> +}
> +
>   /**
>    * amdgpu_vm_grab_id - allocate the next free VMID
>    *
> @@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>   	unsigned i;
>   	int r = 0;
>
> +	if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
> +		return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
> +						     fence, job);
> +
>   	fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_KERNEL);
>   	if (!fences)
>   		return -ENOMEM;
>

* Re: [PATCH 3/6] drm/amdgpu: reserve/unreserve vmid by vm ioctl v3
From: zhoucm1 @ 2017-04-27  4:29 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei),
	Christian König, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2017-04-27 10:24, Zhang, Jerry (Junwei) wrote:
> On 04/27/2017 10:13 AM, zhoucm1 wrote:
>>
>>
>> On 2017-04-26 20:26, Christian König wrote:
>>> On 26.04.2017 13:10, Chunming Zhou wrote:
>>>> add reserve/unreserve vmid functions.
>>>> v3:
>>>> only reserve vmid from gfxhub
>>>>
>>>> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
>>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>>> ---
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 70
>>>> +++++++++++++++++++++++++++-------
>>>>   1 file changed, 57 insertions(+), 13 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> index ec87d64..a783e8e 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> @@ -397,6 +397,11 @@ static bool amdgpu_vm_had_gpu_reset(struct
>>>> amdgpu_device *adev,
>>>>           atomic_read(&adev->gpu_reset_counter);
>>>>   }
>>>>   +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, 
>>>> unsigned
>>>> vmhub)
>>>> +{
>>>> +    return !!vm->dedicated_vmid[vmhub];
>>>> +}
>>>> +
>>>>   /**
>>>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>>>    *
>>>> @@ -546,6 +551,47 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, 
>>>> struct
>>>> amdgpu_ring *ring,
>>>>       return r;
>>>>   }
>>>>   +static void amdgpu_vm_free_dedicated_vmid(struct amdgpu_device 
>>>> *adev,
>>>> +                      struct amdgpu_vm *vm,
>>>> +                      unsigned vmhub)
>>>> +{
>>>> +    struct amdgpu_vm_id_manager *id_mgr = 
>>>> &adev->vm_manager.id_mgr[vmhub];
>>>> +
>>>> +    mutex_lock(&id_mgr->lock);
>>>> +    if (vm->dedicated_vmid[vmhub]) {
>>>> + list_add(&vm->dedicated_vmid[vmhub]->list,
>>>> +            &id_mgr->ids_lru);
>>>> +        vm->dedicated_vmid[vmhub] = NULL;
>>>> +    }
>>>> +    mutex_unlock(&id_mgr->lock);
>>>> +}
>>>> +
>>>> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>>>> +                      struct amdgpu_vm *vm,
>>>> +                      unsigned vmhub)
>>>> +{
>>>> +    struct amdgpu_vm_id_manager *id_mgr;
>>>> +    struct amdgpu_vm_id *idle;
>>>> +    int r;
>>>> +
>>>> +    id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>>> +    mutex_lock(&id_mgr->lock);
>>>> +    /* Select the first entry VMID */
>>>> +    idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, 
>>>> list);
>>>> +    list_del_init(&idle->list);
>>>> +    vm->dedicated_vmid[vmhub] = idle;
>>>
>>> We need a check here whether an ID is already reserved for the VM.
>>>
>>> In addition to that I would still rename the field to reserved_vmid,
>>> which better describes what it actually does.
>> the vmid is global and reserved from the global lru, so it makes sense
>> to name it 'reserved_vmid' as in patch #4.
>> but the reserved vmid is dedicated to one vm, so the field in the vm
>> struct is 'dedicated_vmid' here, which is my reason for the name.
>
> It's just the same thing looked at from different views.
>
> IMHO, it's a vmid reserved from the global pool, so name it "reserved_vmid".
> And in patch #4, which constrains the number, possibly name it
> "reserved_vmid_seq" or "reserved_vmid_ref".
How about reserved_vmid_num?

Regards,
David Zhou
>
> Any of them looks reasonable; select the one you like. :)
>
> Jerry
>
>>
>> Anyway, the alternative is OK with me although I prefer mine. I hope
>> we can deliver this solution today; many RGP issues depend on it.
>>
>> Thanks,
>> David Zhou
>>>
>>>> +    mutex_unlock(&id_mgr->lock);
>>>> +
>>>> +    r = amdgpu_sync_wait(&idle->active);
>>>> +    if (r)
>>>> +        goto err;
>>>
>>> This is racy. A command submission could happen while we wait for 
>>> the id to
>>> become idle.
>>>
>>> The easiest way to solve this is to hold the lock while waiting for 
>>> the ID to
>>> become available.
>>>
>>>> +
>>>> +    return 0;
>>>> +err:
>>>> +    amdgpu_vm_free_dedicated_vmid(adev, vm, vmhub);
>>>> +    return r;
>>>> +}
>>>> +
>>>>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring 
>>>> *ring)
>>>>   {
>>>>       struct amdgpu_device *adev = ring->adev;
>>>> @@ -2316,18 +2362,8 @@ void amdgpu_vm_fini(struct amdgpu_device *adev,
>>>> struct amdgpu_vm *vm)
>>>>         amdgpu_vm_free_levels(&vm->root);
>>>>       fence_put(vm->last_dir_update);
>>>> -    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>>>> -        struct amdgpu_vm_id_manager *id_mgr =
>>>> -            &adev->vm_manager.id_mgr[i];
>>>> -
>>>> -        mutex_lock(&id_mgr->lock);
>>>> -        if (vm->dedicated_vmid[i]) {
>>>> -            list_add(&vm->dedicated_vmid[i]->list,
>>>> -                 &id_mgr->ids_lru);
>>>> -            vm->dedicated_vmid[i] = NULL;
>>>> -        }
>>>> -        mutex_unlock(&id_mgr->lock);
>>>> -    }
>>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>>>> +        amdgpu_vm_free_dedicated_vmid(adev, vm, i);
>>>>   }
>>>>     /**
>>>> @@ -2400,11 +2436,19 @@ int amdgpu_vm_ioctl(struct drm_device *dev, 
>>>> void
>>>> *data, struct drm_file *filp)
>>>>       union drm_amdgpu_vm *args = data;
>>>>       struct amdgpu_device *adev = dev->dev_private;
>>>>       struct amdgpu_fpriv *fpriv = filp->driver_priv;
>>>> +    int r;
>>>>         switch (args->in.op) {
>>>>       case AMDGPU_VM_OP_RESERVE_VMID:
>>>> +        /* currently, we only need to reserve a vmid from gfxhub */
>>>> +        if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm, 0)) {
>>>> +            r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm, 0);
>>>
>>> This is racy as well.
>>>
>>> Two threads could try to reserve a VMID at the same time. You need to
>>> integrate the check for whether it's already allocated into
>>> amdgpu_vm_alloc_dedicated_vmid().
>>>
>>> Regards,
>>> Christian.
>>>
>>>> +            if (r)
>>>> +                return r;
>>>> +        }
>>>> +        break;
>>>>       case AMDGPU_VM_OP_UNRESERVE_VMID:
>>>> -        return -EINVAL;
>>>> +        amdgpu_vm_free_dedicated_vmid(adev, &fpriv->vm, 0);
>>>>           break;
>>>>       default:
>>>>           return -EINVAL;
>>>
>>>
>>
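
Putting Christian's two inline comments above together (integrate the
already-allocated check, and hold the lock while waiting for the ID),
a hedged sketch of what the reworked allocator could look like; this
illustrates the suggestion, it is not the posted code:

    static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
                                              struct amdgpu_vm *vm,
                                              unsigned vmhub)
    {
        struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
        struct amdgpu_vm_id *idle;
        int r = 0;

        mutex_lock(&id_mgr->lock);

        /* Integrated check: a second concurrent reserve is a no-op. */
        if (vm->dedicated_vmid[vmhub])
            goto unlock;

        idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, list);
        list_del_init(&idle->list);

        /* Wait while still holding the lock, so no command submission
         * can grab this ID between the wait and the assignment below. */
        r = amdgpu_sync_wait(&idle->active);
        if (r) {
            list_add(&idle->list, &id_mgr->ids_lru);
            goto unlock;
        }

        vm->dedicated_vmid[vmhub] = idle;
    unlock:
        mutex_unlock(&id_mgr->lock);
        return r;
    }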

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
       [not found]         ` <59015CD8.3000303-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-27  4:42           ` zhoucm1
       [not found]             ` <590176A2.70304-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: zhoucm1 @ 2017-04-27  4:42 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei), amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2017-04-27 10:52, Zhang, Jerry (Junwei) wrote:
> On 04/26/2017 07:10 PM, Chunming Zhou wrote:
>> v2: move the sync wait to only when a flush is needed
>>
>> Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 
>> ++++++++++++++++++++++++++++++++++
>>   1 file changed, 61 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index 214ac50..bce7701 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -402,6 +402,63 @@ static bool 
>> amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>>       return !!vm->dedicated_vmid[vmhub];
>>   }
>>
>> +static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
>> +                     struct amdgpu_ring *ring,
>> +                     struct amdgpu_sync *sync,
>> +                     struct fence *fence,
>> +                     struct amdgpu_job *job)
>> +{
>> +    struct amdgpu_device *adev = ring->adev;
>> +    unsigned vmhub = ring->funcs->vmhub;
>> +    struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
>> +    struct amdgpu_vm_id_manager *id_mgr = 
>> &adev->vm_manager.id_mgr[vmhub];
>> +    struct fence *updates = sync->last_vm_update;
>> +    int r = 0;
>> +    struct fence *flushed, *tmp;
>> +    bool needs_flush = false;
>> +
>> +    mutex_lock(&id_mgr->lock);
>> +    if (amdgpu_vm_had_gpu_reset(adev, id))
>> +        needs_flush = true;
>> +
>> +    flushed  = id->flushed_updates;
>> +    if (updates && (!flushed || updates->context != flushed->context ||
>> +            fence_is_later(updates, flushed)))
>> +        needs_flush = true;
>
> Just a question:
> Do we need to consider concurrent flushes for Vega10, as the grab id
> function does?
Christian has pointed out that the old ASICs have a hardware bug.

Regards,
David Zhou
>
> Jerry
>
>> +    if (needs_flush) {
>> +        tmp = amdgpu_sync_get_fence(&id->active);
>> +        if (tmp) {
>> +            r = amdgpu_sync_fence(adev, sync, tmp);
>> +            fence_put(tmp);
>> +            mutex_unlock(&id_mgr->lock);
>> +            return r;
>> +        }
>> +    }
>> +
>> +    /* Good, we can use this VMID. Remember this submission as
>> +    * user of the VMID.
>> +    */
>> +    r = amdgpu_sync_fence(ring->adev, &id->active, fence);
>> +    if (r)
>> +        goto out;
>> +
>> +    if (updates && (!flushed || updates->context != flushed->context ||
>> +            fence_is_later(updates, flushed))) {
>> +        fence_put(id->flushed_updates);
>> +        id->flushed_updates = fence_get(updates);
>> +    }
>> +    id->pd_gpu_addr = job->vm_pd_addr;
>> +    id->current_gpu_reset_count = 
>> atomic_read(&adev->gpu_reset_counter);
>> +    atomic64_set(&id->owner, vm->client_id);
>> +    job->vm_needs_flush = needs_flush;
>> +
>> +    job->vm_id = id - id_mgr->ids;
>> +    trace_amdgpu_vm_grab_id(vm, ring, job);
>> +out:
>> +    mutex_unlock(&id_mgr->lock);
>> +    return r;
>> +}
>> +
>>   /**
>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>    *
>> @@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, 
>> struct amdgpu_ring *ring,
>>       unsigned i;
>>       int r = 0;
>>
>> +    if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
>> +        return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
>> +                             fence, job);
>> +
>>       fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, 
>> GFP_KERNEL);
>>       if (!fences)
>>           return -ENOMEM;
>>
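
The updates-versus-flushed test appears twice in the hunk quoted above;
a hedged sketch of a small helper both call sites could share (the
helper name is invented here, it is not part of the series):

    /* True when there are page-table updates the VMID has not flushed
     * yet, or when the updates came from a different fence context. */
    static bool amdgpu_vm_updates_need_flush(struct fence *updates,
                                             struct fence *flushed)
    {
        return updates && (!flushed ||
                           updates->context != flushed->context ||
                           fence_is_later(updates, flushed));
    }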

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/6] drm/amdgpu: reserve/unreserve vmid by vm ioctl v3
       [not found]                 ` <59015648.5010403-5C7GfCeVMHo@public.gmane.org>
  2017-04-27  4:29                   ` zhoucm1
@ 2017-04-27  9:13                   ` Christian König
  1 sibling, 0 replies; 17+ messages in thread
From: Christian König @ 2017-04-27  9:13 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei), zhoucm1, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 27.04.2017 at 04:24, Zhang, Jerry (Junwei) wrote:
> On 04/27/2017 10:13 AM, zhoucm1 wrote:
>>
>>
>> On 2017-04-26 20:26, Christian König wrote:
>>> On 26.04.2017 at 13:10, Chunming Zhou wrote:
>>>> add reserve/unreserve vmid functions.
>>>> v3:
>>>> only reserve vmid from gfxhub
>>>>
>>>> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
>>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>>> ---
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 70
>>>> +++++++++++++++++++++++++++-------
>>>>   1 file changed, 57 insertions(+), 13 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> index ec87d64..a783e8e 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> @@ -397,6 +397,11 @@ static bool amdgpu_vm_had_gpu_reset(struct
>>>> amdgpu_device *adev,
>>>>           atomic_read(&adev->gpu_reset_counter);
>>>>   }
>>>> +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>>>> +{
>>>> +    return !!vm->dedicated_vmid[vmhub];
>>>> +}
>>>> +
>>>>   /**
>>>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>>>    *
>>>> @@ -546,6 +551,47 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, 
>>>> struct
>>>> amdgpu_ring *ring,
>>>>       return r;
>>>>   }
>>>>   +static void amdgpu_vm_free_dedicated_vmid(struct amdgpu_device 
>>>> *adev,
>>>> +                      struct amdgpu_vm *vm,
>>>> +                      unsigned vmhub)
>>>> +{
>>>> +    struct amdgpu_vm_id_manager *id_mgr = 
>>>> &adev->vm_manager.id_mgr[vmhub];
>>>> +
>>>> +    mutex_lock(&id_mgr->lock);
>>>> +    if (vm->dedicated_vmid[vmhub]) {
>>>> +        list_add(&vm->dedicated_vmid[vmhub]->list,
>>>> +            &id_mgr->ids_lru);
>>>> +        vm->dedicated_vmid[vmhub] = NULL;
>>>> +    }
>>>> +    mutex_unlock(&id_mgr->lock);
>>>> +}
>>>> +
>>>> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>>>> +                      struct amdgpu_vm *vm,
>>>> +                      unsigned vmhub)
>>>> +{
>>>> +    struct amdgpu_vm_id_manager *id_mgr;
>>>> +    struct amdgpu_vm_id *idle;
>>>> +    int r;
>>>> +
>>>> +    id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>>> +    mutex_lock(&id_mgr->lock);
>>>> +    /* Select the first entry VMID */
>>>> +    idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, 
>>>> list);
>>>> +    list_del_init(&idle->list);
>>>> +    vm->dedicated_vmid[vmhub] = idle;
>>>
>>> We need a check here whether an ID is already reserved for the VM.
>>>
>>> In addition to that I would still rename the field to reserved_vmid,
>>> which better describes what it actually does.
>> the vmid is global and reserved from the global lru, so it makes sense
>> to name it 'reserved_vmid' as in patch #4.
>> but the reserved vmid is dedicated to one vm, so the field in the vm
>> struct is 'dedicated_vmid' here, which is my reason for the name.
>
> It's just the same thing looked at from different views.
>
> IMHO, it's a vmid reserved from the global pool, so name it "reserved_vmid".
> And in patch #4, which constrains the number, possibly name it
> "reserved_vmid_seq" or "reserved_vmid_ref".
>
> Any of them looks reasonable; select the one you like. :)

Yeah, agreed. It's more of a gut feeling that 'reserved' is the more
common wording, but I don't have a strong opinion on that either.

Christian.

>
> Jerry
>
>>
>> Anyway, the alternative is OK with me although I prefer mine. I hope
>> we can deliver this solution today; many RGP issues depend on it.
>>
>> Thanks,
>> David Zhou
>>>
>>>> +    mutex_unlock(&id_mgr->lock);
>>>> +
>>>> +    r = amdgpu_sync_wait(&idle->active);
>>>> +    if (r)
>>>> +        goto err;
>>>
>>> This is racy. A command submission could happen while we wait for 
>>> the id to
>>> become idle.
>>>
>>> The easiest way to solve this is to hold the lock while waiting for 
>>> the ID to
>>> become available.
>>>
>>>> +
>>>> +    return 0;
>>>> +err:
>>>> +    amdgpu_vm_free_dedicated_vmid(adev, vm, vmhub);
>>>> +    return r;
>>>> +}
>>>> +
>>>>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring 
>>>> *ring)
>>>>   {
>>>>       struct amdgpu_device *adev = ring->adev;
>>>> @@ -2316,18 +2362,8 @@ void amdgpu_vm_fini(struct amdgpu_device *adev,
>>>> struct amdgpu_vm *vm)
>>>>         amdgpu_vm_free_levels(&vm->root);
>>>>       fence_put(vm->last_dir_update);
>>>> -    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>>>> -        struct amdgpu_vm_id_manager *id_mgr =
>>>> -            &adev->vm_manager.id_mgr[i];
>>>> -
>>>> -        mutex_lock(&id_mgr->lock);
>>>> -        if (vm->dedicated_vmid[i]) {
>>>> -            list_add(&vm->dedicated_vmid[i]->list,
>>>> -                 &id_mgr->ids_lru);
>>>> -            vm->dedicated_vmid[i] = NULL;
>>>> -        }
>>>> -        mutex_unlock(&id_mgr->lock);
>>>> -    }
>>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>>>> +        amdgpu_vm_free_dedicated_vmid(adev, vm, i);
>>>>   }
>>>>     /**
>>>> @@ -2400,11 +2436,19 @@ int amdgpu_vm_ioctl(struct drm_device *dev, 
>>>> void
>>>> *data, struct drm_file *filp)
>>>>       union drm_amdgpu_vm *args = data;
>>>>       struct amdgpu_device *adev = dev->dev_private;
>>>>       struct amdgpu_fpriv *fpriv = filp->driver_priv;
>>>> +    int r;
>>>>         switch (args->in.op) {
>>>>       case AMDGPU_VM_OP_RESERVE_VMID:
>>>> +        /* currently, we only need to reserve a vmid from gfxhub */
>>>> +        if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm, 0)) {
>>>> +            r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm, 0);
>>>
>>> This is racy as well.
>>>
>>> Two threads could try to reserve a VMID at the same time. You need to
>>> integrate the check for whether it's already allocated into
>>> amdgpu_vm_alloc_dedicated_vmid().
>>>
>>> Regards,
>>> Christian.
>>>
>>>> +            if (r)
>>>> +                return r;
>>>> +        }
>>>> +        break;
>>>>       case AMDGPU_VM_OP_UNRESERVE_VMID:
>>>> -        return -EINVAL;
>>>> +        amdgpu_vm_free_dedicated_vmid(adev, &fpriv->vm, 0);
>>>>           break;
>>>>       default:
>>>>           return -EINVAL;
>>>
>>>
>>
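
With the already-reserved check folded into
amdgpu_vm_alloc_dedicated_vmid() as requested above, the ioctl handler
no longer needs its racy ready test. A hedged sketch of the resulting
dispatch, factored into a hypothetical helper for illustration:

    /* Mirrors the quoted amdgpu_vm_ioctl() switch, relying on the
     * allocator's internal check (see the allocator sketch earlier in
     * this thread) instead of an unlocked ready test. */
    static int amdgpu_vm_ioctl_op(struct amdgpu_device *adev,
                                  struct amdgpu_fpriv *fpriv, u32 op)
    {
        switch (op) {
        case AMDGPU_VM_OP_RESERVE_VMID:
            /* currently we only need to reserve a vmid from gfxhub */
            return amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm, 0);
        case AMDGPU_VM_OP_UNRESERVE_VMID:
            amdgpu_vm_free_dedicated_vmid(adev, &fpriv->vm, 0);
            return 0;
        default:
            return -EINVAL;
        }
    }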


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
       [not found]             ` <590176A2.70304-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-27  9:14               ` Christian König
  0 siblings, 0 replies; 17+ messages in thread
From: Christian König @ 2017-04-27  9:14 UTC (permalink / raw)
  To: zhoucm1, Zhang, Jerry (Junwei), amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 27.04.2017 at 06:42, zhoucm1 wrote:
>
>
> On 2017-04-27 10:52, Zhang, Jerry (Junwei) wrote:
>> On 04/26/2017 07:10 PM, Chunming Zhou wrote:
>>> v2: move the sync wait to only when a flush is needed
>>>
>>> Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 
>>> ++++++++++++++++++++++++++++++++++
>>>   1 file changed, 61 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index 214ac50..bce7701 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -402,6 +402,63 @@ static bool 
>>> amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>>>       return !!vm->dedicated_vmid[vmhub];
>>>   }
>>>
>>> +static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
>>> +                     struct amdgpu_ring *ring,
>>> +                     struct amdgpu_sync *sync,
>>> +                     struct fence *fence,
>>> +                     struct amdgpu_job *job)
>>> +{
>>> +    struct amdgpu_device *adev = ring->adev;
>>> +    unsigned vmhub = ring->funcs->vmhub;
>>> +    struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
>>> +    struct amdgpu_vm_id_manager *id_mgr = 
>>> &adev->vm_manager.id_mgr[vmhub];
>>> +    struct fence *updates = sync->last_vm_update;
>>> +    int r = 0;
>>> +    struct fence *flushed, *tmp;
>>> +    bool needs_flush = false;
>>> +
>>> +    mutex_lock(&id_mgr->lock);
>>> +    if (amdgpu_vm_had_gpu_reset(adev, id))
>>> +        needs_flush = true;
>>> +
>>> +    flushed  = id->flushed_updates;
>>> +    if (updates && (!flushed || updates->context != 
>>> flushed->context ||
>>> +            fence_is_later(updates, flushed)))
>>> +        needs_flush = true;
>>
>> Just a question:
>> Do we need to consider concurrent flushes for Vega10, as the grab id
>> function does?
> Christian has pointed out that the old ASICs have a hardware bug.

That is fixed for Vega10; on that hardware concurrent flushing works fine.

It's just that Tonga/Fiji have a real problem with that, and for
CIK/Polaris it's a more subtle one that is hard to trigger.

Regards,
Christian.

>
> Regards,
> David Zhou
>>
>> Jerry
>>
>>> +    if (needs_flush) {
>>> +        tmp = amdgpu_sync_get_fence(&id->active);
>>> +        if (tmp) {
>>> +            r = amdgpu_sync_fence(adev, sync, tmp);
>>> +            fence_put(tmp);
>>> +            mutex_unlock(&id_mgr->lock);
>>> +            return r;
>>> +        }
>>> +    }
>>> +
>>> +    /* Good, we can use this VMID. Remember this submission as
>>> +    * user of the VMID.
>>> +    */
>>> +    r = amdgpu_sync_fence(ring->adev, &id->active, fence);
>>> +    if (r)
>>> +        goto out;
>>> +
>>> +    if (updates && (!flushed || updates->context != 
>>> flushed->context ||
>>> +            fence_is_later(updates, flushed))) {
>>> +        fence_put(id->flushed_updates);
>>> +        id->flushed_updates = fence_get(updates);
>>> +    }
>>> +    id->pd_gpu_addr = job->vm_pd_addr;
>>> +    id->current_gpu_reset_count = 
>>> atomic_read(&adev->gpu_reset_counter);
>>> +    atomic64_set(&id->owner, vm->client_id);
>>> +    job->vm_needs_flush = needs_flush;
>>> +
>>> +    job->vm_id = id - id_mgr->ids;
>>> +    trace_amdgpu_vm_grab_id(vm, ring, job);
>>> +out:
>>> +    mutex_unlock(&id_mgr->lock);
>>> +    return r;
>>> +}
>>> +
>>>   /**
>>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>>    *
>>> @@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, 
>>> struct amdgpu_ring *ring,
>>>       unsigned i;
>>>       int r = 0;
>>>
>>> +    if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
>>> +        return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
>>> +                             fence, job);
>>> +
>>>       fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, 
>>> GFP_KERNEL);
>>>       if (!fences)
>>>           return -ENOMEM;
>>>
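
One way to encode Christian's per-ASIC observation above (Vega10 fine,
Tonga/Fiji broken, CIK/Polaris subtle) is a predicate the grab path
could consult before serializing on the VMID's active fences; the
helper below is hypothetical, not part of this series:

    /* Vega10 can flush concurrently; Tonga/Fiji have a real problem
     * with that, and CIK/Polaris a subtle, hard-to-trigger one, so
     * everything before Vega10 is treated as unable to flush
     * concurrently.  Assumes amd_asic_type stays ordered by
     * generation. */
    static bool amdgpu_vm_concurrent_flush_ok(struct amdgpu_device *adev)
    {
        return adev->asic_type >= CHIP_VEGA10;
    }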


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2017-04-27  9:14 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-26 11:10 [PATCH 0/6 v3] *** Dedicated vmid per process v3 *** Chunming Zhou
     [not found] ` <1493205039-3721-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2017-04-26 11:10   ` [PATCH 1/6] drm/amdgpu: add vm ioctl Chunming Zhou
2017-04-26 11:10   ` [PATCH 2/6] drm/amdgpu: add dedicated vmid field in vm struct Chunming Zhou
2017-04-26 11:10   ` [PATCH 3/6] drm/amdgpu: reserve/unreserve vmid by vm ioctl v3 Chunming Zhou
     [not found]     ` <1493205039-3721-4-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2017-04-26 12:26       ` Christian König
     [not found]         ` <bf812c57-d9ee-8a15-1198-acc687145fea-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2017-04-27  2:13           ` zhoucm1
     [not found]             ` <590153B1.3070305-5C7GfCeVMHo@public.gmane.org>
2017-04-27  2:24               ` Zhang, Jerry (Junwei)
     [not found]                 ` <59015648.5010403-5C7GfCeVMHo@public.gmane.org>
2017-04-27  4:29                   ` zhoucm1
2017-04-27  9:13                   ` Christian König
2017-04-27  2:18           ` Zhang, Jerry (Junwei)
2017-04-26 11:10   ` [PATCH 4/6] drm/amdgpu: add limitation for dedicated vm number v3 Chunming Zhou
2017-04-26 11:10   ` [PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2 Chunming Zhou
     [not found]     ` <1493205039-3721-6-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2017-04-26 12:57       ` Christian König
2017-04-27  2:52       ` Zhang, Jerry (Junwei)
     [not found]         ` <59015CD8.3000303-5C7GfCeVMHo@public.gmane.org>
2017-04-27  4:42           ` zhoucm1
     [not found]             ` <590176A2.70304-5C7GfCeVMHo@public.gmane.org>
2017-04-27  9:14               ` Christian König
2017-04-26 11:10   ` [PATCH 6/6] drm/amdgpu: bump module version for reserved vmid Chunming Zhou

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.