* [PATCH 00/13] shadow page table support
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Since we cannot make sure VRAM is intact after a GPU reset, backing up
the page tables is necessary; a shadow page table is a sensible way to
recover the page tables when a GPU reset happens.
We allocate a GTT BO as the shadow of each VRAM BO when creating the
page tables, and keep the two in sync. After a GPU reset, we use SDMA
to copy the GTT BO contents back into the VRAM BO, restoring the page
tables.
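
In code terms the idea is roughly the following (a simplified sketch using
the helpers these patches touch; pinning, reservation and error handling
are omitted here, see patches #01 and #04 for the real flow):

	/* at page table creation: also allocate a GTT shadow BO
	 * (simplified from patch #01) */
	r = amdgpu_bo_create(adev, size, align, true,
			     AMDGPU_GEM_DOMAIN_GTT,
			     AMDGPU_GEM_CREATE_CPU_GTT_USWC,
			     NULL, resv, &shadow_bo);

	/* after a gpu reset: SDMA-copy the shadow contents back into VRAM
	 * (simplified from patch #04) */
	r = amdgpu_copy_buffer(adev->mman.buffer_funcs_ring, gtt_addr,
			       vram_addr, amdgpu_bo_size(bo), NULL, &fence);
	if (!r)
		amdgpu_bo_fence(bo, fence, true);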

Chunming Zhou (13):
  drm/amdgpu: add pd/pt bo shadow
  drm/amdgpu: update shadow pt bo while updating pt
  drm/amdgpu: update pd shadow while updating pd
  drm/amdgpu: implement amdgpu_vm_recover_page_table_from_shadow
  drm/amdgpu: link all vm clients
  drm/amdgpu: add vm_list_lock
  drm/amd: add block entity function
  drm/amdgpu: recover page tables after gpu reset
  drm/amdgpu: add vm recover pt fence
  drm/amd: reset hw count when reset job
  drm/amd: fix deadlock of job_list_lock
  drm/amd: wait necessary dependency before running job
  drm/amdgpu: fix sched deadlock

 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  17 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  30 ++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c       |   5 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 161 ++++++++++++++++++++++++--
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c |  35 +++++-
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |   3 +
 8 files changed, 250 insertions(+), 18 deletions(-)

-- 
1.9.1

* [PATCH 01/13] drm/amdgpu: add pd/pt bo shadow
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

The PD/PT shadow BOs will be used to back up the page tables; when a GPU
reset happens, we can restore the page tables from them.

Change-Id: I31eeb581f203d1db0654a48745ef4e64ed40ed9b
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h    |  3 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 24 +++++++++++++++++++++++-
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 5dd98c1..af536fb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -870,6 +870,8 @@ struct amdgpu_ring {
 struct amdgpu_vm_pt {
 	struct amdgpu_bo_list_entry	entry;
 	uint64_t			addr;
+	struct amdgpu_bo_list_entry	entry_shadow;
+	uint64_t			addr_shadow;
 };
 
 struct amdgpu_vm {
@@ -890,6 +892,7 @@ struct amdgpu_vm {
 
 	/* contains the page directory */
 	struct amdgpu_bo	*page_directory;
+	struct amdgpu_bo	*page_directory_shadow;
 	unsigned		max_pde_used;
 	struct fence		*page_directory_fence;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index d43bced..b149eb9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1307,9 +1307,10 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
 	/* walk over the address space and allocate the page tables */
 	for (pt_idx = saddr; pt_idx <= eaddr; ++pt_idx) {
 		struct reservation_object *resv = vm->page_directory->tbo.resv;
-		struct amdgpu_bo_list_entry *entry;
+		struct amdgpu_bo_list_entry *entry, *entry_shadow;
 		struct amdgpu_bo *pt;
 
+		entry_shadow = &vm->page_tables[pt_idx].entry_shadow;
 		entry = &vm->page_tables[pt_idx].entry;
 		if (entry->robj)
 			continue;
@@ -1339,6 +1340,20 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
 		entry->tv.shared = true;
 		entry->user_pages = NULL;
 		vm->page_tables[pt_idx].addr = 0;
+
+		r = amdgpu_bo_create(adev, AMDGPU_VM_PTE_COUNT * 8,
+				     AMDGPU_GPU_PAGE_SIZE, true,
+				     AMDGPU_GEM_DOMAIN_GTT,
+				     AMDGPU_GEM_CREATE_CPU_GTT_USWC,
+				     NULL, resv, &pt);
+		if (r)
+			goto error_free;
+		entry_shadow->robj = pt;
+		entry_shadow->priority = 0;
+		entry_shadow->tv.bo = &entry_shadow->robj->tbo;
+		entry_shadow->tv.shared = true;
+		entry_shadow->user_pages = NULL;
+		vm->page_tables[pt_idx].addr_shadow = 0;
 	}
 
 	return 0;
@@ -1530,6 +1545,13 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	if (r)
 		goto error_free_page_directory;
 
+	r = amdgpu_bo_create(adev, pd_size, align, true,
+			     AMDGPU_GEM_DOMAIN_GTT,
+			     AMDGPU_GEM_CREATE_CPU_GTT_USWC,
+			     NULL, NULL, &vm->page_directory_shadow);
+	if (r)
+		goto error_free_page_directory;
+
 	return 0;
 
 error_free_page_directory:
-- 
1.9.1

* [PATCH 02/13] drm/amdgpu: update shadow pt bo while updating pt
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: I8245cdad490d2a0b8cf4b9320e53e14db0b6add4
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index b149eb9..c0f6479a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -642,6 +642,7 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
 		if (vm->page_tables[pt_idx].addr == pt)
 			continue;
 		vm->page_tables[pt_idx].addr = pt;
+		vm->page_tables[pt_idx].addr_shadow = pt;
 
 		pde = pd_addr + pt_idx * 8;
 		if (((last_pde + 8 * count) != pde) ||
@@ -792,7 +793,7 @@ static void amdgpu_vm_update_ptes(struct amdgpu_device *adev,
 					*vm_update_params,
 				  struct amdgpu_vm *vm,
 				  uint64_t start, uint64_t end,
-				  uint64_t dst, uint32_t flags)
+				  uint64_t dst, uint32_t flags, bool shadow)
 {
 	const uint64_t mask = AMDGPU_VM_PTE_COUNT - 1;
 
@@ -806,7 +807,8 @@ static void amdgpu_vm_update_ptes(struct amdgpu_device *adev,
 	/* initialize the variables */
 	addr = start;
 	pt_idx = addr >> amdgpu_vm_block_size;
-	pt = vm->page_tables[pt_idx].entry.robj;
+	pt = shadow ? vm->page_tables[pt_idx].entry_shadow.robj :
+		vm->page_tables[pt_idx].entry.robj;
 
 	if ((addr & ~mask) == (end & ~mask))
 		nptes = end - addr;
@@ -825,7 +827,8 @@ static void amdgpu_vm_update_ptes(struct amdgpu_device *adev,
 	/* walk over the address space and update the page tables */
 	while (addr < end) {
 		pt_idx = addr >> amdgpu_vm_block_size;
-		pt = vm->page_tables[pt_idx].entry.robj;
+		pt = shadow ? vm->page_tables[pt_idx].entry_shadow.robj :
+			vm->page_tables[pt_idx].entry.robj;
 
 		if ((addr & ~mask) == (end & ~mask))
 			nptes = end - addr;
@@ -930,6 +933,8 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
 		/* two extra commands for begin/end of fragment */
 		ndw += 2 * 10;
 	}
+	/* double ndw, since we need to update the shadow pt bo as well */
+	ndw *= 2;
 
 	r = amdgpu_job_alloc_with_ib(adev, ndw * 4, &job);
 	if (r)
@@ -947,7 +952,10 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
 		goto error_free;
 
 	amdgpu_vm_update_ptes(adev, &vm_update_params, vm, start,
-			      last + 1, addr, flags);
+			      last + 1, addr, flags, false);
+	/* update shadow pt bo */
+	amdgpu_vm_update_ptes(adev, &vm_update_params, vm, start,
+			      last + 1, addr, flags, true);
 
 	amdgpu_ring_pad_ib(ring, vm_update_params.ib);
 	WARN_ON(vm_update_params.ib->length_dw > ndw);
-- 
1.9.1

* [PATCH 03/13] drm/amdgpu: update pd shadow while updating pd
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: Icafa90a6625ea7b5ab3e360ba0d73544cda251b0
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h     |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  |  6 +++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c |  5 ++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 32 +++++++++++++++++++++++---------
 4 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index af536fb..7f57b0e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -893,6 +893,7 @@ struct amdgpu_vm {
 	/* contains the page directory */
 	struct amdgpu_bo	*page_directory;
 	struct amdgpu_bo	*page_directory_shadow;
+	struct amdgpu_bo_list_entry	pd_entry_shadow;
 	unsigned		max_pde_used;
 	struct fence		*page_directory_fence;
 
@@ -980,7 +981,7 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job);
 void amdgpu_vm_reset_id(struct amdgpu_device *adev, unsigned vm_id);
 uint64_t amdgpu_vm_map_gart(const dma_addr_t *pages_addr, uint64_t addr);
 int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
-				    struct amdgpu_vm *vm);
+				    struct amdgpu_vm *vm, bool shadow);
 int amdgpu_vm_clear_freed(struct amdgpu_device *adev,
 			  struct amdgpu_vm *vm);
 int amdgpu_vm_clear_invalids(struct amdgpu_device *adev, struct amdgpu_vm *vm,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 55bba02..4f89bad 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -590,7 +590,11 @@ static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p,
 	struct amdgpu_bo *bo;
 	int i, r;
 
-	r = amdgpu_vm_update_page_directory(adev, vm);
+	r = amdgpu_vm_update_page_directory(adev, vm, false);
+	if (r)
+		return r;
+
+	r = amdgpu_vm_update_page_directory(adev, vm, true);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 0069aec..29729b0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -578,7 +578,10 @@ static void amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
 			goto error_unreserve;
 	}
 
-	r = amdgpu_vm_update_page_directory(adev, bo_va->vm);
+	r = amdgpu_vm_update_page_directory(adev, bo_va->vm, false);
+	if (r)
+		goto error_unreserve;
+	r = amdgpu_vm_update_page_directory(adev, bo_va->vm, true);
 	if (r)
 		goto error_unreserve;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index c0f6479a..f13bab9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -132,13 +132,15 @@ void amdgpu_vm_get_pt_bos(struct amdgpu_vm *vm, struct list_head *duplicates)
 	/* add the vm page table to the list */
 	for (i = 0; i <= vm->max_pde_used; ++i) {
 		struct amdgpu_bo_list_entry *entry = &vm->page_tables[i].entry;
+		struct amdgpu_bo_list_entry *entry_shadow = &vm->page_tables[i].entry_shadow;
 
-		if (!entry->robj)
+		if (!entry->robj || !entry_shadow->robj)
 			continue;
 
 		list_add(&entry->tv.head, duplicates);
+		list_add(&entry_shadow->tv.head, duplicates);
 	}
-
+	list_add(&vm->pd_entry_shadow.tv.head, duplicates);
 }
 
 /**
@@ -601,10 +603,11 @@ uint64_t amdgpu_vm_map_gart(const dma_addr_t *pages_addr, uint64_t addr)
  * Returns 0 for success, error for failure.
  */
 int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
-				    struct amdgpu_vm *vm)
+				    struct amdgpu_vm *vm, bool shadow)
 {
 	struct amdgpu_ring *ring;
-	struct amdgpu_bo *pd = vm->page_directory;
+	struct amdgpu_bo *pd = shadow ? vm->page_directory_shadow :
+		vm->page_directory;
 	uint64_t pd_addr = amdgpu_bo_gpu_offset(pd);
 	uint32_t incr = AMDGPU_VM_PTE_COUNT * 8;
 	uint64_t last_pde = ~0, last_pt = ~0;
@@ -639,10 +642,15 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
 			continue;
 
 		pt = amdgpu_bo_gpu_offset(bo);
-		if (vm->page_tables[pt_idx].addr == pt)
-			continue;
-		vm->page_tables[pt_idx].addr = pt;
-		vm->page_tables[pt_idx].addr_shadow = pt;
+		if (!shadow) {
+			if (vm->page_tables[pt_idx].addr == pt)
+				continue;
+			vm->page_tables[pt_idx].addr = pt;
+		} else {
+			if (vm->page_tables[pt_idx].addr_shadow == pt)
+				continue;
+			vm->page_tables[pt_idx].addr_shadow = pt;
+		}
 
 		pde = pd_addr + pt_idx * 8;
 		if (((last_pde + 8 * count) != pde) ||
@@ -1556,9 +1564,15 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	r = amdgpu_bo_create(adev, pd_size, align, true,
 			     AMDGPU_GEM_DOMAIN_GTT,
 			     AMDGPU_GEM_CREATE_CPU_GTT_USWC,
-			     NULL, NULL, &vm->page_directory_shadow);
+			     NULL, vm->page_directory->tbo.resv,
+			     &vm->page_directory_shadow);
 	if (r)
 		goto error_free_page_directory;
+	vm->pd_entry_shadow.robj = vm->page_directory_shadow;
+	vm->pd_entry_shadow.priority = 0;
+	vm->pd_entry_shadow.tv.bo = &vm->page_directory_shadow->tbo;
+	vm->pd_entry_shadow.tv.shared = true;
+	vm->pd_entry_shadow.user_pages = NULL;
 
 	return 0;
 
-- 
1.9.1

* [PATCH 04/13] drm/amdgpu: implement amdgpu_vm_recover_page_table_from_shadow
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: I9957e726576289448911f5fb2ff7bcb9311a1906
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h    |  2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 77 ++++++++++++++++++++++++++++++++++
 2 files changed, 79 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 7f57b0e..c8e3887 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1005,6 +1005,8 @@ int amdgpu_vm_bo_unmap(struct amdgpu_device *adev,
 		       uint64_t addr);
 void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
 		      struct amdgpu_bo_va *bo_va);
+int amdgpu_vm_recover_page_table_from_shadow(struct amdgpu_device *adev,
+					     struct amdgpu_vm *vm);
 
 /*
  * context related structures
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index f13bab9..1630adb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -702,6 +702,83 @@ error_free:
 	return r;
 }
 
+static int amdgpu_vm_recover_bo_from_shadow(struct amdgpu_device *adev,
+					    struct amdgpu_bo *bo,
+					    struct amdgpu_bo *bo_shadow,
+					    struct reservation_object *resv)
+
+{
+	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
+	struct fence *fence;
+	int r;
+	uint64_t vram_addr, gtt_addr;
+
+	r = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_VRAM, &vram_addr);
+	if (r) {
+		DRM_ERROR("Failed to pin bo object\n");
+		goto err1;
+	}
+	r = amdgpu_bo_pin(bo_shadow, AMDGPU_GEM_DOMAIN_GTT, &gtt_addr);
+	if (r) {
+		DRM_ERROR("Failed to pin bo shadow object\n");
+		goto err2;
+	}
+
+	r = reservation_object_reserve_shared(bo->tbo.resv);
+	if (r)
+		goto err3;
+
+	r = amdgpu_copy_buffer(ring, gtt_addr, vram_addr,
+			       amdgpu_bo_size(bo), resv, &fence);
+	if (!r)
+		amdgpu_bo_fence(bo, fence, true);
+
+err3:
+	amdgpu_bo_unpin(bo_shadow);
+err2:
+	amdgpu_bo_unpin(bo);
+err1:
+
+	return r;
+}
+
+int amdgpu_vm_recover_page_table_from_shadow(struct amdgpu_device *adev,
+					     struct amdgpu_vm *vm)
+{
+	uint64_t pt_idx;
+	int r;
+
+	/* bo and shadow use the same resv, so reserve only once */
+	r = amdgpu_bo_reserve(vm->page_directory, false);
+	if (unlikely(r != 0))
+		return r;
+
+	r = amdgpu_vm_recover_bo_from_shadow(adev, vm->page_directory,
+					     vm->page_directory_shadow,
+					     NULL);
+	if (r) {
+		DRM_ERROR("recover page table failed!\n");
+		goto err;
+	}
+
+	for (pt_idx = 0; pt_idx <= vm->max_pde_used; ++pt_idx) {
+		struct amdgpu_bo *bo = vm->page_tables[pt_idx].entry.robj;
+		struct amdgpu_bo *bo_shadow = vm->page_tables[pt_idx].entry_shadow.robj;
+
+		if (!bo || !bo_shadow)
+			continue;
+		r = amdgpu_vm_recover_bo_from_shadow(adev, bo, bo_shadow,
+						     NULL);
+		if (r) {
+			DRM_ERROR("recover page table failed!\n");
+			goto err;
+		}
+	}
+
+err:
+	amdgpu_bo_unreserve(vm->page_directory);
+	return r;
+}
 /**
  * amdgpu_vm_frag_ptes - add fragment information to PTEs
  *
-- 
1.9.1

* [PATCH 05/13] drm/amdgpu: link all vm clients
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Add the VM client to the list tail when creating it, and move it to the
head when submitting to the scheduler.

Change-Id: I0625092f918853303a5ee97ea2eac87fb790ed69
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h        | 6 ++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c     | 4 ++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c     | 3 +++
 4 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index c8e3887..61c4ff5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -890,6 +890,9 @@ struct amdgpu_vm {
 	/* BO mappings freed, but not yet updated in the PT */
 	struct list_head	freed;
 
+	/* vm itself list */
+	struct list_head	list;
+
 	/* contains the page directory */
 	struct amdgpu_bo	*page_directory;
 	struct amdgpu_bo	*page_directory_shadow;
@@ -2158,6 +2161,9 @@ struct amdgpu_device {
 	struct kfd_dev          *kfd;
 
 	struct amdgpu_virtualization virtualization;
+
+	/* link all vm clients */
+	struct list_head		vm_list;
 };
 
 bool amdgpu_device_is_px(struct drm_device *dev);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 4f89bad..518c9fa 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -835,7 +835,10 @@ static int amdgpu_cs_dependencies(struct amdgpu_device *adev,
 static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 			    union drm_amdgpu_cs *cs)
 {
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_vm *vm = &fpriv->vm;
 	struct amdgpu_ring *ring = p->job->ring;
+	struct amdgpu_device *adev = ring->adev;
 	struct amd_sched_entity *entity = &p->ctx->rings[ring->idx].entity;
 	struct amdgpu_job *job;
 	int r;
@@ -858,6 +861,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 
 	trace_amdgpu_cs_ioctl(job);
 	amd_sched_entity_push_job(&job->base);
+	list_move(&vm->list, &adev->vm_list);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 08c67d8..9aa3ef3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1533,6 +1533,8 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	spin_lock_init(&adev->didt_idx_lock);
 	spin_lock_init(&adev->audio_endpt_idx_lock);
 
+	INIT_LIST_HEAD(&adev->vm_list);
+
 	adev->rmmio_base = pci_resource_start(adev->pdev, 5);
 	adev->rmmio_size = pci_resource_len(adev->pdev, 5);
 	adev->rmmio = ioremap(adev->rmmio_base, adev->rmmio_size);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 1630adb..86684c8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1598,6 +1598,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	INIT_LIST_HEAD(&vm->invalidated);
 	INIT_LIST_HEAD(&vm->cleared);
 	INIT_LIST_HEAD(&vm->freed);
+	INIT_LIST_HEAD(&vm->list);
 
 	pd_size = amdgpu_vm_directory_size(adev);
 	pd_entries = amdgpu_vm_num_pdes(adev);
@@ -1650,6 +1651,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	vm->pd_entry_shadow.tv.bo = &vm->page_directory_shadow->tbo;
 	vm->pd_entry_shadow.tv.shared = true;
 	vm->pd_entry_shadow.user_pages = NULL;
+	list_add_tail(&vm->list, &adev->vm_list);
 
 	return 0;
 
@@ -1677,6 +1679,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	struct amdgpu_bo_va_mapping *mapping, *tmp;
 	int i;
 
+	list_del(&vm->list);
 	amd_sched_entity_fini(vm->entity.sched, &vm->entity);
 
 	if (!RB_EMPTY_ROOT(&vm->va)) {
-- 
1.9.1

* [PATCH 06/13] drm/amdgpu: add vm_list_lock
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Add a lock to protect adev->vm_list.

Change-Id: I74d309eca9c22d190dd4072c69d26fa7fdea8884
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h        | 1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c     | 2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c     | 4 ++++
 4 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 61c4ff5..878a599 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -2164,6 +2164,7 @@ struct amdgpu_device {
 
 	/* link all vm clients */
 	struct list_head		vm_list;
+	spinlock_t			vm_list_lock;
 };
 
 bool amdgpu_device_is_px(struct drm_device *dev);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 518c9fa..02e43a2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -861,7 +861,9 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 
 	trace_amdgpu_cs_ioctl(job);
 	amd_sched_entity_push_job(&job->base);
+	spin_lock(&adev->vm_list_lock);
 	list_move(&vm->list, &adev->vm_list);
+	spin_unlock(&adev->vm_list_lock);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 9aa3ef3..4d7d305 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1534,6 +1534,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	spin_lock_init(&adev->audio_endpt_idx_lock);
 
 	INIT_LIST_HEAD(&adev->vm_list);
+	spin_lock_init(&adev->vm_list_lock);
 
 	adev->rmmio_base = pci_resource_start(adev->pdev, 5);
 	adev->rmmio_size = pci_resource_len(adev->pdev, 5);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 86684c8..8f030a4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1651,7 +1651,9 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	vm->pd_entry_shadow.tv.bo = &vm->page_directory_shadow->tbo;
 	vm->pd_entry_shadow.tv.shared = true;
 	vm->pd_entry_shadow.user_pages = NULL;
+	spin_lock(&adev->vm_list_lock);
 	list_add_tail(&vm->list, &adev->vm_list);
+	spin_unlock(&adev->vm_list_lock);
 
 	return 0;
 
@@ -1679,7 +1681,9 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	struct amdgpu_bo_va_mapping *mapping, *tmp;
 	int i;
 
+	spin_lock(&adev->vm_list_lock);
 	list_del(&vm->list);
+	spin_unlock(&adev->vm_list_lock);
 	amd_sched_entity_fini(vm->entity.sched, &vm->entity);
 
 	if (!RB_EMPTY_ROOT(&vm->va)) {
-- 
1.9.1

* [PATCH 07/13] drm/amd: add block entity function
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: Ia0378640962eef362569e0bbe090aea1ca083a55
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 24 ++++++++++++++++++++++++
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |  3 +++
 2 files changed, 27 insertions(+)

diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
index 70ff09d..2c8c234 100644
--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
@@ -110,6 +110,26 @@ amd_sched_rq_select_entity(struct amd_sched_rq *rq)
 }
 
 /**
+ * block or unblock all entities of this run queue
+ *
+ * @rq		The run queue whose entities to block or unblock.
+ *
+ */
+int amd_sched_rq_block_entity(struct amd_sched_rq *rq, bool block)
+{
+	struct amd_sched_entity *entity;
+
+	spin_lock(&rq->lock);
+
+	list_for_each_entry(entity, &rq->entities, list)
+		entity->block = block;
+
+	spin_unlock(&rq->lock);
+
+	return 0;
+}
+
+/**
  * Init a context entity used by scheduler when submit to HW ring.
  *
  * @sched	The pointer to the scheduler
@@ -134,6 +154,7 @@ int amd_sched_entity_init(struct amd_gpu_scheduler *sched,
 	INIT_LIST_HEAD(&entity->list);
 	entity->rq = rq;
 	entity->sched = sched;
+	entity->block = false;
 
 	spin_lock_init(&entity->queue_lock);
 	r = kfifo_alloc(&entity->job_queue, jobs * sizeof(void *), GFP_KERNEL);
@@ -186,6 +207,9 @@ static bool amd_sched_entity_is_idle(struct amd_sched_entity *entity)
  */
 static bool amd_sched_entity_is_ready(struct amd_sched_entity *entity)
 {
+	if (entity->block)
+		return false;
+
 	if (kfifo_is_empty(&entity->job_queue))
 		return false;
 
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
index 7f978777..7c82232 100644
--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
@@ -56,6 +56,8 @@ struct amd_sched_entity {
 
 	struct fence			*dependency;
 	struct fence_cb			cb;
+
+	bool                            block;
 };
 
 /**
@@ -159,4 +161,5 @@ int amd_sched_job_init(struct amd_sched_job *job,
 		       void *owner);
 void amd_sched_hw_job_reset(struct amd_gpu_scheduler *sched);
 void amd_sched_job_recovery(struct amd_gpu_scheduler *sched);
+int amd_sched_rq_block_entity(struct amd_sched_rq *rq, bool block);
 #endif
-- 
1.9.1

* [PATCH 08/13] drm/amdgpu: recover page tables after gpu reset
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: I963598ba6eb44bc8620d70e026c0175d1a1de120
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 4d7d305..dcd9ad4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2113,19 +2113,42 @@ retry:
 		amdgpu_atombios_scratch_regs_restore(adev);
 	}
 	if (!r) {
+		struct amdgpu_ring *buffer_ring = adev->mman.buffer_funcs_ring;
+
 		r = amdgpu_ib_ring_tests(adev);
 		if (r) {
 			dev_err(adev->dev, "ib ring test failed (%d).\n", r);
 			r = amdgpu_suspend(adev);
+			need_full_reset = true;
 			goto retry;
 		}
-
+		/*
+		 * recover vm page tables, since we cannot assume VRAM is
+		 * intact after a full gpu reset.
+		 */
+		if (need_full_reset) {
+			struct amdgpu_vm *vm;
+
+			amd_sched_rq_block_entity(
+				&buffer_ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL], true);
+			kthread_unpark(buffer_ring->sched.thread);
+			spin_lock(&adev->vm_list_lock);
+			list_for_each_entry(vm, &adev->vm_list, list) {
+				spin_unlock(&adev->vm_list_lock);
+				amdgpu_vm_recover_page_table_from_shadow(adev, vm);
+				spin_lock(&adev->vm_list_lock);
+			}
+			spin_unlock(&adev->vm_list_lock);
+			amd_sched_rq_block_entity(
+				&buffer_ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL], false);
+		}
 		for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
 			struct amdgpu_ring *ring = adev->rings[i];
 			if (!ring)
 				continue;
 			amd_sched_job_recovery(&ring->sched);
-			kthread_unpark(ring->sched.thread);
+			if (ring != buffer_ring || !need_full_reset)
+				kthread_unpark(ring->sched.thread);
 		}
 	} else {
 		dev_err(adev->dev, "asic resume failed (%d).\n", r);
-- 
1.9.1

* [PATCH 09/13] drm/amdgpu: add vm recover pt fence
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Before every job runs, we must make sure its VM has been recovered completely.

Change-Id: Ibe77a3c8f8206def280543fbb4195ad2ab9772e0
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h     |  2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  9 +++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 21 +++++++++++++++------
 3 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 878a599..b092eca 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -914,6 +914,8 @@ struct amdgpu_vm {
 
 	/* client id */
 	u64                     client_id;
+
+	struct fence            *recover_pt_fence;
 };
 
 struct amdgpu_vm_id {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index aaee0c8..df8b6e0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -170,6 +170,15 @@ static struct fence *amdgpu_job_run(struct amd_sched_job *sched_job)
 	BUG_ON(amdgpu_sync_peek_fence(&job->sync, NULL));
 
 	trace_amdgpu_sched_run_job(job);
+
+	if (job->vm && job->vm->recover_pt_fence) {
+		signed long r;
+		r = fence_wait_timeout(job->vm->recover_pt_fence, true,
+				       MAX_SCHEDULE_TIMEOUT);
+		if (r < 0)
+			DRM_ERROR("Error (%ld) waiting for fence!\n", r);
+	}
+
 	r = amdgpu_ib_schedule(job->ring, job->num_ibs, job->ibs,
 			       job->sync.last_vm_update, job, &fence);
 	if (r) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 8f030a4..636b558 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -705,11 +705,11 @@ error_free:
 static int amdgpu_vm_recover_bo_from_shadow(struct amdgpu_device *adev,
 					    struct amdgpu_bo *bo,
 					    struct amdgpu_bo *bo_shadow,
-					    struct reservation_object *resv)
+					    struct reservation_object *resv,
+					    struct fence **fence)
 
 {
 	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
-	struct fence *fence;
 	int r;
 	uint64_t vram_addr, gtt_addr;
 
@@ -729,9 +729,9 @@ static int amdgpu_vm_recover_bo_from_shadow(struct amdgpu_device *adev,
 		goto err3;
 
 	r = amdgpu_copy_buffer(ring, gtt_addr, vram_addr,
-			       amdgpu_bo_size(bo), resv, &fence);
+			       amdgpu_bo_size(bo), resv, fence);
 	if (!r)
-		amdgpu_bo_fence(bo, fence, true);
+		amdgpu_bo_fence(bo, *fence, true);
 
 err3:
 	amdgpu_bo_unpin(bo_shadow);
@@ -745,6 +745,7 @@ err1:
 int amdgpu_vm_recover_page_table_from_shadow(struct amdgpu_device *adev,
 					     struct amdgpu_vm *vm)
 {
+	struct fence *fence;
 	uint64_t pt_idx;
 	int r;
 
@@ -755,11 +756,14 @@ int amdgpu_vm_recover_page_table_from_shadow(struct amdgpu_device *adev,
 
 	r = amdgpu_vm_recover_bo_from_shadow(adev, vm->page_directory,
 					     vm->page_directory_shadow,
-					     NULL);
+					     NULL, &fence);
 	if (r) {
 		DRM_ERROR("recover page table failed!\n");
 		goto err;
 	}
+	fence_put(vm->recover_pt_fence);
+	vm->recover_pt_fence = fence_get(fence);
+	fence_put(fence);
 
 	for (pt_idx = 0; pt_idx <= vm->max_pde_used; ++pt_idx) {
 		struct amdgpu_bo *bo = vm->page_tables[pt_idx].entry.robj;
@@ -768,11 +772,14 @@ int amdgpu_vm_recover_page_table_from_shadow(struct amdgpu_device *adev,
 		if (!bo || !bo_shadow)
 			continue;
 		r = amdgpu_vm_recover_bo_from_shadow(adev, bo, bo_shadow,
-						     NULL);
+						     NULL, &fence);
 		if (r) {
 			DRM_ERROR("recover page table failed!\n");
 			goto err;
 		}
+		fence_put(vm->recover_pt_fence);
+		vm->recover_pt_fence = fence_get(fence);
+		fence_put(fence);
 	}
 
 err:
@@ -1599,6 +1606,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	INIT_LIST_HEAD(&vm->cleared);
 	INIT_LIST_HEAD(&vm->freed);
 	INIT_LIST_HEAD(&vm->list);
+	vm->recover_pt_fence = NULL;
 
 	pd_size = amdgpu_vm_directory_size(adev);
 	pd_entries = amdgpu_vm_num_pdes(adev);
@@ -1705,6 +1713,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 
 	amdgpu_bo_unref(&vm->page_directory);
 	fence_put(vm->page_directory_fence);
+	fence_put(vm->recover_pt_fence);
 }
 
 /**
-- 
1.9.1

* [PATCH 10/13] drm/amd: reset hw count when reset job
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

After a GPU reset the hw ring is empty, so reset the count to match.

Change-Id: Icd753424640f1377ad9eaa446cd69fffc6009077
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
index 2c8c234..4fff63b 100644
--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
@@ -417,6 +417,7 @@ void amd_sched_hw_job_reset(struct amd_gpu_scheduler *sched)
 			s_job->s_fence->parent = NULL;
 		}
 	}
+	atomic_set(&sched->hw_rq_count, 0);
 	spin_unlock(&sched->job_list_lock);
 }
 
@@ -434,6 +435,8 @@ void amd_sched_job_recovery(struct amd_gpu_scheduler *sched)
 	list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
 		struct amd_sched_fence *s_fence = s_job->s_fence;
 		struct fence *fence = sched->ops->run_job(s_job);
+
+		atomic_inc(&sched->hw_rq_count);
 		if (fence) {
 			s_fence->parent = fence_get(fence);
 			r = fence_add_callback(fence, &s_fence->cb,
-- 
1.9.1

* [PATCH 11/13] drm/amd: fix deadlock of job_list_lock
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

run_job involves a mutex, which can sleep, so it must not be called while
holding the job_list_lock spinlock.

Change-Id: Ieddf954af492836bdac56fe0f277dca905c30f28
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
index 4fff63b..533da13 100644
--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
@@ -434,8 +434,10 @@ void amd_sched_job_recovery(struct amd_gpu_scheduler *sched)
 
 	list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
 		struct amd_sched_fence *s_fence = s_job->s_fence;
-		struct fence *fence = sched->ops->run_job(s_job);
+		struct fence *fence;
 
+		spin_unlock(&sched->job_list_lock);
+		fence = sched->ops->run_job(s_job);
 		atomic_inc(&sched->hw_rq_count);
 		if (fence) {
 			s_fence->parent = fence_get(fence);
@@ -451,6 +453,7 @@ void amd_sched_job_recovery(struct amd_gpu_scheduler *sched)
 			DRM_ERROR("Failed to run job!\n");
 			amd_sched_process_job(NULL, &s_fence->cb);
 		}
+		spin_lock(&sched->job_list_lock);
 	}
 	spin_unlock(&sched->job_list_lock);
 }
-- 
1.9.1

* [PATCH 12/13] drm/amd: wait necessary dependency before running job
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

Change-Id: Ibcc3558c2330caad1a2edb9902b3f21bd950d19f
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
index 533da13..c3525b4 100644
--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
@@ -434,9 +434,12 @@ void amd_sched_job_recovery(struct amd_gpu_scheduler *sched)
 
 	list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
 		struct amd_sched_fence *s_fence = s_job->s_fence;
-		struct fence *fence;
+		struct fence *fence, *dependency;
 
 		spin_unlock(&sched->job_list_lock);
+		while ((dependency = sched->ops->dependency(s_job)))
+		       fence_wait(dependency, false);
+
 		fence = sched->ops->run_job(s_job);
 		atomic_inc(&sched->hw_rq_count);
 		if (fence) {
-- 
1.9.1

* [PATCH 13/13] drm/amdgpu: fix sched deadlock
From: Chunming Zhou @ 2016-07-25  7:22 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Chunming Zhou

fence_wait in run_job could block the scheduler, while the fence itself
belongs to this scheduler, so waiting there can deadlock.

Change-Id: I0c69224de5ef1bdb5b0691d668d786858155fe15
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index df8b6e0..48c7f1b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -152,6 +152,10 @@ static struct fence *amdgpu_job_dependency(struct amd_sched_job *sched_job)
 		fence = amdgpu_sync_get_fence(&job->sync);
 	}
 
+	if (fence == NULL && vm && vm->recover_pt_fence &&
+	    !fence_is_signaled(vm->recover_pt_fence))
+		fence = vm->recover_pt_fence;
+
 	return fence;
 }
 
@@ -171,14 +175,6 @@ static struct fence *amdgpu_job_run(struct amd_sched_job *sched_job)
 
 	trace_amdgpu_sched_run_job(job);
 
-	if (job->vm && job->vm->recover_pt_fence) {
-		signed long r;
-		r = fence_wait_timeout(job->vm->recover_pt_fence, true,
-				       MAX_SCHEDULE_TIMEOUT);
-		if (r < 0)
-			DRM_ERROR("Error (%ld) waiting for fence!\n", r);
-	}
-
 	r = amdgpu_ib_schedule(job->ring, job->num_ibs, job->ibs,
 			       job->sync.last_vm_update, job, &fence);
 	if (r) {
-- 
1.9.1

* Re: [PATCH 00/13] shadow page table support
From: Christian König @ 2016-07-25 10:31 UTC (permalink / raw)
  To: Chunming Zhou, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

First of all patches #10 and #11 look like bug fixes to existing code to 
me. So we should fix those problems before working on anything else.

Patch #10 is Reviewed-by: Christian König <christian.koenig@amd.com>

Patch #11:

>      list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
>          struct amd_sched_fence *s_fence = s_job->s_fence;
> -        struct fence *fence = sched->ops->run_job(s_job);
> +        struct fence *fence;
>
> +        spin_unlock(&sched->job_list_lock);
> +        fence = sched->ops->run_job(s_job);
>          atomic_inc(&sched->hw_rq_count);
>          if (fence) {
>              s_fence->parent = fence_get(fence);
> @@ -451,6 +453,7 @@ void amd_sched_job_recovery(struct 
> amd_gpu_scheduler *sched)
>              DRM_ERROR("Failed to run job!\n");
>              amd_sched_process_job(NULL, &s_fence->cb);
>          }
> +        spin_lock(&sched->job_list_lock);
>      }
>      spin_unlock(&sched->job_list_lock);
The problem is that the job might complete while we dropped the lock.

Please use list_for_each_entry_safe here and add a comment explaining why
the list could be modified in the meantime.
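
For illustration, the loop would then look roughly like this (a sketch
only, based on the hunk quoted above):

	struct amd_sched_job *s_job, *tmp;

	/* a job may complete and be removed from ring_mirror_list while
	 * job_list_lock is dropped around run_job(), so stash the next
	 * pointer with the _safe variant */
	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
		struct amd_sched_fence *s_fence = s_job->s_fence;
		struct fence *fence;

		spin_unlock(&sched->job_list_lock);
		fence = sched->ops->run_job(s_job);
		atomic_inc(&sched->hw_rq_count);
		/* ... fence handling as in the patch ... */
		spin_lock(&sched->job_list_lock);
	}
	spin_unlock(&sched->job_list_lock);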

With that fixed the patch is Reviewed-by: Christian König 
<christian.koenig@amd.com> as well.

The remaining set looks very good to me as well, but I was rather 
thinking of a more general approach instead of making it VM PD/PT specific.

For example we also need to backup/restore shaders when a hard GPU reset 
happens.

So I would suggest the following:
1. We add an optional "shadow" flag so that when a BO in VRAM is 
allocated we also allocate a shadow BO in GART.

2. We have another "backup" flag that says on the next command 
submission the BO is backed up from VRAM to GART before that submission.

3. We set the shadow flag for VM PD/PT BOs and every time we modify them 
set the backup flag so they get backed up on next CS.

4. We add an IOCTL to allow setting the backup flag from userspace so 
that we can trigger another backup even after the first CS.
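
To make that concrete, a minimal sketch of the per-BO state and the
backup step (the field and helper names here are hypothetical, not
existing amdgpu API):

	/* hypothetical fields on the BO, sketch only */
	struct amdgpu_bo	*shadow;	/* GART copy, allocated when
						 * the "shadow" flag is set */
	bool			needs_backup;	/* "backup" flag: copy VRAM
						 * to GART before next CS */

	/* hypothetical helper, called from the CS path before submission;
	 * vram_addr/gtt_addr would be obtained by pinning, as in patch #04 */
	static int amdgpu_bo_backup_to_shadow(struct amdgpu_ring *ring,
					      struct amdgpu_bo *bo,
					      uint64_t vram_addr,
					      uint64_t gtt_addr,
					      struct fence **fence)
	{
		if (!bo->shadow || !bo->needs_backup)
			return 0;
		bo->needs_backup = false;
		/* VRAM -> GART, the reverse direction of
		 * amdgpu_vm_recover_bo_from_shadow() */
		return amdgpu_copy_buffer(ring, vram_addr, gtt_addr,
					  amdgpu_bo_size(bo), bo->tbo.resv,
					  fence);
	}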

What do you think?

Regards,
Christian.

On 2016-07-25 09:22, Chunming Zhou wrote:
> Since we cannot make sure VRAM is intact after a GPU reset, backing up
> the page tables is necessary; a shadow page table is a sensible way to
> recover the page tables when a GPU reset happens.
> We allocate a GTT BO as the shadow of each VRAM BO when creating the
> page tables, and keep the two in sync. After a GPU reset, we use SDMA
> to copy the GTT BO contents back into the VRAM BO, restoring the page
> tables.
>
> Chunming Zhou (13):
>    drm/amdgpu: add pd/pt bo shadow
>    drm/amdgpu: update shadow pt bo while updating pt
>    drm/amdgpu: update pd shadow while updating pd
>    drm/amdgpu: implement amdgpu_vm_recover_page_table_from_shadow
>    drm/amdgpu: link all vm clients
>    drm/amdgpu: add vm_list_lock
>    drm/amd: add block entity function
>    drm/amdgpu: recover page tables after gpu reset
>    drm/amdgpu: add vm recover pt fence
>    drm/amd: reset hw count when reset job
>    drm/amd: fix deadlock of job_list_lock
>    drm/amd: wait necessary dependency before running job
>    drm/amdgpu: fix sched deadlock
>
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  17 ++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  12 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  30 ++++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   5 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c       |   5 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 161 ++++++++++++++++++++++++--
>   drivers/gpu/drm/amd/scheduler/gpu_scheduler.c |  35 +++++-
>   drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |   3 +
>   8 files changed, 250 insertions(+), 18 deletions(-)
>

* Re: [PATCH 00/13] shadow page table support
From: zhoucm1 @ 2016-07-26  2:40 UTC (permalink / raw)
  To: Christian König, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2016-07-25 18:31, Christian König wrote:
> First of all patches #10 and #11 look like bug fixes to existing code 
> to me. So we should fix those problems before working on anything else.
>
> Patch #10 is Reviewed-by: Christian König <christian.koenig@amd.com>
>
> Patch #11:
>
>>      list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
>>          struct amd_sched_fence *s_fence = s_job->s_fence;
>> -        struct fence *fence = sched->ops->run_job(s_job);
>> +        struct fence *fence;
>>
>> +        spin_unlock(&sched->job_list_lock);
>> +        fence = sched->ops->run_job(s_job);
>>          atomic_inc(&sched->hw_rq_count);
>>          if (fence) {
>>              s_fence->parent = fence_get(fence);
>> @@ -451,6 +453,7 @@ void amd_sched_job_recovery(struct 
>> amd_gpu_scheduler *sched)
>>              DRM_ERROR("Failed to run job!\n");
>>              amd_sched_process_job(NULL, &s_fence->cb);
>>          }
>> +        spin_lock(&sched->job_list_lock);
>>      }
>>      spin_unlock(&sched->job_list_lock);
> The problem is that the job might complete while we dropped the lock.
>
> Please use list_for_each_entry_safe here and add a comment why the 
> list could be modified in the meantime.
>
> With that fixed the patch is Reviewed-by: Christian König 
> <christian.koenig@amd.com> as well.

OK, pushed the above two.

>
> The remaining set looks very good to me as well, but I was rather 
> thinking of a more general approach instead of making it VM PD/PT 
> specific.
>
> For example we also need to backup/restore shaders when a hard GPU 
> reset happens.
>
> So I would suggest the following:
> 1. We add an optional "shadow" flag so that when a BO in VRAM is 
> allocated we also allocate a shadow BO in GART.
>
> 2. We have another "backup" flag that says on the next command 
> submission the BO is backed up from VRAM to GART before that submission.
>
> 3. We set the shadow flag for VM PD/PT BOs and every time we modify 
> them set the backup flag so they get backed up on next CS.
>
> 4. We add an IOCTL to allow setting the backup flag from userspace so 
> that we can trigger another backup even after the first CS.
>
> What do you think?

Sounds very good, will try.

Thanks,
David Zhou
>
> Regards,
> Christian.
>
> On 2016-07-25 09:22, Chunming Zhou wrote:
>> Since we cannot make sure VRAM is intact after a GPU reset, backing up
>> the page tables is necessary; a shadow page table is a sensible way to
>> recover the page tables when a GPU reset happens.
>> We allocate a GTT BO as the shadow of each VRAM BO when creating the
>> page tables, and keep the two in sync. After a GPU reset, we use SDMA
>> to copy the GTT BO contents back into the VRAM BO, restoring the page
>> tables.
>>
>> Chunming Zhou (13):
>>    drm/amdgpu: add pd/pt bo shadow
>>    drm/amdgpu: update shadow pt bo while updating pt
>>    drm/amdgpu: update pd shadow while updating pd
>>    drm/amdgpu: implement amdgpu_vm_recover_page_table_from_shadow
>>    drm/amdgpu: link all vm clients
>>    drm/amdgpu: add vm_list_lock
>>    drm/amd: add block entity function
>>    drm/amdgpu: recover page tables after gpu reset
>>    drm/amdgpu: add vm recover pt fence
>>    drm/amd: reset hw count when reset job
>>    drm/amd: fix deadlock of job_list_lock
>>    drm/amd: wait necessary dependency before running job
>>    drm/amdgpu: fix sched deadlock
>>
>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  17 ++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  12 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  30 ++++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   5 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c       |   5 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 161 
>> ++++++++++++++++++++++++--
>>   drivers/gpu/drm/amd/scheduler/gpu_scheduler.c |  35 +++++-
>>   drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |   3 +
>>   8 files changed, 250 insertions(+), 18 deletions(-)
>>
>

* Re: [PATCH 00/13] shadow page table support
From: zhoucm1 @ 2016-07-26  5:33 UTC (permalink / raw)
  To: Christian König, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2016-07-26 10:40, zhoucm1 wrote:
> 1. We add an optional "shadow" flag so that when a BO in VRAM is 
> allocated we also allocate a shadow BO in GART.
>
> 2. We have another "backup" flag that says on the next command 
> submission the BO is backed up from VRAM to GART before that submission.
>
> 3. We set the shadow flag for VM PD/PT BOs and every time we modify 
> them set the backup flag so they get backed up on next CS.
>
> 4. We add an IOCTL to allow setting the backup flag from userspace so 
> that we can trigger another backup even after the first CS. 
When I'm trying it and thinking about it more, a general shadow BO is
indeed a sensible approach, but the backup flag seems unnecessary, for
two main reasons:
1. we cannot make sure the backup job has completed when a gpu reset
happens.
2. the backup flag copies the whole BO, which seems like overhead. If
we update the shadow BO along with the BO in real time, e.g. for PD/PT,
we could update only the changed ptes instead of the entire BO, as in
the sketch below.

So can we assume the shadow BO always needs to be backed up if the
shadow flag is set?
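
For PD/PT, the in-line shadow update could look roughly like this
(amdgpu_vm_update_pages() parameters trimmed; pe_shadow, the GPU
address of the shadow PT, is an assumed name):

	/* Emit the same PTE writes for the real PT in VRAM and for its
	 * GTT shadow in one job, so both stay consistent per submission. */
	amdgpu_vm_update_pages(adev, ib, pe, addr, count, incr, flags);
	amdgpu_vm_update_pages(adev, ib, pe_shadow, addr, count, incr, flags);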

Regards,
David Zhou
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 00/13] shadow page table support
       [not found]             ` <5796F610.4050204-5C7GfCeVMHo@public.gmane.org>
@ 2016-07-26  8:27               ` Christian König
       [not found]                 ` <a53d1727-796b-351e-7254-e8eed6369f2d-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  0 siblings, 1 reply; 23+ messages in thread
From: Christian König @ 2016-07-26  8:27 UTC (permalink / raw)
  To: zhoucm1, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 26.07.2016 at 07:33, zhoucm1 wrote:
>
>
> On 2016-07-26 10:40, zhoucm1 wrote:
>> 1. We add an optional "shadow" flag so that when a BO in VRAM is 
>> allocated we also allocate a shadow BO in GART.
>>
>> 2. We have another "backup" flag that says on the next command 
>> submission the BO is backed up from VRAM to GART before that submission.
>>
>> 3. We set the shadow flag for VM PD/PT BOs, and every time we modify
>> them we set the backup flag so they get backed up on the next CS.
>>
>> 4. We add an IOCTL to allow setting the backup flag from userspace so 
>> that we can trigger another backup even after the first CS. 
> When I'm trying it and thinking about it more, a general shadow BO is
> indeed a sensible approach, but the backup flag seems unnecessary, for
> two main reasons:
> 1. we cannot make sure the backup job has completed when a gpu reset
> happens.

Correct, but we can't guarantee that for VM updates either.

> 2. the backup flag copies the whole BO, which seems like overhead. If
> we update the shadow BO along with the BO in real time, e.g. for
> PD/PT, we could update only the changed ptes instead of the entire BO.

How about tracking a begin/end range of which parts of the BO need to
be backed up, instead of a flag?
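
E.g. a minimal sketch, with the helper and the two fields invented just
for illustration:

	/* Grow the BO's dirty range on each write; the next CS then
	 * backs up only [backup_start, backup_end) instead of the
	 * whole BO. */
	static void amdgpu_bo_mark_dirty(struct amdgpu_bo *bo,
					 uint64_t start, uint64_t end)
	{
		bo->backup_start = min(bo->backup_start, start);
		bo->backup_end = max(bo->backup_end, end);
	}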

> So can we assume the shadow BO always needs to be backed up if the
> shadow flag is set?

For shader BOs that is clearly not a good idea, because they are rarely
updated and so backing them up every time would be a huge overhead.

Regards,
Christian.

>
> Regards,
> David Zhou


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 00/13] shadow page table support
       [not found]                 ` <a53d1727-796b-351e-7254-e8eed6369f2d-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
@ 2016-07-26  8:41                   ` zhoucm1
       [not found]                     ` <57972255.7000307-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 23+ messages in thread
From: zhoucm1 @ 2016-07-26  8:41 UTC (permalink / raw)
  To: Christian König, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2016-07-26 16:27, Christian König wrote:
> On 26.07.2016 at 07:33, zhoucm1 wrote:
>>
>>
>> On 2016-07-26 10:40, zhoucm1 wrote:
>>> 1. We add an optional "shadow" flag so that when a BO in VRAM is 
>>> allocated we also allocate a shadow BO in GART.
>>>
>>> 2. We have another "backup" flag that says on the next command 
>>> submission the BO is backed up from VRAM to GART before that 
>>> submission.
>>>
>>> 3. We set the shadow flag for VM PD/PT BOs, and every time we
>>> modify them we set the backup flag so they get backed up on the
>>> next CS.
>>>
>>> 4. We add an IOCTL to allow setting the backup flag from userspace 
>>> so that we can trigger another backup even after the first CS. 
>> When I'm trying it and thinking about it more, a general shadow BO
>> is indeed a sensible approach, but the backup flag seems unnecessary,
>> for two main reasons:
>> 1. we cannot make sure the backup job has completed when a gpu reset
>> happens.
>
> Correct, but we can't guarantee that for VM updates either.
Since we fill the pte info directly into the shadow BO, rather than
copying from VRAM to GTT, the shadow BO content is always correct, even
if those jobs haven't completed when the gpu resets; after the gpu
reset we can sync those jobs and then copy GTT to VRAM. The same
applies to shader BOs, so we should fill the shadow BOs directly.
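
The recovery path after the reset would then roughly be (signatures
simplified; vm->recover_fence is just a placeholder for the fence of
the last in-flight shadow update):

	/* Sync the outstanding shadow updates, then restore the VRAM
	 * page table from its GTT shadow via SDMA. */
	r = fence_wait(vm->recover_fence, false);
	if (!r)
		r = amdgpu_copy_buffer(ring, shadow_addr, pt_addr,
				       pt_size, resv, &fence);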

Regards,
David Zhou
>
>> 2. the backup flag copies the whole BO, which seems like overhead.
>> If we update the shadow BO along with the BO in real time, e.g. for
>> PD/PT, we could update only the changed ptes instead of the entire BO.
>
> How about tracking a begin/end range of which parts of the BO need to
> be backed up, instead of a flag?
>
>> So can we assume the shadow BO always needs to be backed up if the
>> shadow flag is set?
>
> For shader BOs that is clearly not a good idea, because they are
> rarely updated and so backing them up every time would be a huge
> overhead.
>
> Regards,
> Christian.
>
>>
>> Regards,
>> David Zhou
>
>

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH 00/13] shadow page table support
       [not found] ` <1469431353-15787-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
                     ` (13 preceding siblings ...)
  2016-07-25 10:31   ` [PATCH 00/13] shadow page table support Christian König
@ 2016-07-26  8:51   ` Liu, Monk
  14 siblings, 0 replies; 23+ messages in thread
From: Liu, Monk @ 2016-07-26  8:51 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Zhou, David(ChunMing)

Hi David

amdgpu_asic_reset is eventually taken over by the SMU, which resets the GFX IP, BIF, etc., but currently that logic does not clear VRAM at all, and no power drop will happen.
So in my understanding, there is no need to take care of VRAM after full asic access...

BR Monk

-----Original Message-----
From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf Of Chunming Zhou
Sent: Monday, July 25, 2016 3:22 PM
To: amd-gfx@lists.freedesktop.org
Cc: Zhou, David(ChunMing) <David1.Zhou@amd.com>
Subject: [PATCH 00/13] shadow page table support

Since we cannot make sure VRAM is safe after a gpu reset, a page table backup is necessary; a shadow page table is a sensible way to recover the page table when a gpu reset happens.
We need to allocate a GTT bo as the shadow of the VRAM bo when creating the page table, and keep the two identical. After a gpu reset, we use SDMA to copy the GTT bo content back to the VRAM bo, and the page table is recovered.

Chunming Zhou (13):
  drm/amdgpu: add pd/pt bo shadow
  drm/amdgpu: update shadow pt bo while update pt
  drm/amdgpu: update pd shadow while updating pd
  drm/amdgpu: implement amdgpu_vm_recover_page_table_from_shadow
  drm/amdgpu: link all vm clients
  drm/amdgpu: add vm_list_lock
  drm/amd: add block entity function
  drm/amdgpu: recover page tables after gpu reset
  drm/amdgpu: add vm recover pt fence
  drm/amd: reset hw count when reset job
  drm/amd: fix deadlock of job_list_lock
  drm/amd: wait neccessary dependency before running job
  drm/amdgpu: fix sched deadoff

 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  17 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  30 ++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c       |   5 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 161 ++++++++++++++++++++++++--
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c |  35 +++++-
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |   3 +
 8 files changed, 250 insertions(+), 18 deletions(-)

--
1.9.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH 00/13] shadow page table support
  2016-07-25  7:22 [PATCH 00/13] shadow page table support Chunming Zhou
       [not found] ` <1469431353-15787-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
@ 2016-07-26  8:52 ` Liu, Monk
  1 sibling, 0 replies; 23+ messages in thread
From: Liu, Monk @ 2016-07-26  8:52 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Zhou, David(ChunMing)

Correction, after full asic reset

BR Monk

-----Original Message-----
From: Liu, Monk 
Sent: Tuesday, July 26, 2016 4:51 PM
To: 'Chunming Zhou' <David1.Zhou@amd.com>; amd-gfx@lists.freedesktop.org
Cc: Zhou, David(ChunMing) <David1.Zhou@amd.com>
Subject: RE: [PATCH 00/13] shadow page table support

Hi David

amdgpu_asic_reset is eventually taken over by the SMU, which resets the GFX IP, BIF, etc., but currently that logic does not clear VRAM at all, and no power drop will happen.
So in my understanding, there is no need to take care of VRAM after full asic access...

BR Monk

-----Original Message-----
From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf Of Chunming Zhou
Sent: Monday, July 25, 2016 3:22 PM
To: amd-gfx@lists.freedesktop.org
Cc: Zhou, David(ChunMing) <David1.Zhou@amd.com>
Subject: [PATCH 00/13] shadow page table support

Since we cannot make sure VRAM is safe after a gpu reset, a page table backup is necessary; a shadow page table is a sensible way to recover the page table when a gpu reset happens.
We need to allocate a GTT bo as the shadow of the VRAM bo when creating the page table, and keep the two identical. After a gpu reset, we use SDMA to copy the GTT bo content back to the VRAM bo, and the page table is recovered.

Chunming Zhou (13):
  drm/amdgpu: add pd/pt bo shadow
  drm/amdgpu: update shadow pt bo while update pt
  drm/amdgpu: update pd shadow while updating pd
  drm/amdgpu: implement amdgpu_vm_recover_page_table_from_shadow
  drm/amdgpu: link all vm clients
  drm/amdgpu: add vm_list_lock
  drm/amd: add block entity function
  drm/amdgpu: recover page tables after gpu reset
  drm/amdgpu: add vm recover pt fence
  drm/amd: reset hw count when reset job
  drm/amd: fix deadlock of job_list_lock
  drm/amd: wait neccessary dependency before running job
  drm/amdgpu: fix sched deadoff

 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  17 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  30 ++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c       |   5 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 161 ++++++++++++++++++++++++--
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c |  35 +++++-
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |   3 +
 8 files changed, 250 insertions(+), 18 deletions(-)

--
1.9.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 00/13] shadow page table support
       [not found]                     ` <57972255.7000307-5C7GfCeVMHo@public.gmane.org>
@ 2016-07-26  9:05                       ` Christian König
       [not found]                         ` <3766450b-7dd5-9632-ed0b-81e744d08f32-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  0 siblings, 1 reply; 23+ messages in thread
From: Christian König @ 2016-07-26  9:05 UTC (permalink / raw)
  To: zhoucm1, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 26.07.2016 at 10:41, zhoucm1 wrote:
>
>
> On 2016-07-26 16:27, Christian König wrote:
>> On 26.07.2016 at 07:33, zhoucm1 wrote:
>>>
>>>
>>> On 2016-07-26 10:40, zhoucm1 wrote:
>>>> 1. We add an optional "shadow" flag so that when a BO in VRAM is 
>>>> allocated we also allocate a shadow BO in GART.
>>>>
>>>> 2. We have another "backup" flag that says on the next command 
>>>> submission the BO is backed up from VRAM to GART before that 
>>>> submission.
>>>>
>>>> 3. We set the shadow flag for VM PD/PT BOs, and every time we
>>>> modify them we set the backup flag so they get backed up on the
>>>> next CS.
>>>>
>>>> 4. We add an IOCTL to allow setting the backup flag from userspace 
>>>> so that we can trigger another backup even after the first CS. 
>>> When I'm trying it and thinking about it more, a general shadow BO
>>> is indeed a sensible approach, but the backup flag seems
>>> unnecessary, for two main reasons:
>>> 1. we cannot make sure the backup job has completed when a gpu
>>> reset happens.
>>
>> Correct, but we can't guarantee that for VM updates either.
> Since we fill the pte info directly into the shadow BO, rather than
> copying from VRAM to GTT, the shadow BO content is always correct,
> even if those jobs haven't completed when the gpu resets; after the
> gpu reset we can sync those jobs and then copy GTT to VRAM. The same
> applies to shader BOs, so we should fill the shadow BOs directly.

I see what you mean. The PTE update and the shadow update must be a
single operation for this to stay consistent.

Alternatively you could flip the order and do the update on the shadow 
first and then copy the result to the real one.
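
Roughly like this, with helper names assumed rather than taken from the
current code:

	/* Write the PTEs into the GTT shadow first... */
	amdgpu_vm_update_pages(adev, ib, pe_shadow, addr, count, incr, flags);

	/* ...then copy just the touched range from the shadow into the
	 * real VRAM page table (8 bytes per PTE). */
	r = amdgpu_copy_buffer(ring, pe_shadow, pe, count * 8, resv, &fence);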

BTW: we should make this an optional feature, because it will certainly
use a lot of memory and computation resources.

Regards,
Christian.

>
> Regards,
> David Zhou
>>
>>> 2. the backup flag copies the whole BO, which seems like overhead.
>>> If we update the shadow BO along with the BO in real time, e.g. for
>>> PD/PT, we could update only the changed ptes instead of the entire
>>> BO.
>>
>> How about tracking a begin/end range of which parts of the BO need
>> to be backed up, instead of a flag?
>>
>>> So can we assume the shadow BO always needs to be backed up if the
>>> shadow flag is set?
>>
>> For shader BOs that is clearly not a good idea, because they are
>> rarely updated and so backing them up every time would be a huge
>> overhead.
>>
>> Regards,
>> Christian.
>>
>>>
>>> Regards,
>>> David Zhou
>>
>>
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 00/13] shadow page table support
       [not found]                         ` <3766450b-7dd5-9632-ed0b-81e744d08f32-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
@ 2016-07-26  9:12                           ` zhoucm1
  0 siblings, 0 replies; 23+ messages in thread
From: zhoucm1 @ 2016-07-26  9:12 UTC (permalink / raw)
  To: Christian König, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW



On 2016-07-26 17:05, Christian König wrote:
> On 26.07.2016 at 10:41, zhoucm1 wrote:
>>
>>
>> On 2016-07-26 16:27, Christian König wrote:
>>> On 26.07.2016 at 07:33, zhoucm1 wrote:
>>>>
>>>>
>>>> On 2016-07-26 10:40, zhoucm1 wrote:
>>>>> 1. We add an optional "shadow" flag so that when a BO in VRAM is 
>>>>> allocated we also allocate a shadow BO in GART.
>>>>>
>>>>> 2. We have another "backup" flag that says on the next command 
>>>>> submission the BO is backed up from VRAM to GART before that 
>>>>> submission.
>>>>>
>>>>> 3. We set the shadow flag for VM PD/PT BOs, and every time we
>>>>> modify them we set the backup flag so they get backed up on the
>>>>> next CS.
>>>>>
>>>>> 4. We add an IOCTL to allow setting the backup flag from userspace 
>>>>> so that we can trigger another backup even after the first CS. 
>>>> When I'm trying it and thinking about it more, a general shadow BO
>>>> is indeed a sensible approach, but the backup flag seems
>>>> unnecessary, for two main reasons:
>>>> 1. we cannot make sure the backup job has completed when a gpu
>>>> reset happens.
>>>
>>> Correct, but we can't guarantee that for VM updates either.
>> Since we fill the pte info directly into the shadow BO, rather than
>> copying from VRAM to GTT, the shadow BO content is always correct,
>> even if those jobs haven't completed when the gpu resets; after the
>> gpu reset we can sync those jobs and then copy GTT to VRAM. The same
>> applies to shader BOs, so we should fill the shadow BOs directly.
>
> I see what you mean. The PTE update and the shadow update must be a
> single operation for this to stay consistent.
>
> Alternatively you could flip the order and do the update on the shadow 
> first and then copy the result to the real one.
>
> BTW: we should make this an optional feature, because it will
> certainly use a lot of memory and computation resources.
Makes sense. Can we make it depend on whether lockup recovery is
enabled or not?
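
E.g. a one-line gate, assuming the existing amdgpu_lockup_timeout
module parameter is a reasonable proxy for "recovery enabled":

	/* Only pay the shadow cost when a gpu reset can actually be
	 * recovered from (sketch, not existing code). */
	if (amdgpu_lockup_timeout > 0)
		flags |= AMDGPU_GEM_CREATE_SHADOW;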

Regards,
David Zhou
>
> Regards,
> Christian.
>
>>
>> Regards,
>> David Zhou
>>>
>>>> 2. the backup flag copies the whole BO, which seems like overhead.
>>>> If we update the shadow BO along with the BO in real time, e.g.
>>>> for PD/PT, we could update only the changed ptes instead of the
>>>> entire BO.
>>>
>>> How about tracking a begin/end range of which parts of the BO need
>>> to be backed up, instead of a flag?
>>>
>>>> So can we assume the shadow BO always needs to be backed up if the
>>>> shadow flag is set?
>>>
>>> For shader BOs that is clearly not a good idea, because they are
>>> rarely updated and so backing them up every time would be a huge
>>> overhead.
>>>
>>> Regards,
>>> Christian.
>>>
>>>>
>>>> Regards,
>>>> David Zhou
>>>
>>>
>>
>> _______________________________________________
>> amd-gfx mailing list
>> amd-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>
>

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2016-07-26  9:12 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-25  7:22 [PATCH 00/13] shadow page table support Chunming Zhou
     [not found] ` <1469431353-15787-1-git-send-email-David1.Zhou-5C7GfCeVMHo@public.gmane.org>
2016-07-25  7:22   ` [PATCH 01/13] drm/amdgpu: add pd/pt bo shadow Chunming Zhou
2016-07-25  7:22   ` [PATCH 02/13] drm/amdgpu: update shadow pt bo while update pt Chunming Zhou
2016-07-25  7:22   ` [PATCH 03/13] drm/amdgpu: update pd shadow while updating pd Chunming Zhou
2016-07-25  7:22   ` [PATCH 04/13] drm/amdgpu: implement amdgpu_vm_recover_page_table_from_shadow Chunming Zhou
2016-07-25  7:22   ` [PATCH 05/13] drm/amdgpu: link all vm clients Chunming Zhou
2016-07-25  7:22   ` [PATCH 06/13] drm/amdgpu: add vm_list_lock Chunming Zhou
2016-07-25  7:22   ` [PATCH 07/13] drm/amd: add block entity function Chunming Zhou
2016-07-25  7:22   ` [PATCH 08/13] drm/amdgpu: recover page tables after gpu reset Chunming Zhou
2016-07-25  7:22   ` [PATCH 09/13] drm/amdgpu: add vm recover pt fence Chunming Zhou
2016-07-25  7:22   ` [PATCH 10/13] drm/amd: reset hw count when reset job Chunming Zhou
2016-07-25  7:22   ` [PATCH 11/13] drm/amd: fix deadlock of job_list_lock Chunming Zhou
2016-07-25  7:22   ` [PATCH 12/13] drm/amd: wait neccessary dependency before running job Chunming Zhou
2016-07-25  7:22   ` [PATCH 13/13] drm/amdgpu: fix sched deadoff Chunming Zhou
2016-07-25 10:31   ` [PATCH 00/13] shadow page table support Christian König
     [not found]     ` <b2f1e133-c7e2-88c4-1e0f-d12310d734f0-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2016-07-26  2:40       ` zhoucm1
     [not found]         ` <5796CD94.6080405-5C7GfCeVMHo@public.gmane.org>
2016-07-26  5:33           ` zhoucm1
     [not found]             ` <5796F610.4050204-5C7GfCeVMHo@public.gmane.org>
2016-07-26  8:27               ` Christian König
     [not found]                 ` <a53d1727-796b-351e-7254-e8eed6369f2d-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2016-07-26  8:41                   ` zhoucm1
     [not found]                     ` <57972255.7000307-5C7GfCeVMHo@public.gmane.org>
2016-07-26  9:05                       ` Christian König
     [not found]                         ` <3766450b-7dd5-9632-ed0b-81e744d08f32-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2016-07-26  9:12                           ` zhoucm1
2016-07-26  8:51   ` Liu, Monk
2016-07-26  8:52 ` Liu, Monk
