* [PATCH v3 0/2] fix gmc page fault on navi1X
@ 2020-03-13 14:07 xinhui pan
  2020-03-13 14:07 ` [PATCH v3 1/2] drm/amdgpu: Add job fence to resv conditionally xinhui pan
  2020-03-13 14:07 ` [PATCH v3 2/2] drm/amdgpu: unref the bo after job submit xinhui pan
  0 siblings, 2 replies; 8+ messages in thread
From: xinhui pan @ 2020-03-13 14:07 UTC (permalink / raw)
  To: amd-gfx; +Cc: xinhui pan

We hit a GMC page fault on navi1X.
UMR shows that the physical address of the PTE is bad.

Two issues:
1) We did not sync the job scheduler fence while updating the mapping.
Fix this by adding the BO fence to the resv after every job submission.

2) We might unref a page table BO while updating PTEs even though a job
is still pending on that BO, and commit then submits a job after the BO
has been freed. We need to free the BO only after all fences have been
added to it.
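
In short, the resulting flow in amdgpu_vm_bo_update_mapping() after these
patches looks roughly like this (condensed from patch 2 below; error
handling and locking trimmed):

    struct list_head head;

    INIT_LIST_HEAD(&head);

    /* PTs freed during the walk are only collected on the list here */
    r = amdgpu_vm_update_ptes(&params, start, last + 1, addr, flags, &head);

    /* commit submits the SDMA job and adds its fence to the root resv */
    r = vm->update_funcs->commit(&params, fence);

    /* drop the references only now, after the fences are in place */
    while (!list_empty(&head)) {
        struct amdgpu_vm_pt *entry = list_first_entry(&head,
                struct amdgpu_vm_pt, base.vm_status);

        list_del(&entry->base.vm_status);
        amdgpu_bo_unref(&entry->base.bo->shadow);
        amdgpu_bo_unref(&entry->base.bo);
    }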

Changes from v2:
use the correct page table BO resv

Changes from v1:
fix a rebase issue.

xinhui pan (2):
  drm/amdgpu: Add job fence to resv conditionally
  drm/amdgpu: unref the bo after job submit

 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c      | 42 +++++++++++++++------
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h      |  5 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c |  7 ++++
 3 files changed, 42 insertions(+), 12 deletions(-)

-- 
2.17.1


* [PATCH v3 1/2] drm/amdgpu: Add job fence to resv conditionally
  2020-03-13 14:07 [PATCH v3 0/2] fix gmc page fault on navi1X xinhui pan
@ 2020-03-13 14:07 ` xinhui pan
  2020-03-13 14:13   ` Christian König
  2020-03-13 14:07 ` [PATCH v3 2/2] drm/amdgpu: unref the bo after job submit xinhui pan
  1 sibling, 1 reply; 8+ messages in thread
From: xinhui pan @ 2020-03-13 14:07 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Felix Kuehling, xinhui pan, Christian König

Provide the resv as a parameter for the VM SDMA commit.
The job fence on a page table should be a shared fence, so add it to the resv.

Cc: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Suggested-by: Christian König <christian.koenig@amd.com>
Signed-off-by: xinhui pan <xinhui.pan@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c      | 4 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h      | 5 +++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 7 +++++++
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 73398831196f..809ca6e8f40f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1579,6 +1579,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
 				       dma_addr_t *pages_addr,
 				       struct dma_fence **fence)
 {
+	struct amdgpu_bo *root = vm->root.base.bo;
 	struct amdgpu_vm_update_params params;
 	enum amdgpu_sync_mode sync_mode;
 	int r;
@@ -1604,8 +1605,6 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
 	}
 
 	if (flags & AMDGPU_PTE_VALID) {
-		struct amdgpu_bo *root = vm->root.base.bo;
-
 		if (!dma_fence_is_signaled(vm->last_direct))
 			amdgpu_bo_fence(root, vm->last_direct, true);
 
@@ -1613,6 +1612,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
 			amdgpu_bo_fence(root, vm->last_delayed, true);
 	}
 
+	params.resv = root->tbo.base.resv;
 	r = vm->update_funcs->prepare(&params, resv, sync_mode);
 	if (r)
 		goto error_unlock;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index b5705fcfc935..ca6021b4200b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -226,6 +226,11 @@ struct amdgpu_vm_update_params {
 	 * @num_dw_left: number of dw left for the IB
 	 */
 	unsigned int num_dw_left;
+
+	/**
+	 * @resv: sync the resv and add job fence to it conditionally.
+	 */
+	struct dma_resv *resv;
 };
 
 struct amdgpu_vm_update_funcs {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
index 4cc7881f438c..a1b270a4da8e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
@@ -111,6 +111,13 @@ static int amdgpu_vm_sdma_commit(struct amdgpu_vm_update_params *p,
 		swap(p->vm->last_delayed, tmp);
 	dma_fence_put(tmp);
 
+	/* add job fence to resv.
+	 * MM notifier path is an exception as we can not grab the
+	 * resv lock.
+	 */
+	if (!p->direct && p->resv)
+		dma_resv_add_shared_fence(p->resv, f);
+
 	if (fence && !p->direct)
 		swap(*fence, f);
 	dma_fence_put(f);
-- 
2.17.1


* [PATCH v3 2/2] drm/amdgpu: unref the bo after job submit
  2020-03-13 14:07 [PATCH v3 0/2] fix gmc page fault on navi1X xinhui pan
  2020-03-13 14:07 ` [PATCH v3 1/2] drm/amdgpu: Add job fence to resv conditionally xinhui pan
@ 2020-03-13 14:07 ` xinhui pan
  2020-03-13 14:20   ` Christian König
  1 sibling, 1 reply; 8+ messages in thread
From: xinhui pan @ 2020-03-13 14:07 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Felix Kuehling, xinhui pan, Christian König

Otherwise we might free BOs before the job has completed.
We add the fence to the BO after commit, so free the BOs after that.

Cc: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: xinhui pan <xinhui.pan@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 38 +++++++++++++++++++-------
 1 file changed, 28 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 809ca6e8f40f..605a1bb40280 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -942,13 +942,17 @@ static int amdgpu_vm_alloc_pts(struct amdgpu_device *adev,
  *
  * @entry: PDE to free
  */
-static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
+static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry,
+					struct list_head *head)
 {
 	if (entry->base.bo) {
 		entry->base.bo->vm_bo = NULL;
 		list_del(&entry->base.vm_status);
-		amdgpu_bo_unref(&entry->base.bo->shadow);
-		amdgpu_bo_unref(&entry->base.bo);
+		if (!head) {
+			amdgpu_bo_unref(&entry->base.bo->shadow);
+			amdgpu_bo_unref(&entry->base.bo);
+		} else
+			list_add(&entry->base.vm_status, head);
 	}
 	kvfree(entry->entries);
 	entry->entries = NULL;
@@ -965,7 +969,8 @@ static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
  */
 static void amdgpu_vm_free_pts(struct amdgpu_device *adev,
 			       struct amdgpu_vm *vm,
-			       struct amdgpu_vm_pt_cursor *start)
+			       struct amdgpu_vm_pt_cursor *start,
+				   struct list_head *head)
 {
 	struct amdgpu_vm_pt_cursor cursor;
 	struct amdgpu_vm_pt *entry;
@@ -973,10 +978,10 @@ static void amdgpu_vm_free_pts(struct amdgpu_device *adev,
 	vm->bulk_moveable = false;
 
 	for_each_amdgpu_vm_pt_dfs_safe(adev, vm, start, cursor, entry)
-		amdgpu_vm_free_table(entry);
+		amdgpu_vm_free_table(entry, head);
 
 	if (start)
-		amdgpu_vm_free_table(start->entry);
+		amdgpu_vm_free_table(start->entry, head);
 }
 
 /**
@@ -1428,7 +1433,8 @@ static void amdgpu_vm_fragment(struct amdgpu_vm_update_params *params,
  */
 static int amdgpu_vm_update_ptes(struct amdgpu_vm_update_params *params,
 				 uint64_t start, uint64_t end,
-				 uint64_t dst, uint64_t flags)
+				 uint64_t dst, uint64_t flags,
+				 struct list_head *head)
 {
 	struct amdgpu_device *adev = params->adev;
 	struct amdgpu_vm_pt_cursor cursor;
@@ -1539,7 +1545,7 @@ static int amdgpu_vm_update_ptes(struct amdgpu_vm_update_params *params,
 			 * completely covered by the range and so potentially still in use.
 			 */
 			while (cursor.pfn < frag_start) {
-				amdgpu_vm_free_pts(adev, params->vm, &cursor);
+				amdgpu_vm_free_pts(adev, params->vm, &cursor, head);
 				amdgpu_vm_pt_next(adev, &cursor);
 			}
 
@@ -1583,6 +1589,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
 	struct amdgpu_vm_update_params params;
 	enum amdgpu_sync_mode sync_mode;
 	int r;
+	struct list_head head;
 
 	memset(&params, 0, sizeof(params));
 	params.adev = adev;
@@ -1590,6 +1597,8 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
 	params.direct = direct;
 	params.pages_addr = pages_addr;
 
+	INIT_LIST_HEAD(&head);
+
 	/* Implicitly sync to command submissions in the same VM before
 	 * unmapping. Sync to moving fences before mapping.
 	 */
@@ -1617,13 +1626,22 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
 	if (r)
 		goto error_unlock;
 
-	r = amdgpu_vm_update_ptes(&params, start, last + 1, addr, flags);
+	r = amdgpu_vm_update_ptes(&params, start, last + 1, addr, flags, &head);
 	if (r)
 		goto error_unlock;
 
 	r = vm->update_funcs->commit(&params, fence);
 
 error_unlock:
+	while (!list_empty(&head)) {
+		struct amdgpu_vm_pt *entry = list_first_entry(&head,
+				struct amdgpu_vm_pt, base.vm_status);
+		list_del(&entry->base.vm_status);
+
+		amdgpu_bo_unref(&entry->base.bo->shadow);
+		amdgpu_bo_unref(&entry->base.bo);
+	}
+
 	amdgpu_vm_eviction_unlock(vm);
 	return r;
 }
@@ -3118,7 +3136,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 		amdgpu_vm_free_mapping(adev, vm, mapping, NULL);
 	}
 
-	amdgpu_vm_free_pts(adev, vm, NULL);
+	amdgpu_vm_free_pts(adev, vm, NULL, NULL);
 	amdgpu_bo_unreserve(root);
 	amdgpu_bo_unref(&root);
 	WARN_ON(vm->root.base.bo);
-- 
2.17.1


* Re: [PATCH v3 1/2] drm/amdgpu: Add job fence to resv conditionally
  2020-03-13 14:07 ` [PATCH v3 1/2] drm/amdgpu: Add job fence to resv conditionally xinhui pan
@ 2020-03-13 14:13   ` Christian König
  2020-03-13 14:30     ` Pan, Xinhui
  0 siblings, 1 reply; 8+ messages in thread
From: Christian König @ 2020-03-13 14:13 UTC (permalink / raw)
  To: xinhui pan, amd-gfx; +Cc: Alex Deucher, Felix Kuehling

On 13.03.20 at 15:07, xinhui pan wrote:
> Provide the resv as a parameter for the VM SDMA commit.
> The job fence on a page table should be a shared fence, so add it to the resv.
>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Cc: Felix Kuehling <Felix.Kuehling@amd.com>
> Suggested-by: Christian König <christian.koenig@amd.com>
> Signed-off-by: xinhui pan <xinhui.pan@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c      | 4 ++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h      | 5 +++++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 7 +++++++
>   3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 73398831196f..809ca6e8f40f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -1579,6 +1579,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>   				       dma_addr_t *pages_addr,
>   				       struct dma_fence **fence)
>   {
> +	struct amdgpu_bo *root = vm->root.base.bo;
>   	struct amdgpu_vm_update_params params;
>   	enum amdgpu_sync_mode sync_mode;
>   	int r;
> @@ -1604,8 +1605,6 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>   	}
>   
>   	if (flags & AMDGPU_PTE_VALID) {
> -		struct amdgpu_bo *root = vm->root.base.bo;
> -
>   		if (!dma_fence_is_signaled(vm->last_direct))
>   			amdgpu_bo_fence(root, vm->last_direct, true);
>   
> @@ -1613,6 +1612,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>   			amdgpu_bo_fence(root, vm->last_delayed, true);
>   	}
>   
> +	params.resv = root->tbo.base.resv;
>   	r = vm->update_funcs->prepare(&params, resv, sync_mode);
>   	if (r)
>   		goto error_unlock;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index b5705fcfc935..ca6021b4200b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -226,6 +226,11 @@ struct amdgpu_vm_update_params {
>   	 * @num_dw_left: number of dw left for the IB
>   	 */
>   	unsigned int num_dw_left;
> +
> +	/**
> +	 * @resv: sync the resv and add job fence to it conditionally.
> +	 */
> +	struct dma_resv *resv;
>   };
>   
>   struct amdgpu_vm_update_funcs {
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
> index 4cc7881f438c..a1b270a4da8e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
> @@ -111,6 +111,13 @@ static int amdgpu_vm_sdma_commit(struct amdgpu_vm_update_params *p,
>   		swap(p->vm->last_delayed, tmp);
>   	dma_fence_put(tmp);
>   
> +	/* add job fence to resv.
> +	 * MM notifier path is an exception as we can not grab the
> +	 * resv lock.
> +	 */
> +	if (!p->direct && p->resv)

You can just use p->vm->root.base.bo->tbo.base.resv here, no need for a
new field in the parameter structure.

And it would probably be best to also remove the vm->last_delayed field 
entirely.

In other words use something like this here

if (p->direct) {
     tmp = dma_fence_get(f);
     swap(p->vm->last_direct, tmp);
     dma_fence_put(tmp);
} else {
     dma_resv_add_shared_fence(p->vm->root.base.bo->tbo.base.resv, f);
}

Regards,
Christian.

> +		dma_resv_add_shared_fence(p->resv, f);
> +
>   	if (fence && !p->direct)
>   		swap(*fence, f);
>   	dma_fence_put(f);


* Re: [PATCH v3 2/2] drm/amdgpu: unref the bo after job submit
  2020-03-13 14:07 ` [PATCH v3 2/2] drm/amdgpu: unref the bo after job submit xinhui pan
@ 2020-03-13 14:20   ` Christian König
  2020-03-13 14:54     ` Pan, Xinhui
  0 siblings, 1 reply; 8+ messages in thread
From: Christian König @ 2020-03-13 14:20 UTC (permalink / raw)
  To: xinhui pan, amd-gfx; +Cc: Alex Deucher, Felix Kuehling

On 13.03.20 at 15:07, xinhui pan wrote:
> Otherwise we might free BOs before the job has completed.
> We add the fence to the BO after commit, so free the BOs after that.

I'm not sure if this is necessary, but probably better safe than sorry.

Some comments below.

>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Cc: Felix Kuehling <Felix.Kuehling@amd.com>
> Signed-off-by: xinhui pan <xinhui.pan@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 38 +++++++++++++++++++-------
>   1 file changed, 28 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 809ca6e8f40f..605a1bb40280 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -942,13 +942,17 @@ static int amdgpu_vm_alloc_pts(struct amdgpu_device *adev,
>    *
>    * @entry: PDE to free
>    */
> -static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
> +static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry,
> +					struct list_head *head)
>   {
>   	if (entry->base.bo) {
>   		entry->base.bo->vm_bo = NULL;
>   		list_del(&entry->base.vm_status);
> -		amdgpu_bo_unref(&entry->base.bo->shadow);
> -		amdgpu_bo_unref(&entry->base.bo);
> +		if (!head) {
> +			amdgpu_bo_unref(&entry->base.bo->shadow);
> +			amdgpu_bo_unref(&entry->base.bo);
> +		} else
> +			list_add(&entry->base.vm_status, head);

Instead of adding a parameter, make this a new state in the VM. Something
like vm->zombies or something similar.
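
Just as a rough, untested sketch of what I mean (the field and the
changed helper signature are only placeholders):

    /* in struct amdgpu_vm: PDs/PTs unlinked during an update, to be
     * freed once the job fence has been added
     */
    struct list_head zombies;

    static void amdgpu_vm_free_table(struct amdgpu_vm *vm,
                                     struct amdgpu_vm_pt *entry)
    {
        if (entry->base.bo) {
            entry->base.bo->vm_bo = NULL;
            list_move(&entry->base.vm_status, &vm->zombies);
        }
        kvfree(entry->entries);
        entry->entries = NULL;
    }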

>   	}
>   	kvfree(entry->entries);
>   	entry->entries = NULL;
> @@ -965,7 +969,8 @@ static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
>    */
>   static void amdgpu_vm_free_pts(struct amdgpu_device *adev,
>   			       struct amdgpu_vm *vm,
> -			       struct amdgpu_vm_pt_cursor *start)
> +			       struct amdgpu_vm_pt_cursor *start,
> +				   struct list_head *head)
>   {
>   	struct amdgpu_vm_pt_cursor cursor;
>   	struct amdgpu_vm_pt *entry;
> @@ -973,10 +978,10 @@ static void amdgpu_vm_free_pts(struct amdgpu_device *adev,
>   	vm->bulk_moveable = false;
>   
>   	for_each_amdgpu_vm_pt_dfs_safe(adev, vm, start, cursor, entry)
> -		amdgpu_vm_free_table(entry);
> +		amdgpu_vm_free_table(entry, head);
>   
>   	if (start)
> -		amdgpu_vm_free_table(start->entry);
> +		amdgpu_vm_free_table(start->entry, head);
>   }
>   
>   /**
> @@ -1428,7 +1433,8 @@ static void amdgpu_vm_fragment(struct amdgpu_vm_update_params *params,
>    */
>   static int amdgpu_vm_update_ptes(struct amdgpu_vm_update_params *params,
>   				 uint64_t start, uint64_t end,
> -				 uint64_t dst, uint64_t flags)
> +				 uint64_t dst, uint64_t flags,
> +				 struct list_head *head)
>   {
>   	struct amdgpu_device *adev = params->adev;
>   	struct amdgpu_vm_pt_cursor cursor;
> @@ -1539,7 +1545,7 @@ static int amdgpu_vm_update_ptes(struct amdgpu_vm_update_params *params,
>   			 * completely covered by the range and so potentially still in use.
>   			 */
>   			while (cursor.pfn < frag_start) {
> -				amdgpu_vm_free_pts(adev, params->vm, &cursor);
> +				amdgpu_vm_free_pts(adev, params->vm, &cursor, head);
>   				amdgpu_vm_pt_next(adev, &cursor);
>   			}
>   
> @@ -1583,6 +1589,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>   	struct amdgpu_vm_update_params params;
>   	enum amdgpu_sync_mode sync_mode;
>   	int r;
> +	struct list_head head;
>   
>   	memset(&params, 0, sizeof(params));
>   	params.adev = adev;
> @@ -1590,6 +1597,8 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>   	params.direct = direct;
>   	params.pages_addr = pages_addr;
>   
> +	INIT_LIST_HEAD(&head);
> +
>   	/* Implicitly sync to command submissions in the same VM before
>   	 * unmapping. Sync to moving fences before mapping.
>   	 */
> @@ -1617,13 +1626,22 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>   	if (r)
>   		goto error_unlock;
>   
> -	r = amdgpu_vm_update_ptes(&params, start, last + 1, addr, flags);
> +	r = amdgpu_vm_update_ptes(&params, start, last + 1, addr, flags, &head);
>   	if (r)
>   		goto error_unlock;
>   
>   	r = vm->update_funcs->commit(&params, fence);
>   
>   error_unlock:
> +	while (!list_empty(&head)) {
> +		struct amdgpu_vm_pt *entry = list_first_entry(&head,
> +				struct amdgpu_vm_pt, base.vm_status);
> +		list_del(&entry->base.vm_status);
> +
> +		amdgpu_bo_unref(&entry->base.bo->shadow);
> +		amdgpu_bo_unref(&entry->base.bo);
> +	}

Maybe put that into a separate function.
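
E.g. a small helper along those lines (again untested, and assuming the
vm->zombies list suggested in the comment above):

    static void amdgpu_vm_free_zombie_bos(struct amdgpu_vm *vm)
    {
        struct amdgpu_vm_pt *entry;

        while (!list_empty(&vm->zombies)) {
            entry = list_first_entry(&vm->zombies, struct amdgpu_vm_pt,
                                     base.vm_status);
            list_del(&entry->base.vm_status);
            amdgpu_bo_unref(&entry->base.bo->shadow);
            amdgpu_bo_unref(&entry->base.bo);
        }
    }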

> +
>   	amdgpu_vm_eviction_unlock(vm);
>   	return r;
>   }
> @@ -3118,7 +3136,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>   		amdgpu_vm_free_mapping(adev, vm, mapping, NULL);
>   	}
>   
> -	amdgpu_vm_free_pts(adev, vm, NULL);
> +	amdgpu_vm_free_pts(adev, vm, NULL, NULL);

And then also call it here.

Regards,
Christian.

>   	amdgpu_bo_unreserve(root);
>   	amdgpu_bo_unref(&root);
>   	WARN_ON(vm->root.base.bo);


* Re: [PATCH v3 1/2] drm/amdgpu: Add job fence to resv conditionally
  2020-03-13 14:13   ` Christian König
@ 2020-03-13 14:30     ` Pan, Xinhui
  0 siblings, 0 replies; 8+ messages in thread
From: Pan, Xinhui @ 2020-03-13 14:30 UTC (permalink / raw)
  To: Koenig, Christian
  Cc: Deucher, Alexander, Kuehling, Felix, Pan, Xinhui, amd-gfx



> On 2020-03-13 at 22:13, Koenig, Christian <Christian.Koenig@amd.com> wrote:
> 
> On 13.03.20 at 15:07, xinhui pan wrote:
>> Provide the resv as a parameter for the VM SDMA commit.
>> The job fence on a page table should be a shared fence, so add it to the resv.
>> 
>> Cc: Christian König <christian.koenig@amd.com>
>> Cc: Alex Deucher <alexander.deucher@amd.com>
>> Cc: Felix Kuehling <Felix.Kuehling@amd.com>
>> Suggested-by: Christian König <christian.koenig@amd.com>
>> Signed-off-by: xinhui pan <xinhui.pan@amd.com>
>> ---
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c      | 4 ++--
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h      | 5 +++++
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 7 +++++++
>>  3 files changed, 14 insertions(+), 2 deletions(-)
>> 
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index 73398831196f..809ca6e8f40f 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -1579,6 +1579,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>>  				       dma_addr_t *pages_addr,
>>  				       struct dma_fence **fence)
>>  {
>> +	struct amdgpu_bo *root = vm->root.base.bo;
>>  	struct amdgpu_vm_update_params params;
>>  	enum amdgpu_sync_mode sync_mode;
>>  	int r;
>> @@ -1604,8 +1605,6 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>>  	}
>>    	if (flags & AMDGPU_PTE_VALID) {
>> -		struct amdgpu_bo *root = vm->root.base.bo;
>> -
>>  		if (!dma_fence_is_signaled(vm->last_direct))
>>  			amdgpu_bo_fence(root, vm->last_direct, true);
>>  @@ -1613,6 +1612,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>>  			amdgpu_bo_fence(root, vm->last_delayed, true);
>>  	}
>>  +	params.resv = root->tbo.base.resv;
>>  	r = vm->update_funcs->prepare(&params, resv, sync_mode);
>>  	if (r)
>>  		goto error_unlock;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> index b5705fcfc935..ca6021b4200b 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> @@ -226,6 +226,11 @@ struct amdgpu_vm_update_params {
>>  	 * @num_dw_left: number of dw left for the IB
>>  	 */
>>  	unsigned int num_dw_left;
>> +
>> +	/**
>> +	 * @resv: sync the resv and add job fence to it conditionally.
>> +	 */
>> +	struct dma_resv *resv;
>>  };
>>    struct amdgpu_vm_update_funcs {
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
>> index 4cc7881f438c..a1b270a4da8e 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
>> @@ -111,6 +111,13 @@ static int amdgpu_vm_sdma_commit(struct amdgpu_vm_update_params *p,
>>  		swap(p->vm->last_delayed, tmp);
>>  	dma_fence_put(tmp);
>>  +	/* add job fence to resv.
>> +	 * MM notifier path is an exception as we can not grab the
>> +	 * resv lock.
>> +	 */
>> +	if (!p->direct && p->resv)
> 
> You can just use p->vm->root.base.bo->tbo.base.resv here, no need for a new field in the parameter structure.
> 

Yep, makes sense.

> And it would probably be best to also remove the vm->last_delayed field entirely.
> 
> In other words use something like this here
> 
> if (p->direct) {
>     tmp = dma_fence_get(f);
>     swap(p->vm->last_direct, tmp);
>     dma_fence_put(tmp);
> } else {
>     dma_resv_add_shared_fence(p->vm->root.base.bo->tbo.base.resv, f);
> }
> 

I think we still need to update last_delayed. It looks like eviction and some other ioctls test this field to determine the VM state.
This should be done in a separate patch if possible.

> Regards,
> Christian.
> 
>> +		dma_resv_add_shared_fence(p->resv, f);
>> +
>>  	if (fence && !p->direct)
>>  		swap(*fence, f);
>>  	dma_fence_put(f);
> 


* Re: [PATCH v3 2/2] drm/amdgpu: unref the bo after job submit
  2020-03-13 14:20   ` Christian König
@ 2020-03-13 14:54     ` Pan, Xinhui
  2020-03-13 18:08       ` Christian König
  0 siblings, 1 reply; 8+ messages in thread
From: Pan, Xinhui @ 2020-03-13 14:54 UTC (permalink / raw)
  To: Koenig, Christian
  Cc: Deucher, Alexander, Kuehling, Felix, Pan, Xinhui, amd-gfx



> On 2020-03-13 at 22:20, Koenig, Christian <Christian.Koenig@amd.com> wrote:
> 
> On 13.03.20 at 15:07, xinhui pan wrote:
>> Otherwise we might free BOs before the job has completed.
>> We add the fence to the BO after commit, so free the BOs after that.
> 
> I'm not sure if this is necessary, but probably better safe than sorry.

Without this patch, we hit a GMC page fault.

We have individualized the BO resv during BO release,
so any fences added to the root PT BO resv after that are actually untested.

> Some comments below.
> 
>> 
>> Cc: Christian König <christian.koenig@amd.com>
>> Cc: Alex Deucher <alexander.deucher@amd.com>
>> Cc: Felix Kuehling <Felix.Kuehling@amd.com>
>> Signed-off-by: xinhui pan <xinhui.pan@amd.com>
>> ---
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 38 +++++++++++++++++++-------
>>  1 file changed, 28 insertions(+), 10 deletions(-)
>> 
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index 809ca6e8f40f..605a1bb40280 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -942,13 +942,17 @@ static int amdgpu_vm_alloc_pts(struct amdgpu_device *adev,
>>   *
>>   * @entry: PDE to free
>>   */
>> -static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
>> +static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry,
>> +					struct list_head *head)
>>  {
>>  	if (entry->base.bo) {
>>  		entry->base.bo->vm_bo = NULL;
>>  		list_del(&entry->base.vm_status);
>> -		amdgpu_bo_unref(&entry->base.bo->shadow);
>> -		amdgpu_bo_unref(&entry->base.bo);
>> +		if (!head) {
>> +			amdgpu_bo_unref(&entry->base.bo->shadow);
>> +			amdgpu_bo_unref(&entry->base.bo);
>> +		} else
>> +			list_add(&entry->base.vm_status, head);
> 
> Instead of adding a parameter make this a new state in the VM. Something like vm->zombies or something similar.
> 
>>  	}
>>  	kvfree(entry->entries);
>>  	entry->entries = NULL;
>> @@ -965,7 +969,8 @@ static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
>>   */
>>  static void amdgpu_vm_free_pts(struct amdgpu_device *adev,
>>  			       struct amdgpu_vm *vm,
>> -			       struct amdgpu_vm_pt_cursor *start)
>> +			       struct amdgpu_vm_pt_cursor *start,
>> +				   struct list_head *head)
>>  {
>>  	struct amdgpu_vm_pt_cursor cursor;
>>  	struct amdgpu_vm_pt *entry;
>> @@ -973,10 +978,10 @@ static void amdgpu_vm_free_pts(struct amdgpu_device *adev,
>>  	vm->bulk_moveable = false;
>>    	for_each_amdgpu_vm_pt_dfs_safe(adev, vm, start, cursor, entry)
>> -		amdgpu_vm_free_table(entry);
>> +		amdgpu_vm_free_table(entry, head);
>>    	if (start)
>> -		amdgpu_vm_free_table(start->entry);
>> +		amdgpu_vm_free_table(start->entry, head);
>>  }
>>    /**
>> @@ -1428,7 +1433,8 @@ static void amdgpu_vm_fragment(struct amdgpu_vm_update_params *params,
>>   */
>>  static int amdgpu_vm_update_ptes(struct amdgpu_vm_update_params *params,
>>  				 uint64_t start, uint64_t end,
>> -				 uint64_t dst, uint64_t flags)
>> +				 uint64_t dst, uint64_t flags,
>> +				 struct list_head *head)
>>  {
>>  	struct amdgpu_device *adev = params->adev;
>>  	struct amdgpu_vm_pt_cursor cursor;
>> @@ -1539,7 +1545,7 @@ static int amdgpu_vm_update_ptes(struct amdgpu_vm_update_params *params,
>>  			 * completely covered by the range and so potentially still in use.
>>  			 */
>>  			while (cursor.pfn < frag_start) {
>> -				amdgpu_vm_free_pts(adev, params->vm, &cursor);
>> +				amdgpu_vm_free_pts(adev, params->vm, &cursor, head);
>>  				amdgpu_vm_pt_next(adev, &cursor);
>>  			}
>>  @@ -1583,6 +1589,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>>  	struct amdgpu_vm_update_params params;
>>  	enum amdgpu_sync_mode sync_mode;
>>  	int r;
>> +	struct list_head head;
>>    	memset(&params, 0, sizeof(params));
>>  	params.adev = adev;
>> @@ -1590,6 +1597,8 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>>  	params.direct = direct;
>>  	params.pages_addr = pages_addr;
>>  +	INIT_LIST_HEAD(&head);
>> +
>>  	/* Implicitly sync to command submissions in the same VM before
>>  	 * unmapping. Sync to moving fences before mapping.
>>  	 */
>> @@ -1617,13 +1626,22 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>>  	if (r)
>>  		goto error_unlock;
>>  -	r = amdgpu_vm_update_ptes(&params, start, last + 1, addr, flags);
>> +	r = amdgpu_vm_update_ptes(&params, start, last + 1, addr, flags, &head);
>>  	if (r)
>>  		goto error_unlock;
>>    	r = vm->update_funcs->commit(&params, fence);
>>    error_unlock:
>> +	while (!list_empty(&head)) {
>> +		struct amdgpu_vm_pt *entry = list_first_entry(&head,
>> +				struct amdgpu_vm_pt, base.vm_status);
>> +		list_del(&entry->base.vm_status);
>> +
>> +		amdgpu_bo_unref(&entry->base.bo->shadow);
>> +		amdgpu_bo_unref(&entry->base.bo);
>> +	}
> 
> Maybe put that into a separate function.
> 
>> +
>>  	amdgpu_vm_eviction_unlock(vm);
>>  	return r;
>>  }
>> @@ -3118,7 +3136,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>  		amdgpu_vm_free_mapping(adev, vm, mapping, NULL);
>>  	}
>>  -	amdgpu_vm_free_pts(adev, vm, NULL);
>> +	amdgpu_vm_free_pts(adev, vm, NULL, NULL);
> 
> And then also call it here.
> 
> Regards,
> Christian.
> 
>>  	amdgpu_bo_unreserve(root);
>>  	amdgpu_bo_unref(&root);
>>  	WARN_ON(vm->root.base.bo);
> 


* Re: [PATCH v3 2/2] drm/amdgpu: unref the bo after job submit
  2020-03-13 14:54     ` Pan, Xinhui
@ 2020-03-13 18:08       ` Christian König
  0 siblings, 0 replies; 8+ messages in thread
From: Christian König @ 2020-03-13 18:08 UTC (permalink / raw)
  To: Pan, Xinhui; +Cc: Deucher, Alexander, Kuehling, Felix, amd-gfx

On 13.03.20 at 15:54, Pan, Xinhui wrote:
>
>> On 2020-03-13 at 22:20, Koenig, Christian <Christian.Koenig@amd.com> wrote:
>>
>> On 13.03.20 at 15:07, xinhui pan wrote:
>>> Otherwise we might free BOs before the job has completed.
>>> We add the fence to the BO after commit, so free the BOs after that.
>> I'm not sure if this is necessary, but probably better safe than sorry.
> Without this patch, we hit a GMC page fault.

Then this is not the root cause.

See, freeing the page tables here is safe since they can only be reused
after all jobs have completed.

This only fixes a very small window where the PDE would still point to
the freed PT, which might already be reused.

But this can't cause a GMC page fault; it would rather be a security
problem.

Regards,
Christian.

>
> We have individualized the BO resv during BO release,
> so any fences added to the root PT BO resv after that are actually untested.
>
>> Some comments below.
>>
>>> Cc: Christian König <christian.koenig@amd.com>
>>> Cc: Alex Deucher <alexander.deucher@amd.com>
>>> Cc: Felix Kuehling <Felix.Kuehling@amd.com>
>>> Signed-off-by: xinhui pan <xinhui.pan@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 38 +++++++++++++++++++-------
>>>   1 file changed, 28 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index 809ca6e8f40f..605a1bb40280 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -942,13 +942,17 @@ static int amdgpu_vm_alloc_pts(struct amdgpu_device *adev,
>>>    *
>>>    * @entry: PDE to free
>>>    */
>>> -static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
>>> +static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry,
>>> +					struct list_head *head)
>>>   {
>>>   	if (entry->base.bo) {
>>>   		entry->base.bo->vm_bo = NULL;
>>>   		list_del(&entry->base.vm_status);
>>> -		amdgpu_bo_unref(&entry->base.bo->shadow);
>>> -		amdgpu_bo_unref(&entry->base.bo);
>>> +		if (!head) {
>>> +			amdgpu_bo_unref(&entry->base.bo->shadow);
>>> +			amdgpu_bo_unref(&entry->base.bo);
>>> +		} else
>>> +			list_add(&entry->base.vm_status, head);
>> Instead of adding a parameter make this a new state in the VM. Something like vm->zombies or something similar.
>>
>>>   	}
>>>   	kvfree(entry->entries);
>>>   	entry->entries = NULL;
>>> @@ -965,7 +969,8 @@ static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
>>>    */
>>>   static void amdgpu_vm_free_pts(struct amdgpu_device *adev,
>>>   			       struct amdgpu_vm *vm,
>>> -			       struct amdgpu_vm_pt_cursor *start)
>>> +			       struct amdgpu_vm_pt_cursor *start,
>>> +				   struct list_head *head)
>>>   {
>>>   	struct amdgpu_vm_pt_cursor cursor;
>>>   	struct amdgpu_vm_pt *entry;
>>> @@ -973,10 +978,10 @@ static void amdgpu_vm_free_pts(struct amdgpu_device *adev,
>>>   	vm->bulk_moveable = false;
>>>     	for_each_amdgpu_vm_pt_dfs_safe(adev, vm, start, cursor, entry)
>>> -		amdgpu_vm_free_table(entry);
>>> +		amdgpu_vm_free_table(entry, head);
>>>     	if (start)
>>> -		amdgpu_vm_free_table(start->entry);
>>> +		amdgpu_vm_free_table(start->entry, head);
>>>   }
>>>     /**
>>> @@ -1428,7 +1433,8 @@ static void amdgpu_vm_fragment(struct amdgpu_vm_update_params *params,
>>>    */
>>>   static int amdgpu_vm_update_ptes(struct amdgpu_vm_update_params *params,
>>>   				 uint64_t start, uint64_t end,
>>> -				 uint64_t dst, uint64_t flags)
>>> +				 uint64_t dst, uint64_t flags,
>>> +				 struct list_head *head)
>>>   {
>>>   	struct amdgpu_device *adev = params->adev;
>>>   	struct amdgpu_vm_pt_cursor cursor;
>>> @@ -1539,7 +1545,7 @@ static int amdgpu_vm_update_ptes(struct amdgpu_vm_update_params *params,
>>>   			 * completely covered by the range and so potentially still in use.
>>>   			 */
>>>   			while (cursor.pfn < frag_start) {
>>> -				amdgpu_vm_free_pts(adev, params->vm, &cursor);
>>> +				amdgpu_vm_free_pts(adev, params->vm, &cursor, head);
>>>   				amdgpu_vm_pt_next(adev, &cursor);
>>>   			}
>>>   @@ -1583,6 +1589,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>>>   	struct amdgpu_vm_update_params params;
>>>   	enum amdgpu_sync_mode sync_mode;
>>>   	int r;
>>> +	struct list_head head;
>>>     	memset(&params, 0, sizeof(params));
>>>   	params.adev = adev;
>>> @@ -1590,6 +1597,8 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>>>   	params.direct = direct;
>>>   	params.pages_addr = pages_addr;
>>>   +	INIT_LIST_HEAD(&head);
>>> +
>>>   	/* Implicitly sync to command submissions in the same VM before
>>>   	 * unmapping. Sync to moving fences before mapping.
>>>   	 */
>>> @@ -1617,13 +1626,22 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>>>   	if (r)
>>>   		goto error_unlock;
>>>   -	r = amdgpu_vm_update_ptes(&params, start, last + 1, addr, flags);
>>> +	r = amdgpu_vm_update_ptes(&params, start, last + 1, addr, flags, &head);
>>>   	if (r)
>>>   		goto error_unlock;
>>>     	r = vm->update_funcs->commit(&params, fence);
>>>     error_unlock:
>>> +	while (!list_empty(&head)) {
>>> +		struct amdgpu_vm_pt *entry = list_first_entry(&head,
>>> +				struct amdgpu_vm_pt, base.vm_status);
>>> +		list_del(&entry->base.vm_status);
>>> +
>>> +		amdgpu_bo_unref(&entry->base.bo->shadow);
>>> +		amdgpu_bo_unref(&entry->base.bo);
>>> +	}
>> Maybe put that into a separate function.
>>
>>> +
>>>   	amdgpu_vm_eviction_unlock(vm);
>>>   	return r;
>>>   }
>>> @@ -3118,7 +3136,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>   		amdgpu_vm_free_mapping(adev, vm, mapping, NULL);
>>>   	}
>>>   -	amdgpu_vm_free_pts(adev, vm, NULL);
>>> +	amdgpu_vm_free_pts(adev, vm, NULL, NULL);
>> And then also call it here.
>>
>> Regards,
>> Christian.
>>
>>>   	amdgpu_bo_unreserve(root);
>>>   	amdgpu_bo_unref(&root);
>>>   	WARN_ON(vm->root.base.bo);


Thread overview: 8+ messages
2020-03-13 14:07 [PATCH v3 0/2] fix gmc page fault on navi1X xinhui pan
2020-03-13 14:07 ` [PATCH v3 1/2] drm/amdgpu: Add job fence to resv conditionally xinhui pan
2020-03-13 14:13   ` Christian König
2020-03-13 14:30     ` Pan, Xinhui
2020-03-13 14:07 ` [PATCH v3 2/2] drm/amdgpu: unref the bo after job submit xinhui pan
2020-03-13 14:20   ` Christian König
2020-03-13 14:54     ` Pan, Xinhui
2020-03-13 18:08       ` Christian König
