* gvt gem fixes
@ 2016-10-19 10:11 Chris Wilson
  2016-10-19 10:11 ` [PATCH 01/12] drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/ Chris Wilson
                   ` (13 more replies)
  0 siblings, 14 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

I think this is the set required to bring gvt into line, at least to
unblock myself.
-Chris


* [PATCH 01/12] drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/
  2016-10-19 10:11 gvt gem fixes Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:11 ` [PATCH 02/12] drm/i915/gvt: Add runtime pm around fences Chris Wilson
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

These functions are deprecated; it is also not clear whether they are
called from the right context.
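
For illustration only (not part of the diff below; obj stands for any
struct drm_i915_gem_object), the substitution is mechanical:

	/* before (deprecated): drop the reference via the embedded DRM object
	 *	drm_gem_object_unreference(&obj->base);
	 * after: the same reference drop through the i915 wrapper
	 */
	i915_gem_object_put(obj);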

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gvt/execlist.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/execlist.c b/drivers/gpu/drm/i915/gvt/execlist.c
index c50a3d1a5131..a9d04c378755 100644
--- a/drivers/gpu/drm/i915/gvt/execlist.c
+++ b/drivers/gpu/drm/i915/gvt/execlist.c
@@ -498,7 +498,7 @@ static void release_shadow_batch_buffer(struct intel_vgpu_workload *workload)
 
 		list_for_each_entry_safe(entry_obj, temp, &workload->shadow_bb,
 					 list) {
-			drm_gem_object_unreference(&(entry_obj->obj->base));
+			i915_gem_object_put(entry_obj->obj);
 			kvfree(entry_obj->va);
 			list_del(&entry_obj->list);
 			kfree(entry_obj);
@@ -511,7 +511,7 @@ static void release_shadow_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 	if (wa_ctx->indirect_ctx.size == 0)
 		return;
 
-	drm_gem_object_unreference(&(wa_ctx->indirect_ctx.obj->base));
+	i915_gem_object_put(wa_ctx->indirect_ctx.obj);
 	kvfree(wa_ctx->indirect_ctx.shadow_va);
 }
 
-- 
2.9.3


* [PATCH 02/12] drm/i915/gvt: Add runtime pm around fences
  2016-10-19 10:11 gvt gem fixes Chris Wilson
  2016-10-19 10:11 ` [PATCH 01/12] drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/ Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:11 ` [PATCH 03/12] drm/i915/gvt: i915_gem_object_create() returns an error pointer Chris Wilson
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

Manipulating the fence_list requires the runtime wakelock, as does
writing to the fence registers. Acquire a wakelock for the former, and
assert that the device is awake for the latter.
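
Schematically, the pattern the diff below applies (a sketch only;
touch_fences() and write_fence_reg() are illustrative names, the helpers
inside are the ones used in the diff, and the fence_list manipulation is
elided):

static void touch_fences(struct drm_i915_private *dev_priv)
{
	intel_runtime_pm_get(dev_priv);		/* keep the device awake */

	mutex_lock(&dev_priv->drm.struct_mutex);
	/* ... move fence regs on/off dev_priv->mm.fence_list ... */
	mutex_unlock(&dev_priv->drm.struct_mutex);

	intel_runtime_pm_put(dev_priv);
}

static void write_fence_reg(struct drm_i915_private *dev_priv)
{
	/* callers are expected to hold a wakeref already */
	assert_rpm_wakelock_held(dev_priv);
	/* ... program the fence register pair ... */
}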

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gvt/aperture_gm.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/i915/gvt/aperture_gm.c b/drivers/gpu/drm/i915/gvt/aperture_gm.c
index e0211f83bd93..fc51f3e35835 100644
--- a/drivers/gpu/drm/i915/gvt/aperture_gm.c
+++ b/drivers/gpu/drm/i915/gvt/aperture_gm.c
@@ -144,6 +144,8 @@ void intel_vgpu_write_fence(struct intel_vgpu *vgpu,
 	struct drm_i915_fence_reg *reg;
 	i915_reg_t fence_reg_lo, fence_reg_hi;
 
+	assert_rpm_wakelock_held(dev_priv);
+
 	if (WARN_ON(fence > vgpu_fence_sz(vgpu)))
 		return;
 
@@ -172,6 +174,8 @@ static void free_vgpu_fence(struct intel_vgpu *vgpu)
 	if (WARN_ON(!vgpu_fence_sz(vgpu)))
 		return;
 
+	intel_runtime_pm_get(dev_priv);
+
 	mutex_lock(&dev_priv->drm.struct_mutex);
 	for (i = 0; i < vgpu_fence_sz(vgpu); i++) {
 		reg = vgpu->fence.regs[i];
@@ -180,6 +184,8 @@ static void free_vgpu_fence(struct intel_vgpu *vgpu)
 			      &dev_priv->mm.fence_list);
 	}
 	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	intel_runtime_pm_put(dev_priv);
 }
 
 static int alloc_vgpu_fence(struct intel_vgpu *vgpu)
@@ -190,6 +196,8 @@ static int alloc_vgpu_fence(struct intel_vgpu *vgpu)
 	int i;
 	struct list_head *pos, *q;
 
+	intel_runtime_pm_get(dev_priv);
+
 	/* Request fences from host */
 	mutex_lock(&dev_priv->drm.struct_mutex);
 	i = 0;
@@ -207,6 +215,7 @@ static int alloc_vgpu_fence(struct intel_vgpu *vgpu)
 		goto out_free_fence;
 
 	mutex_unlock(&dev_priv->drm.struct_mutex);
+	intel_runtime_pm_put(dev_priv);
 	return 0;
 out_free_fence:
 	/* Return fences to host, if fail */
@@ -218,6 +227,7 @@ static int alloc_vgpu_fence(struct intel_vgpu *vgpu)
 			      &dev_priv->mm.fence_list);
 	}
 	mutex_unlock(&dev_priv->drm.struct_mutex);
+	intel_runtime_pm_put(dev_priv);
 	return -ENOSPC;
 }
 
-- 
2.9.3


* [PATCH 03/12] drm/i915/gvt: i915_gem_object_create() returns an error pointer
  2016-10-19 10:11 gvt gem fixes Chris Wilson
  2016-10-19 10:11 ` [PATCH 01/12] drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/ Chris Wilson
  2016-10-19 10:11 ` [PATCH 02/12] drm/i915/gvt: Add runtime pm around fences Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:11 ` [PATCH 04/12] drm/i915: Catch premature unpinning of pages Chris Wilson
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

On failure from i915_gem_object_create(), we need to check for an error
pointer, not NULL.
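
The idiom, in brief (a sketch; the full unwind paths are in the diff):

	obj = i915_gem_object_create(&dev_priv->drm,
				     roundup(size, PAGE_SIZE));
	if (IS_ERR(obj))		/* never NULL on failure */
		return PTR_ERR(obj);	/* e.g. -ENOMEM */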

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
---
 drivers/gpu/drm/i915/gvt/cmd_parser.c | 48 ++++++++++++++++++++---------------
 1 file changed, 28 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
index 5808ee7c1935..464fc3c9935b 100644
--- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
+++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
@@ -1638,16 +1638,19 @@ static int perform_bb_shadow(struct parser_exec_state *s)
 	if (entry_obj == NULL)
 		return -ENOMEM;
 
-	entry_obj->obj = i915_gem_object_create(&(s->vgpu->gvt->dev_priv->drm),
-		round_up(bb_size, PAGE_SIZE));
-	if (entry_obj->obj == NULL)
-		return -ENOMEM;
+	entry_obj->obj =
+		i915_gem_object_create(&(s->vgpu->gvt->dev_priv->drm),
+				       roundup(bb_size, PAGE_SIZE));
+	if (IS_ERR(entry_obj->obj)) {
+		ret = PTR_ERR(entry_obj->obj);
+		goto free_entry;
+	}
 	entry_obj->len = bb_size;
 	INIT_LIST_HEAD(&entry_obj->list);
 
 	ret = i915_gem_object_get_pages(entry_obj->obj);
 	if (ret)
-		return ret;
+		goto put_obj;
 
 	i915_gem_object_pin_pages(entry_obj->obj);
 
@@ -1673,7 +1676,7 @@ static int perform_bb_shadow(struct parser_exec_state *s)
 				gma, gma + bb_size, dst);
 	if (ret) {
 		gvt_err("fail to copy guest ring buffer\n");
-		return ret;
+		goto unmap_src;
 	}
 
 	list_add(&entry_obj->list, &s->workload->shadow_bb);
@@ -1694,7 +1697,10 @@ static int perform_bb_shadow(struct parser_exec_state *s)
 	vunmap(dst);
 unpin_src:
 	i915_gem_object_unpin_pages(entry_obj->obj);
-
+put_obj:
+	i915_gem_object_put(entry_obj->obj);
+free_entry:
+	kfree(entry_obj);
 	return ret;
 }
 
@@ -2707,31 +2713,31 @@ static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 	struct drm_device *dev = &wa_ctx->workload->vgpu->gvt->dev_priv->drm;
 	int ctx_size = wa_ctx->indirect_ctx.size;
 	unsigned long guest_gma = wa_ctx->indirect_ctx.guest_gma;
+	struct drm_i915_gem_object *obj;
 	int ret = 0;
 	void *dest = NULL;
 
-	wa_ctx->indirect_ctx.obj = i915_gem_object_create(dev,
-			round_up(ctx_size + CACHELINE_BYTES, PAGE_SIZE));
-	if (wa_ctx->indirect_ctx.obj == NULL)
-		return -ENOMEM;
+	obj = i915_gem_object_create(dev,
+				     roundup(ctx_size + CACHELINE_BYTES,
+					     PAGE_SIZE));
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
 
-	ret = i915_gem_object_get_pages(wa_ctx->indirect_ctx.obj);
+	ret = i915_gem_object_get_pages(obj);
 	if (ret)
-		return ret;
+		goto put_obj;
 
-	i915_gem_object_pin_pages(wa_ctx->indirect_ctx.obj);
+	i915_gem_object_pin_pages(obj);
 
 	/* get the va of the shadow batch buffer */
-	dest = (void *)vmap_batch(wa_ctx->indirect_ctx.obj, 0,
-			ctx_size + CACHELINE_BYTES);
+	dest = (void *)vmap_batch(obj, 0, ctx_size + CACHELINE_BYTES);
 	if (!dest) {
 		gvt_err("failed to vmap shadow indirect ctx\n");
 		ret = -ENOMEM;
 		goto unpin_src;
 	}
 
-	ret = i915_gem_object_set_to_cpu_domain(wa_ctx->indirect_ctx.obj,
-			false);
+	ret = i915_gem_object_set_to_cpu_domain(obj, false);
 	if (ret) {
 		gvt_err("failed to set shadow indirect ctx to CPU\n");
 		goto unmap_src;
@@ -2746,16 +2752,18 @@ static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 				guest_gma, guest_gma + ctx_size, dest);
 	if (ret) {
 		gvt_err("fail to copy guest indirect ctx\n");
-		return ret;
+		goto unmap_src;
 	}
 
+	wa_ctx->indirect_ctx.obj = obj;
 	return 0;
 
 unmap_src:
 	vunmap(dest);
 unpin_src:
 	i915_gem_object_unpin_pages(wa_ctx->indirect_ctx.obj);
-
+put_obj:
+	i915_gem_object_put(wa_ctx->indirect_ctx.obj);
 	return ret;
 }
 
-- 
2.9.3


* [PATCH 04/12] drm/i915: Catch premature unpinning of pages
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (2 preceding siblings ...)
  2016-10-19 10:11 ` [PATCH 03/12] drm/i915/gvt: i915_gem_object_create() returns an error pointer Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:26   ` Joonas Lahtinen
  2016-10-19 10:11 ` [PATCH 05/12] drm/i915/gvt: Use the returned VMA to provide the virtual address Chris Wilson
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

Try to catch the violation of unpinning the backing storage whilst it is
still bound to the GPU.
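
The invariant being asserted is that every VMA bound into a GPU address
space holds a pin on the backing pages, so pages_pin_count may never drop
below bind_count; unlike BUG_ON, GEM_BUG_ON is expected to compile away on
non-debug builds. A hypothetical sequence the new assert would catch
(premature_unpin() is illustrative, the calls are the usual i915 ones):

static void premature_unpin(struct drm_i915_gem_object *obj)
{
	struct i915_vma *vma;

	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0); /* bind pins pages */
	if (IS_ERR(vma))
		return;

	/* the pages are still needed by the bound VMA above; unpinning
	 * here drops pages_pin_count below bind_count and the assert fires
	 */
	i915_gem_object_unpin_pages(obj);
}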

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 33c44c631bab..6c8a104b42ed 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3181,14 +3181,15 @@ i915_gem_object_get_page(struct drm_i915_gem_object *obj, int n)
 
 static inline void i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
 {
-	BUG_ON(obj->pages == NULL);
+	GEM_BUG_ON(obj->pages == NULL);
 	obj->pages_pin_count++;
 }
 
 static inline void i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 {
-	BUG_ON(obj->pages_pin_count == 0);
+	GEM_BUG_ON(obj->pages_pin_count == 0);
 	obj->pages_pin_count--;
+	GEM_BUG_ON(obj->pages_pin_count < obj->bind_count);
 }
 
 enum i915_map_type {
-- 
2.9.3


* [PATCH 05/12] drm/i915/gvt: Use the returned VMA to provide the virtual address
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (3 preceding siblings ...)
  2016-10-19 10:11 ` [PATCH 04/12] drm/i915: Catch premature unpinning of pages Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:11 ` [PATCH 06/12] drm/i915/gvt: Remove dangerous unpin of backing storage of bound GPU object Chris Wilson
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

The purpose of returning the just-pinned VMA is so that we can use the
information within it, such as its address. It should also be tracked and
used as the cookie to unpin...
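
A sketch of the intended pattern (pin_and_record() is illustrative; the
i915 calls are those in the diff, and unpinning via the VMA assumes the
usual i915_vma_unpin() helper):

static int pin_and_record(struct drm_i915_gem_object *obj,
			  u32 *offset, struct i915_vma **cookie)
{
	struct i915_vma *vma;

	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, CACHELINE_BYTES, 0);
	if (IS_ERR(vma))
		return PTR_ERR(vma);

	*offset = i915_ggtt_offset(vma); /* already suitably aligned */
	*cookie = vma;	/* keep it: the cookie for a later i915_vma_unpin() */
	return 0;
}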

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
---
 drivers/gpu/drm/i915/gvt/execlist.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/execlist.c b/drivers/gpu/drm/i915/gvt/execlist.c
index a9d04c378755..cfdd3ae13fb0 100644
--- a/drivers/gpu/drm/i915/gvt/execlist.c
+++ b/drivers/gpu/drm/i915/gvt/execlist.c
@@ -385,8 +385,6 @@ static int set_gma_to_bb_cmd(struct intel_shadow_bb_entry *entry_obj,
 static void prepare_shadow_batch_buffer(struct intel_vgpu_workload *workload)
 {
 	int gmadr_bytes = workload->vgpu->gvt->device_info.gmadr_bytes_in_cmd;
-	struct i915_vma *vma;
-	unsigned long gma;
 
 	/* pin the gem object to ggtt */
 	if (!list_empty(&workload->shadow_bb)) {
@@ -398,8 +396,10 @@ static void prepare_shadow_batch_buffer(struct intel_vgpu_workload *workload)
 
 		list_for_each_entry_safe(entry_obj, temp, &workload->shadow_bb,
 				list) {
+			struct i915_vma *vma;
+
 			vma = i915_gem_object_ggtt_pin(entry_obj->obj, NULL, 0,
-					0, 0);
+						       4, 0);
 			if (IS_ERR(vma)) {
 				gvt_err("Cannot pin\n");
 				return;
@@ -407,9 +407,9 @@ static void prepare_shadow_batch_buffer(struct intel_vgpu_workload *workload)
 			i915_gem_object_unpin_pages(entry_obj->obj);
 
 			/* update the relocate gma with shadow batch buffer*/
-			gma = i915_gem_object_ggtt_offset(entry_obj->obj, NULL);
-			WARN_ON(!IS_ALIGNED(gma, 4));
-			set_gma_to_bb_cmd(entry_obj, gma, gmadr_bytes);
+			set_gma_to_bb_cmd(entry_obj,
+					  i915_ggtt_offset(vma),
+					  gmadr_bytes);
 		}
 	}
 }
@@ -441,7 +441,6 @@ static int update_wa_ctx_2_shadow_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 static void prepare_shadow_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 {
 	struct i915_vma *vma;
-	unsigned long gma;
 	unsigned char *per_ctx_va =
 		(unsigned char *)wa_ctx->indirect_ctx.shadow_va +
 		wa_ctx->indirect_ctx.size;
@@ -449,16 +448,15 @@ static void prepare_shadow_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 	if (wa_ctx->indirect_ctx.size == 0)
 		return;
 
-	vma = i915_gem_object_ggtt_pin(wa_ctx->indirect_ctx.obj, NULL, 0, 0, 0);
+	vma = i915_gem_object_ggtt_pin(wa_ctx->indirect_ctx.obj, NULL,
+				       0, CACHELINE_BYTES, 0);
 	if (IS_ERR(vma)) {
 		gvt_err("Cannot pin indirect ctx obj\n");
 		return;
 	}
 	i915_gem_object_unpin_pages(wa_ctx->indirect_ctx.obj);
 
-	gma = i915_gem_object_ggtt_offset(wa_ctx->indirect_ctx.obj, NULL);
-	WARN_ON(!IS_ALIGNED(gma, CACHELINE_BYTES));
-	wa_ctx->indirect_ctx.shadow_gma = gma;
+	wa_ctx->indirect_ctx.shadow_gma = i915_ggtt_offset(vma);
 
 	wa_ctx->per_ctx.shadow_gma = *((unsigned int *)per_ctx_va + 1);
 	memset(per_ctx_va, 0, CACHELINE_BYTES);
-- 
2.9.3


* [PATCH 06/12] drm/i915/gvt: Remove dangerous unpin of backing storage of bound GPU object
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (4 preceding siblings ...)
  2016-10-19 10:11 ` [PATCH 05/12] drm/i915/gvt: Use the returned VMA to provide the virtual address Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:11 ` [PATCH 07/12] drm/i915/gvt: Hold a reference on the request Chris Wilson
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

Unpinning the pages prior to the object being released from the GPU may
allow the GPU to read and write into system pages (i.e. use after free
by the hw).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gvt/execlist.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/execlist.c b/drivers/gpu/drm/i915/gvt/execlist.c
index cfdd3ae13fb0..b79d148a4e32 100644
--- a/drivers/gpu/drm/i915/gvt/execlist.c
+++ b/drivers/gpu/drm/i915/gvt/execlist.c
@@ -404,7 +404,11 @@ static void prepare_shadow_batch_buffer(struct intel_vgpu_workload *workload)
 				gvt_err("Cannot pin\n");
 				return;
 			}
-			i915_gem_object_unpin_pages(entry_obj->obj);
+
+			/* FIXME: we are not tracking our pinned VMA leaving it
+			 * up to the core to fix up the stray pin_count upon
+			 * free.
+			 */
 
 			/* update the relocate gma with shadow batch buffer*/
 			set_gma_to_bb_cmd(entry_obj,
@@ -454,7 +458,11 @@ static void prepare_shadow_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 		gvt_err("Cannot pin indirect ctx obj\n");
 		return;
 	}
-	i915_gem_object_unpin_pages(wa_ctx->indirect_ctx.obj);
+
+	/* FIXME: we are not tracking our pinned VMA leaving it
+	 * up to the core to fix up the stray pin_count upon
+	 * free.
+	 */
 
 	wa_ctx->indirect_ctx.shadow_gma = i915_ggtt_offset(vma);
 
-- 
2.9.3


* [PATCH 07/12] drm/i915/gvt: Hold a reference on the request
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (5 preceding siblings ...)
  2016-10-19 10:11 ` [PATCH 06/12] drm/i915/gvt: Remove dangerous unpin of backing storage of bound GPU object Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:32   ` Zhenyu Wang
  2016-10-20  0:22   ` Zhenyu Wang
  2016-10-19 10:11 ` [PATCH 08/12] drm/i915/gvt: Stop checking for impossible interrupts from a kthread Chris Wilson
                   ` (6 subsequent siblings)
  13 siblings, 2 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

The workload took a pointer to the request, and even waited upon it,
without holding a reference on the request. Take that reference
explicitly, and fix up the error path following request allocation that
missed flushing the request.
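
In outline (a sketch; emit_workload() is a hypothetical stand-in for the
body of dispatch_workload(), the rest follows the diff):

static int dispatch_sketch(struct intel_vgpu_workload *workload,
			   struct intel_engine_cs *engine,
			   struct i915_gem_context *ctx)
{
	struct drm_i915_gem_request *rq;
	int ret;

	rq = i915_gem_request_alloc(engine, ctx);
	if (IS_ERR(rq))		/* never NULL, so no IS_ERR_OR_NULL */
		return PTR_ERR(rq);

	workload->req = i915_gem_request_get(rq);	/* our reference */

	ret = emit_workload(workload);
	if (ret)
		i915_gem_request_put(fetch_and_zero(&workload->req));

	/* rq may already carry global state; always hand it back */
	i915_add_request_no_flush(rq);
	return ret;
}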

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gvt/scheduler.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index b15cdf5978a9..224f19ae61ab 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -163,6 +163,7 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
 	int ring_id = workload->ring_id;
 	struct i915_gem_context *shadow_ctx = workload->vgpu->shadow_ctx;
 	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
+	struct drm_i915_gem_request *rq;
 	int ret;
 
 	gvt_dbg_sched("ring id %d prepare to dispatch workload %p\n",
@@ -171,17 +172,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
 	shadow_ctx->desc_template = workload->ctx_desc.addressing_mode <<
 				    GEN8_CTX_ADDRESSING_MODE_SHIFT;
 
-	workload->req = i915_gem_request_alloc(dev_priv->engine[ring_id],
-					       shadow_ctx);
-	if (IS_ERR_OR_NULL(workload->req)) {
+	rq = i915_gem_request_alloc(dev_priv->engine[ring_id], shadow_ctx);
+	if (IS_ERR(rq)) {
 		gvt_err("fail to allocate gem request\n");
-		workload->status = PTR_ERR(workload->req);
-		workload->req = NULL;
+		workload->status = PTR_ERR(rq);
 		return workload->status;
 	}
 
-	gvt_dbg_sched("ring id %d get i915 gem request %p\n",
-			ring_id, workload->req);
+	gvt_dbg_sched("ring id %d get i915 gem request %p\n", ring_id, rq);
+
+	workload->req = i915_gem_request_get(rq);
 
 	mutex_lock(&gvt->lock);
 
@@ -208,16 +208,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
 	gvt_dbg_sched("ring id %d submit workload to i915 %p\n",
 			ring_id, workload->req);
 
-	i915_add_request_no_flush(workload->req);
-
+	i915_add_request_no_flush(rq);
 	workload->dispatched = true;
 	return 0;
 err:
 	workload->status = ret;
-	if (workload->req)
-		workload->req = NULL;
+	i915_gem_request_put(fetch_and_zero(&workload->req));
 
 	mutex_unlock(&gvt->lock);
+
+	i915_add_request_no_flush(rq);
 	return ret;
 }
 
@@ -458,6 +458,8 @@ static int workload_thread(void *priv)
 
 		complete_current_workload(gvt, ring_id);
 
+		i915_gem_request_put(fetch_and_zero(&workload->req));
+
 		if (need_force_wake)
 			intel_uncore_forcewake_put(gvt->dev_priv,
 					FORCEWAKE_ALL);
-- 
2.9.3


* [PATCH 08/12] drm/i915/gvt: Stop checking for impossible interrupts from a kthread
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (6 preceding siblings ...)
  2016-10-19 10:11 ` [PATCH 07/12] drm/i915/gvt: Hold a reference on the request Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:11 ` [PATCH 09/12] drm/i915/gvt: Stop waiting whilst holding struct_mutex Chris Wilson
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

The kthread will not be interrupted, so don't even bother checking.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
---
 drivers/gpu/drm/i915/gvt/scheduler.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index 224f19ae61ab..310435498932 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -423,12 +423,7 @@ static int workload_thread(void *priv)
 		/*
 		 * Always take i915 big lock first
 		 */
-		ret = i915_mutex_lock_interruptible(&gvt->dev_priv->drm);
-		if (ret < 0) {
-			gvt_err("i915 submission is not available, retry\n");
-			schedule_timeout(1);
-			continue;
-		}
+		mutex_lock(&gvt->dev_priv->drm.struct_mutex);
 
 		gvt_dbg_sched("ring id %d will dispatch workload %p\n",
 				workload->ring_id, workload);
@@ -447,7 +442,7 @@ static int workload_thread(void *priv)
 				workload->ring_id, workload);
 
 		workload->status = i915_wait_request(workload->req,
-						     I915_WAIT_INTERRUPTIBLE | I915_WAIT_LOCKED,
+						     I915_WAIT_LOCKED,
 						     NULL, NULL);
 		if (workload->status != 0)
 			gvt_err("fail to wait workload, skip\n");
-- 
2.9.3


* [PATCH 09/12] drm/i915/gvt: Stop waiting whilst holding struct_mutex
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (7 preceding siblings ...)
  2016-10-19 10:11 ` [PATCH 08/12] drm/i915/gvt: Stop checking for impossible interrupts from a kthread Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:11 ` [PATCH 10/12] drm/i915/gvt: Use common mapping routines for indirect_ctx object Chris Wilson
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

For whatever reason, the gvt scheduler runs synchronously. At the very
least, let's run synchronously without holding the struct_mutex.

v2: cut'n'paste mutex_lock instead of unlock.
Replace long hold of struct_mutex with a mutex to serialise the worker
threads.
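
The resulting loop body, in outline (a sketch; run_one_workload() is
illustrative, the locks and calls are those in the diff):

static DEFINE_MUTEX(scheduler_mutex); /* serialises the per-ring workers */

static void run_one_workload(struct intel_gvt *gvt, int ring_id,
			     struct intel_vgpu_workload *workload)
{
	struct drm_i915_private *dev_priv = gvt->dev_priv;

	mutex_lock(&scheduler_mutex);

	mutex_lock(&dev_priv->drm.struct_mutex);
	if (dispatch_workload(workload) == 0) {
		mutex_unlock(&dev_priv->drm.struct_mutex);
		/* wait without struct_mutex so the rest of i915 can run */
		workload->status = i915_wait_request(workload->req,
						     0, NULL, NULL);
		mutex_lock(&dev_priv->drm.struct_mutex);
	}
	complete_current_workload(gvt, ring_id);
	mutex_unlock(&dev_priv->drm.struct_mutex);

	mutex_unlock(&scheduler_mutex);
}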

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gvt/scheduler.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index 310435498932..9957d8832c63 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -390,6 +390,8 @@ struct workload_thread_param {
 	int ring_id;
 };
 
+static DEFINE_MUTEX(scheduler_mutex);
+
 static int workload_thread(void *priv)
 {
 	struct workload_thread_param *p = (struct workload_thread_param *)priv;
@@ -414,17 +416,14 @@ static int workload_thread(void *priv)
 		if (kthread_should_stop())
 			break;
 
+		mutex_lock(&scheduler_mutex);
+
 		gvt_dbg_sched("ring id %d next workload %p vgpu %d\n",
 				workload->ring_id, workload,
 				workload->vgpu->id);
 
 		intel_runtime_pm_get(gvt->dev_priv);
 
-		/*
-		 * Always take i915 big lock first
-		 */
-		mutex_lock(&gvt->dev_priv->drm.struct_mutex);
-
 		gvt_dbg_sched("ring id %d will dispatch workload %p\n",
 				workload->ring_id, workload);
 
@@ -432,7 +431,10 @@ static int workload_thread(void *priv)
 			intel_uncore_forcewake_get(gvt->dev_priv,
 					FORCEWAKE_ALL);
 
+		mutex_lock(&gvt->dev_priv->drm.struct_mutex);
 		ret = dispatch_workload(workload);
+		mutex_unlock(&gvt->dev_priv->drm.struct_mutex);
+
 		if (ret) {
 			gvt_err("fail to dispatch workload, skip\n");
 			goto complete;
@@ -442,8 +444,7 @@ static int workload_thread(void *priv)
 				workload->ring_id, workload);
 
 		workload->status = i915_wait_request(workload->req,
-						     I915_WAIT_LOCKED,
-						     NULL, NULL);
+						     0, NULL, NULL);
 		if (workload->status != 0)
 			gvt_err("fail to wait workload, skip\n");
 
@@ -451,7 +452,9 @@ static int workload_thread(void *priv)
 		gvt_dbg_sched("will complete workload %p\n, status: %d\n",
 				workload, workload->status);
 
+		mutex_lock(&gvt->dev_priv->drm.struct_mutex);
 		complete_current_workload(gvt, ring_id);
+		mutex_unlock(&gvt->dev_priv->drm.struct_mutex);
 
 		i915_gem_request_put(fetch_and_zero(&workload->req));
 
@@ -459,9 +462,10 @@ static int workload_thread(void *priv)
 			intel_uncore_forcewake_put(gvt->dev_priv,
 					FORCEWAKE_ALL);
 
-		mutex_unlock(&gvt->dev_priv->drm.struct_mutex);
-
 		intel_runtime_pm_put(gvt->dev_priv);
+
+		mutex_unlock(&scheduler_mutex);
+
 	}
 	return 0;
 }
-- 
2.9.3


* [PATCH 10/12] drm/i915/gvt: Use common mapping routines for indirect_ctx object
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (8 preceding siblings ...)
  2016-10-19 10:11 ` [PATCH 09/12] drm/i915/gvt: Stop waiting whilst holding struct_mutex Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:26   ` Zhenyu Wang
  2016-10-19 10:11 ` [PATCH 11/12] drm/i915/gvt: Use common mapping routines for shadow_bb object Chris Wilson
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

We have the ability to map an object, so use it rather than open-coding
it badly.
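
The common helpers pair the page pin with the vmap (a sketch;
copy_into_obj() is illustrative, the two i915 calls are from the diff):

static int copy_into_obj(struct drm_i915_gem_object *obj,
			 const void *src, size_t len)
{
	void *vaddr;

	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
	if (IS_ERR(vaddr))
		return PTR_ERR(vaddr);

	memcpy(vaddr, src, len);	/* CPU access via the vmap */
	i915_gem_object_unpin_map(obj);	/* releases the pin taken above */
	return 0;
}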

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gvt/cmd_parser.c | 28 +++++++++-------------------
 drivers/gpu/drm/i915/gvt/execlist.c   |  2 +-
 2 files changed, 10 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
index 464fc3c9935b..2b166094444b 100644
--- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
+++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
@@ -2715,7 +2715,7 @@ static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 	unsigned long guest_gma = wa_ctx->indirect_ctx.guest_gma;
 	struct drm_i915_gem_object *obj;
 	int ret = 0;
-	void *dest = NULL;
+	void *map;
 
 	obj = i915_gem_object_create(dev,
 				     roundup(ctx_size + CACHELINE_BYTES,
@@ -2723,18 +2723,12 @@ static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	ret = i915_gem_object_get_pages(obj);
-	if (ret)
-		goto put_obj;
-
-	i915_gem_object_pin_pages(obj);
-
 	/* get the va of the shadow batch buffer */
-	dest = (void *)vmap_batch(obj, 0, ctx_size + CACHELINE_BYTES);
-	if (!dest) {
+	map = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(map)) {
 		gvt_err("failed to vmap shadow indirect ctx\n");
-		ret = -ENOMEM;
-		goto unpin_src;
+		ret = PTR_ERR(map);
+		goto put_obj;
 	}
 
 	ret = i915_gem_object_set_to_cpu_domain(obj, false);
@@ -2743,25 +2737,21 @@ static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 		goto unmap_src;
 	}
 
-	wa_ctx->indirect_ctx.shadow_va = dest;
-
-	memset(dest, 0, round_up(ctx_size + CACHELINE_BYTES, PAGE_SIZE));
-
 	ret = copy_gma_to_hva(wa_ctx->workload->vgpu,
 				wa_ctx->workload->vgpu->gtt.ggtt_mm,
-				guest_gma, guest_gma + ctx_size, dest);
+				guest_gma, guest_gma + ctx_size,
+				map);
 	if (ret) {
 		gvt_err("fail to copy guest indirect ctx\n");
 		goto unmap_src;
 	}
 
 	wa_ctx->indirect_ctx.obj = obj;
+	wa_ctx->indirect_ctx.shadow_va = map;
 	return 0;
 
 unmap_src:
-	vunmap(dest);
-unpin_src:
-	i915_gem_object_unpin_pages(wa_ctx->indirect_ctx.obj);
+	i915_gem_object_unpin_map(obj);
 put_obj:
 	i915_gem_object_put(wa_ctx->indirect_ctx.obj);
 	return ret;
diff --git a/drivers/gpu/drm/i915/gvt/execlist.c b/drivers/gpu/drm/i915/gvt/execlist.c
index b79d148a4e32..d8a6d6366899 100644
--- a/drivers/gpu/drm/i915/gvt/execlist.c
+++ b/drivers/gpu/drm/i915/gvt/execlist.c
@@ -517,8 +517,8 @@ static void release_shadow_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 	if (wa_ctx->indirect_ctx.size == 0)
 		return;
 
+	i915_gem_object_unpin_map(wa_ctx->indirect_ctx.obj);
 	i915_gem_object_put(wa_ctx->indirect_ctx.obj);
-	kvfree(wa_ctx->indirect_ctx.shadow_va);
 }
 
 static int complete_execlist_workload(struct intel_vgpu_workload *workload)
-- 
2.9.3


* [PATCH 11/12] drm/i915/gvt: Use common mapping routines for shadow_bb object
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (9 preceding siblings ...)
  2016-10-19 10:11 ` [PATCH 10/12] drm/i915/gvt: Use common mapping routines for indirect_ctx object Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:11 ` [PATCH 12/12] drm/i915/gvt: Remove defunct vmap_batch() Chris Wilson
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

We have the ability to map an object, so use it rather than open-coding
it badly. Note that the object remains permanently pinned; this is poor
practice.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gvt/cmd_parser.c | 21 ++++++---------------
 drivers/gpu/drm/i915/gvt/execlist.c   |  2 +-
 2 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
index 2b166094444b..a91405df394d 100644
--- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
+++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
@@ -1648,18 +1648,10 @@ static int perform_bb_shadow(struct parser_exec_state *s)
 	entry_obj->len = bb_size;
 	INIT_LIST_HEAD(&entry_obj->list);
 
-	ret = i915_gem_object_get_pages(entry_obj->obj);
-	if (ret)
+	dst = i915_gem_object_pin_map(entry_obj->obj, I915_MAP_WB);
+	if (IS_ERR(dst)) {
+		ret = PTR_ERR(dst);
 		goto put_obj;
-
-	i915_gem_object_pin_pages(entry_obj->obj);
-
-	/* get the va of the shadow batch buffer */
-	dst = (void *)vmap_batch(entry_obj->obj, 0, bb_size);
-	if (!dst) {
-		gvt_err("failed to vmap shadow batch\n");
-		ret = -ENOMEM;
-		goto unpin_src;
 	}
 
 	ret = i915_gem_object_set_to_cpu_domain(entry_obj->obj, false);
@@ -1673,7 +1665,8 @@ static int perform_bb_shadow(struct parser_exec_state *s)
 
 	/* copy batch buffer to shadow batch buffer*/
 	ret = copy_gma_to_hva(s->vgpu, s->vgpu->gtt.ggtt_mm,
-				gma, gma + bb_size, dst);
+			      gma, gma + bb_size,
+			      dst);
 	if (ret) {
 		gvt_err("fail to copy guest ring buffer\n");
 		goto unmap_src;
@@ -1694,9 +1687,7 @@ static int perform_bb_shadow(struct parser_exec_state *s)
 	return 0;
 
 unmap_src:
-	vunmap(dst);
-unpin_src:
-	i915_gem_object_unpin_pages(entry_obj->obj);
+	i915_gem_object_unpin_map(entry_obj->obj);
 put_obj:
 	i915_gem_object_put(entry_obj->obj);
 free_entry:
diff --git a/drivers/gpu/drm/i915/gvt/execlist.c b/drivers/gpu/drm/i915/gvt/execlist.c
index d8a6d6366899..3ed78f7e76f2 100644
--- a/drivers/gpu/drm/i915/gvt/execlist.c
+++ b/drivers/gpu/drm/i915/gvt/execlist.c
@@ -504,8 +504,8 @@ static void release_shadow_batch_buffer(struct intel_vgpu_workload *workload)
 
 		list_for_each_entry_safe(entry_obj, temp, &workload->shadow_bb,
 					 list) {
+			i915_gem_object_unpin_map(entry_obj->obj);
 			i915_gem_object_put(entry_obj->obj);
-			kvfree(entry_obj->va);
 			list_del(&entry_obj->list);
 			kfree(entry_obj);
 		}
-- 
2.9.3


* [PATCH 12/12] drm/i915/gvt: Remove defunct vmap_batch()
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (10 preceding siblings ...)
  2016-10-19 10:11 ` [PATCH 11/12] drm/i915/gvt: Use common mapping routines for shadow_bb object Chris Wilson
@ 2016-10-19 10:11 ` Chris Wilson
  2016-10-19 10:45 ` gvt gem fixes Zhenyu Wang
  2016-10-19 13:54 ` ✗ Fi.CI.BAT: warning for series starting with [01/12] drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/ Patchwork
  13 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:11 UTC (permalink / raw)
  To: intel-gfx

This code was removed from i915_cmd_parser.c, but an obsolete version
still wound up being duplicated into gvt/cmd_parser.c. Good riddance.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gvt/cmd_parser.c | 38 -----------------------------------
 1 file changed, 38 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
index a91405df394d..84759ef8bb17 100644
--- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
+++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
@@ -1581,44 +1581,6 @@ static uint32_t find_bb_size(struct parser_exec_state *s)
 	return bb_size;
 }
 
-static u32 *vmap_batch(struct drm_i915_gem_object *obj,
-		       unsigned int start, unsigned int len)
-{
-	int i;
-	void *addr = NULL;
-	struct sg_page_iter sg_iter;
-	int first_page = start >> PAGE_SHIFT;
-	int last_page = (len + start + 4095) >> PAGE_SHIFT;
-	int npages = last_page - first_page;
-	struct page **pages;
-
-	pages = drm_malloc_ab(npages, sizeof(*pages));
-	if (pages == NULL) {
-		DRM_DEBUG_DRIVER("Failed to get space for pages\n");
-		goto finish;
-	}
-
-	i = 0;
-	for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents,
-			 first_page) {
-		pages[i++] = sg_page_iter_page(&sg_iter);
-		if (i == npages)
-			break;
-	}
-
-	addr = vmap(pages, i, 0, PAGE_KERNEL);
-	if (addr == NULL) {
-		DRM_DEBUG_DRIVER("Failed to vmap pages\n");
-		goto finish;
-	}
-
-finish:
-	if (pages)
-		drm_free_large(pages);
-	return (u32 *)addr;
-}
-
-
 static int perform_bb_shadow(struct parser_exec_state *s)
 {
 	struct intel_shadow_bb_entry *entry_obj;
-- 
2.9.3


* Re: [PATCH 10/12] drm/i915/gvt: Use common mapping routines for indirect_ctx object
  2016-10-19 10:11 ` [PATCH 10/12] drm/i915/gvt: Use common mapping routines for indirect_ctx object Chris Wilson
@ 2016-10-19 10:26   ` Zhenyu Wang
  0 siblings, 0 replies; 27+ messages in thread
From: Zhenyu Wang @ 2016-10-19 10:26 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx


On 2016.10.19 11:11:45 +0100, Chris Wilson wrote:
> We have the ability to map an object, so use it rather than opencode it
> badly.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Planned to fix these mappings too, but obviously not as fast as you...

Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>

> ---
>  drivers/gpu/drm/i915/gvt/cmd_parser.c | 28 +++++++++-------------------
>  drivers/gpu/drm/i915/gvt/execlist.c   |  2 +-
>  2 files changed, 10 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
> index 464fc3c9935b..2b166094444b 100644
> --- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
> +++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
> @@ -2715,7 +2715,7 @@ static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
>  	unsigned long guest_gma = wa_ctx->indirect_ctx.guest_gma;
>  	struct drm_i915_gem_object *obj;
>  	int ret = 0;
> -	void *dest = NULL;
> +	void *map;
>  
>  	obj = i915_gem_object_create(dev,
>  				     roundup(ctx_size + CACHELINE_BYTES,
> @@ -2723,18 +2723,12 @@ static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
>  	if (IS_ERR(obj))
>  		return PTR_ERR(obj);
>  
> -	ret = i915_gem_object_get_pages(obj);
> -	if (ret)
> -		goto put_obj;
> -
> -	i915_gem_object_pin_pages(obj);
> -
>  	/* get the va of the shadow batch buffer */
> -	dest = (void *)vmap_batch(obj, 0, ctx_size + CACHELINE_BYTES);
> -	if (!dest) {
> +	map = i915_gem_object_pin_map(obj, I915_MAP_WB);
> +	if (IS_ERR(map)) {
>  		gvt_err("failed to vmap shadow indirect ctx\n");
> -		ret = -ENOMEM;
> -		goto unpin_src;
> +		ret = PTR_ERR(map);
> +		goto put_obj;
>  	}
>  
>  	ret = i915_gem_object_set_to_cpu_domain(obj, false);
> @@ -2743,25 +2737,21 @@ static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
>  		goto unmap_src;
>  	}
>  
> -	wa_ctx->indirect_ctx.shadow_va = dest;
> -
> -	memset(dest, 0, round_up(ctx_size + CACHELINE_BYTES, PAGE_SIZE));
> -
>  	ret = copy_gma_to_hva(wa_ctx->workload->vgpu,
>  				wa_ctx->workload->vgpu->gtt.ggtt_mm,
> -				guest_gma, guest_gma + ctx_size, dest);
> +				guest_gma, guest_gma + ctx_size,
> +				map);
>  	if (ret) {
>  		gvt_err("fail to copy guest indirect ctx\n");
>  		goto unmap_src;
>  	}
>  
>  	wa_ctx->indirect_ctx.obj = obj;
> +	wa_ctx->indirect_ctx.shadow_va = map;
>  	return 0;
>  
>  unmap_src:
> -	vunmap(dest);
> -unpin_src:
> -	i915_gem_object_unpin_pages(wa_ctx->indirect_ctx.obj);
> +	i915_gem_object_unpin_map(obj);
>  put_obj:
>  	i915_gem_object_put(wa_ctx->indirect_ctx.obj);
>  	return ret;
> diff --git a/drivers/gpu/drm/i915/gvt/execlist.c b/drivers/gpu/drm/i915/gvt/execlist.c
> index b79d148a4e32..d8a6d6366899 100644
> --- a/drivers/gpu/drm/i915/gvt/execlist.c
> +++ b/drivers/gpu/drm/i915/gvt/execlist.c
> @@ -517,8 +517,8 @@ static void release_shadow_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx)
>  	if (wa_ctx->indirect_ctx.size == 0)
>  		return;
>  
> +	i915_gem_object_unpin_map(wa_ctx->indirect_ctx.obj);
>  	i915_gem_object_put(wa_ctx->indirect_ctx.obj);
> -	kvfree(wa_ctx->indirect_ctx.shadow_va);
>  }
>  
>  static int complete_execlist_workload(struct intel_vgpu_workload *workload)
> -- 
> 2.9.3
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827


* Re: [PATCH 04/12] drm/i915: Catch premature unpinning of pages
  2016-10-19 10:11 ` [PATCH 04/12] drm/i915: Catch premature unpinning of pages Chris Wilson
@ 2016-10-19 10:26   ` Joonas Lahtinen
  0 siblings, 0 replies; 27+ messages in thread
From: Joonas Lahtinen @ 2016-10-19 10:26 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On ke, 2016-10-19 at 11:11 +0100, Chris Wilson wrote:
> Try to catch the violation of unpinning the backing storage whilst still
> bound to the GPU.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

* Re: [PATCH 07/12] drm/i915/gvt: Hold a reference on the request
  2016-10-19 10:11 ` [PATCH 07/12] drm/i915/gvt: Hold a reference on the request Chris Wilson
@ 2016-10-19 10:32   ` Zhenyu Wang
  2016-10-19 10:53     ` Chris Wilson
  2016-10-20  0:22   ` Zhenyu Wang
  1 sibling, 1 reply; 27+ messages in thread
From: Zhenyu Wang @ 2016-10-19 10:32 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx


On 2016.10.19 11:11:42 +0100, Chris Wilson wrote:
> The workload took a pointer to the request, and even waited upon,
> without holding a reference on the request. Take that reference
> explicitly and fix up the error path following request allocation that
> missed flushing the request.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/gvt/scheduler.c | 24 +++++++++++++-----------
>  1 file changed, 13 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> index b15cdf5978a9..224f19ae61ab 100644
> --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> @@ -163,6 +163,7 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
>  	int ring_id = workload->ring_id;
>  	struct i915_gem_context *shadow_ctx = workload->vgpu->shadow_ctx;
>  	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
> +	struct drm_i915_gem_request *rq;
>  	int ret;
>  
>  	gvt_dbg_sched("ring id %d prepare to dispatch workload %p\n",
> @@ -171,17 +172,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
>  	shadow_ctx->desc_template = workload->ctx_desc.addressing_mode <<
>  				    GEN8_CTX_ADDRESSING_MODE_SHIFT;
>  
> -	workload->req = i915_gem_request_alloc(dev_priv->engine[ring_id],
> -					       shadow_ctx);
> -	if (IS_ERR_OR_NULL(workload->req)) {
> +	rq = i915_gem_request_alloc(dev_priv->engine[ring_id], shadow_ctx);
> +	if (IS_ERR(rq)) {
>  		gvt_err("fail to allocate gem request\n");
> -		workload->status = PTR_ERR(workload->req);
> -		workload->req = NULL;
> +		workload->status = PTR_ERR(rq);
>  		return workload->status;
>  	}
>  
> -	gvt_dbg_sched("ring id %d get i915 gem request %p\n",
> -			ring_id, workload->req);
> +	gvt_dbg_sched("ring id %d get i915 gem request %p\n", ring_id, rq);
> +
> +	workload->req = i915_gem_request_get(rq);
>  
>  	mutex_lock(&gvt->lock);
>  
> @@ -208,16 +208,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
>  	gvt_dbg_sched("ring id %d submit workload to i915 %p\n",
>  			ring_id, workload->req);
>  
> -	i915_add_request_no_flush(workload->req);
> -
> +	i915_add_request_no_flush(rq);
>  	workload->dispatched = true;
>  	return 0;
>  err:
>  	workload->status = ret;
> -	if (workload->req)
> -		workload->req = NULL;
> +	i915_gem_request_put(fetch_and_zero(&workload->req));
>  
>  	mutex_unlock(&gvt->lock);

Do we really need to hold gvt->lock when putting the request?

> +
> +	i915_add_request_no_flush(rq);

Why still add the request in the error path?

>  	return ret;
>  }
>  
> @@ -458,6 +458,8 @@ static int workload_thread(void *priv)
>  
>  		complete_current_workload(gvt, ring_id);
>  
> +		i915_gem_request_put(fetch_and_zero(&workload->req));
> +
>  		if (need_force_wake)
>  			intel_uncore_forcewake_put(gvt->dev_priv,
>  					FORCEWAKE_ALL);
> -- 
> 2.9.3
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827


* Re: gvt gem fixes
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (11 preceding siblings ...)
  2016-10-19 10:11 ` [PATCH 12/12] drm/i915/gvt: Remove defunct vmap_batch() Chris Wilson
@ 2016-10-19 10:45 ` Zhenyu Wang
  2016-10-19 11:02   ` Chris Wilson
  2016-10-19 13:54 ` ✗ Fi.CI.BAT: warning for series starting with [01/12] drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/ Patchwork
  13 siblings, 1 reply; 27+ messages in thread
From: Zhenyu Wang @ 2016-10-19 10:45 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx


On 2016.10.19 11:11:35 +0100, Chris Wilson wrote:
> I think this is the set required to bring gvt into line, at least to
> unblock myself.

Thanks a lot, Chris. I'd like to merge this in the next pull request,
or let me know if you want it to be picked up by drm-intel directly.
If 4/12 is picked up alone, I'll skip that one in the gvt tree.

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827


* Re: [PATCH 07/12] drm/i915/gvt: Hold a reference on the request
  2016-10-19 10:32   ` Zhenyu Wang
@ 2016-10-19 10:53     ` Chris Wilson
  0 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 10:53 UTC (permalink / raw)
  To: Zhenyu Wang; +Cc: intel-gfx

On Wed, Oct 19, 2016 at 06:32:54PM +0800, Zhenyu Wang wrote:
> On 2016.10.19 11:11:42 +0100, Chris Wilson wrote:
> > The workload took a pointer to the request, and even waited upon,
> > without holding a reference on the request. Take that reference
> > explicitly and fix up the error path following request allocation that
> > missed flushing the request.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >  drivers/gpu/drm/i915/gvt/scheduler.c | 24 +++++++++++++-----------
> >  1 file changed, 13 insertions(+), 11 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> > index b15cdf5978a9..224f19ae61ab 100644
> > --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> > +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> > @@ -163,6 +163,7 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> >  	int ring_id = workload->ring_id;
> >  	struct i915_gem_context *shadow_ctx = workload->vgpu->shadow_ctx;
> >  	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
> > +	struct drm_i915_gem_request *rq;
> >  	int ret;
> >  
> >  	gvt_dbg_sched("ring id %d prepare to dispatch workload %p\n",
> > @@ -171,17 +172,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> >  	shadow_ctx->desc_template = workload->ctx_desc.addressing_mode <<
> >  				    GEN8_CTX_ADDRESSING_MODE_SHIFT;
> >  
> > -	workload->req = i915_gem_request_alloc(dev_priv->engine[ring_id],
> > -					       shadow_ctx);
> > -	if (IS_ERR_OR_NULL(workload->req)) {
> > +	rq = i915_gem_request_alloc(dev_priv->engine[ring_id], shadow_ctx);
> > +	if (IS_ERR(rq)) {
> >  		gvt_err("fail to allocate gem request\n");
> > -		workload->status = PTR_ERR(workload->req);
> > -		workload->req = NULL;
> > +		workload->status = PTR_ERR(rq);
> >  		return workload->status;
> >  	}
> >  
> > -	gvt_dbg_sched("ring id %d get i915 gem request %p\n",
> > -			ring_id, workload->req);
> > +	gvt_dbg_sched("ring id %d get i915 gem request %p\n", ring_id, rq);
> > +
> > +	workload->req = i915_gem_request_get(rq);
> >  
> >  	mutex_lock(&gvt->lock);
> >  
> > @@ -208,16 +208,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> >  	gvt_dbg_sched("ring id %d submit workload to i915 %p\n",
> >  			ring_id, workload->req);
> >  
> > -	i915_add_request_no_flush(workload->req);
> > -
> > +	i915_add_request_no_flush(rq);
> >  	workload->dispatched = true;
> >  	return 0;
> >  err:
> >  	workload->status = ret;
> > -	if (workload->req)
> > -		workload->req = NULL;
> > +	i915_gem_request_put(fetch_and_zero(&workload->req));
> >  
> >  	mutex_unlock(&gvt->lock);
> 
> Do we really need to hold gvt->lock when putting the request?

I was just updating the current workload->req = NULL, which was under the
lock, using the same code as later. You can drop i915_gem_request_put(rq)
afterwards, which is what I did at first, before deciding that using the
same style for both was nicer.

> > +
> > +	i915_add_request_no_flush(rq);
> 
> Why still add the request in the error path?

The request may have changed global state which is now associated with
the request. You have to pass it back upon completion, whether or not
you have added your own workload to the request.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

* Re: gvt gem fixes
  2016-10-19 10:45 ` gvt gem fixes Zhenyu Wang
@ 2016-10-19 11:02   ` Chris Wilson
  2016-10-20  0:33     ` Zhenyu Wang
  0 siblings, 1 reply; 27+ messages in thread
From: Chris Wilson @ 2016-10-19 11:02 UTC (permalink / raw)
  To: Zhenyu Wang; +Cc: intel-gfx

On Wed, Oct 19, 2016 at 06:45:51PM +0800, Zhenyu Wang wrote:
> On 2016.10.19 11:11:35 +0100, Chris Wilson wrote:
> > I think this is the set required to bring gvt into line, at least to
> > unblock myself.
> 
> Thanks a lot, Chris. I'd like to merge this in next pull request,
> or let me know you want to be picked up by drm-intel directly.
> If 4/12 would be picked up alone, I'll skip that one in gvt tree.

If you are confident in having the pull ready in the next day or so,
I'll just preface my series with these and they will evaporate after the
merge.

I'll apply 4/12 right now to get that out of the way.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

* ✗ Fi.CI.BAT: warning for series starting with [01/12] drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/
  2016-10-19 10:11 gvt gem fixes Chris Wilson
                   ` (12 preceding siblings ...)
  2016-10-19 10:45 ` gvt gem fixes Zhenyu Wang
@ 2016-10-19 13:54 ` Patchwork
  13 siblings, 0 replies; 27+ messages in thread
From: Patchwork @ 2016-10-19 13:54 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/12] drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/
URL   : https://patchwork.freedesktop.org/series/14013/
State : warning

== Summary ==

Series 14013v1 Series without cover letter
https://patchwork.freedesktop.org/api/1.0/series/14013/revisions/1/mbox/

Test gem_exec_suspend:
        Subgroup basic-s3:
                incomplete -> PASS       (fi-snb-2520m)
                fail       -> DMESG-WARN (fi-ilk-650)
Test kms_busy:
        Subgroup basic-flip-default-a:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup basic-flip-default-b:
                pass       -> DMESG-WARN (fi-ilk-650)
Test kms_cursor_legacy:
        Subgroup basic-busy-flip-before-cursor-legacy:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup basic-busy-flip-before-cursor-varying-size:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup basic-flip-after-cursor-legacy:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup basic-flip-after-cursor-varying-size:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup basic-flip-before-cursor-legacy:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup basic-flip-before-cursor-varying-size:
                pass       -> DMESG-WARN (fi-ilk-650)
Test kms_flip:
        Subgroup basic-flip-vs-dpms:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup basic-flip-vs-modeset:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup basic-flip-vs-wf_vblank:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup basic-plain-flip:
                pass       -> DMESG-WARN (fi-ilk-650)
Test kms_force_connector_basic:
        Subgroup force-load-detect:
                incomplete -> PASS       (fi-byt-j1900)
                incomplete -> PASS       (fi-ivb-3770)
Test kms_pipe_crc_basic:
        Subgroup hang-read-crc-pipe-a:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup hang-read-crc-pipe-b:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup nonblocking-crc-pipe-a:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup nonblocking-crc-pipe-a-frame-sequence:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup nonblocking-crc-pipe-b:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup nonblocking-crc-pipe-b-frame-sequence:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup read-crc-pipe-a:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup read-crc-pipe-a-frame-sequence:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup read-crc-pipe-b:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup read-crc-pipe-b-frame-sequence:
                pass       -> DMESG-WARN (fi-ilk-650)
        Subgroup suspend-read-crc-pipe-a:
                fail       -> DMESG-WARN (fi-ilk-650)
        Subgroup suspend-read-crc-pipe-b:
                fail       -> DMESG-WARN (fi-ilk-650)

fi-bdw-5557u     total:246  pass:231  dwarn:0   dfail:0   fail:0   skip:15 
fi-bsw-n3050     total:246  pass:203  dwarn:1   dfail:0   fail:0   skip:42 
fi-bxt-t5700     total:246  pass:216  dwarn:0   dfail:0   fail:0   skip:30 
fi-byt-j1900     total:246  pass:213  dwarn:1   dfail:0   fail:1   skip:31 
fi-byt-n2820     total:246  pass:209  dwarn:1   dfail:0   fail:1   skip:35 
fi-hsw-4770      total:246  pass:224  dwarn:0   dfail:0   fail:0   skip:22 
fi-hsw-4770r     total:246  pass:223  dwarn:1   dfail:0   fail:0   skip:22 
fi-ilk-650       total:246  pass:159  dwarn:25  dfail:0   fail:2   skip:60 
fi-ivb-3520m     total:246  pass:221  dwarn:0   dfail:0   fail:0   skip:25 
fi-ivb-3770      total:209  pass:187  dwarn:0   dfail:0   fail:0   skip:21 
fi-kbl-7200u     total:246  pass:222  dwarn:0   dfail:0   fail:0   skip:24 
fi-skl-6260u     total:246  pass:232  dwarn:0   dfail:0   fail:0   skip:14 
fi-skl-6700hq    total:246  pass:222  dwarn:1   dfail:0   fail:0   skip:23 
fi-skl-6700k     total:246  pass:221  dwarn:1   dfail:0   fail:0   skip:24 
fi-skl-6770hq    total:246  pass:232  dwarn:0   dfail:0   fail:0   skip:14 
fi-snb-2520m     total:246  pass:210  dwarn:0   dfail:0   fail:0   skip:36 
fi-snb-2600      total:246  pass:209  dwarn:0   dfail:0   fail:0   skip:37 

Results at /archive/results/CI_IGT_test/Patchwork_2758/

d5cbeba648bec880aa0e1f7a531e684499a079a7 drm-intel-nightly: 2016y-10m-19d-11h-13m-08s UTC integration manifest
d560dee drm/i915/gvt: Remove defunct vmap_batch()
9bf8686 drm/i915/gvt: Use common mapping routines for shadow_bb object
dc92fe6 drm/i915/gvt: Use common mapping routines for indirect_ctx object
02977e8 drm/i915/gvt: Stop waiting whilst holding struct_mutex
e22ba7c drm/i915/gvt: Stop checking for impossible interrupts from a kthread
8a57b82 drm/i915/gvt: Hold a reference on the request
599585e drm/i915/gvt: Remove dangerous unpin of backing storage of bound GPU object
d107e0e drm/i915/gvt: Use the returned VMA to provide the virtual address
f5c37f4 drm/i915/gvt: i915_gem_object_create() returns an error pointer
9be8094 drm/i915/gvt: Add runtime pm around fences
677f129 drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 07/12] drm/i915/gvt: Hold a reference on the request
  2016-10-19 10:11 ` [PATCH 07/12] drm/i915/gvt: Hold a reference on the request Chris Wilson
  2016-10-19 10:32   ` Zhenyu Wang
@ 2016-10-20  0:22   ` Zhenyu Wang
  2016-10-20  6:52     ` Chris Wilson
  1 sibling, 1 reply; 27+ messages in thread
From: Zhenyu Wang @ 2016-10-20  0:22 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx


[-- Attachment #1.1: Type: text/plain, Size: 3127 bytes --]

On 2016.10.19 11:11:42 +0100, Chris Wilson wrote:
> The workload took a pointer to the request, and even waited upon,
> without holding a reference on the request. Take that reference
> explicitly and fix up the error path following request allocation that
> missed flushing the request.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/gvt/scheduler.c | 24 +++++++++++++-----------
>  1 file changed, 13 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> index b15cdf5978a9..224f19ae61ab 100644
> --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> @@ -163,6 +163,7 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
>  	int ring_id = workload->ring_id;
>  	struct i915_gem_context *shadow_ctx = workload->vgpu->shadow_ctx;
>  	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
> +	struct drm_i915_gem_request *rq;
>  	int ret;
>  
>  	gvt_dbg_sched("ring id %d prepare to dispatch workload %p\n",
> @@ -171,17 +172,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
>  	shadow_ctx->desc_template = workload->ctx_desc.addressing_mode <<
>  				    GEN8_CTX_ADDRESSING_MODE_SHIFT;
>  
> -	workload->req = i915_gem_request_alloc(dev_priv->engine[ring_id],
> -					       shadow_ctx);
> -	if (IS_ERR_OR_NULL(workload->req)) {
> +	rq = i915_gem_request_alloc(dev_priv->engine[ring_id], shadow_ctx);
> +	if (IS_ERR(rq)) {
>  		gvt_err("fail to allocate gem request\n");
> -		workload->status = PTR_ERR(workload->req);
> -		workload->req = NULL;
> +		workload->status = PTR_ERR(rq);
>  		return workload->status;
>  	}
>  
> -	gvt_dbg_sched("ring id %d get i915 gem request %p\n",
> -			ring_id, workload->req);
> +	gvt_dbg_sched("ring id %d get i915 gem request %p\n", ring_id, rq);
> +
> +	workload->req = i915_gem_request_get(rq);
>  
>  	mutex_lock(&gvt->lock);
>  
> @@ -208,16 +208,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
>  	gvt_dbg_sched("ring id %d submit workload to i915 %p\n",
>  			ring_id, workload->req);
>  
> -	i915_add_request_no_flush(workload->req);
> -
> +	i915_add_request_no_flush(rq);
>  	workload->dispatched = true;
>  	return 0;
>  err:
>  	workload->status = ret;
> -	if (workload->req)
> -		workload->req = NULL;
> +	i915_gem_request_put(fetch_and_zero(&workload->req));
>

Looks like we don't need the put here, as in the error path from
dispatch_workload() we will go through the put path below in the
main thread too.

>  	mutex_unlock(&gvt->lock);
> +
> +	i915_add_request_no_flush(rq);
>  	return ret;
>  }
>  
> @@ -458,6 +458,8 @@ static int workload_thread(void *priv)
>  
>  		complete_current_workload(gvt, ring_id);
>  
> +		i915_gem_request_put(fetch_and_zero(&workload->req));
> +
>  		if (need_force_wake)
>  			intel_uncore_forcewake_put(gvt->dev_priv,
>  					FORCEWAKE_ALL);


-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

[-- Attachment #1.2: signature.asc --]
[-- Type: application/pgp-signature, Size: 163 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: gvt gem fixes
  2016-10-19 11:02   ` Chris Wilson
@ 2016-10-20  0:33     ` Zhenyu Wang
  2016-10-20  7:02       ` Daniel Vetter
  0 siblings, 1 reply; 27+ messages in thread
From: Zhenyu Wang @ 2016-10-20  0:33 UTC (permalink / raw)
  To: Chris Wilson, Zhenyu Wang, intel-gfx


[-- Attachment #1.1: Type: text/plain, Size: 869 bytes --]

On 2016.10.19 12:02:58 +0100, Chris Wilson wrote:
> On Wed, Oct 19, 2016 at 06:45:51PM +0800, Zhenyu Wang wrote:
> > On 2016.10.19 11:11:35 +0100, Chris Wilson wrote:
> > > I think this is the set required to bring gvt into line, at least to
> > > unblock myself.
> > 
> > Thanks a lot, Chris. I'd like to merge this in the next pull request,
> > or let me know if you want it to be picked up by drm-intel directly.
> > If 4/12 would be picked up alone, I'll skip that one in the gvt tree.
> 
> If you are confident in having the pull ready in the next day or so,
> I'll just preface my series with these and they will evaporate after the
> merge.
>

I'll try to send it today.

> I'll apply 4/12 right now to get that out of the way.

ok, fine.

thanks

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

[-- Attachment #1.2: signature.asc --]
[-- Type: application/pgp-signature, Size: 163 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 07/12] drm/i915/gvt: Hold a reference on the request
  2016-10-20  0:22   ` Zhenyu Wang
@ 2016-10-20  6:52     ` Chris Wilson
  2016-10-20  7:33       ` Zhenyu Wang
  0 siblings, 1 reply; 27+ messages in thread
From: Chris Wilson @ 2016-10-20  6:52 UTC (permalink / raw)
  To: Zhenyu Wang; +Cc: intel-gfx

On Thu, Oct 20, 2016 at 08:22:00AM +0800, Zhenyu Wang wrote:
> On 2016.10.19 11:11:42 +0100, Chris Wilson wrote:
> > The workload took a pointer to the request, and even waited upon,
> > without holding a reference on the request. Take that reference
> > explicitly and fix up the error path following request allocation that
> > missed flushing the request.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >  drivers/gpu/drm/i915/gvt/scheduler.c | 24 +++++++++++++-----------
> >  1 file changed, 13 insertions(+), 11 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> > index b15cdf5978a9..224f19ae61ab 100644
> > --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> > +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> > @@ -163,6 +163,7 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> >  	int ring_id = workload->ring_id;
> >  	struct i915_gem_context *shadow_ctx = workload->vgpu->shadow_ctx;
> >  	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
> > +	struct drm_i915_gem_request *rq;
> >  	int ret;
> >  
> >  	gvt_dbg_sched("ring id %d prepare to dispatch workload %p\n",
> > @@ -171,17 +172,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> >  	shadow_ctx->desc_template = workload->ctx_desc.addressing_mode <<
> >  				    GEN8_CTX_ADDRESSING_MODE_SHIFT;
> >  
> > -	workload->req = i915_gem_request_alloc(dev_priv->engine[ring_id],
> > -					       shadow_ctx);
> > -	if (IS_ERR_OR_NULL(workload->req)) {
> > +	rq = i915_gem_request_alloc(dev_priv->engine[ring_id], shadow_ctx);
> > +	if (IS_ERR(rq)) {
> >  		gvt_err("fail to allocate gem request\n");
> > -		workload->status = PTR_ERR(workload->req);
> > -		workload->req = NULL;
> > +		workload->status = PTR_ERR(rq);
> >  		return workload->status;
> >  	}
> >  
> > -	gvt_dbg_sched("ring id %d get i915 gem request %p\n",
> > -			ring_id, workload->req);
> > +	gvt_dbg_sched("ring id %d get i915 gem request %p\n", ring_id, rq);
> > +
> > +	workload->req = i915_gem_request_get(rq);
> >  
> >  	mutex_lock(&gvt->lock);
> >  
> > @@ -208,16 +208,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> >  	gvt_dbg_sched("ring id %d submit workload to i915 %p\n",
> >  			ring_id, workload->req);
> >  
> > -	i915_add_request_no_flush(workload->req);
> > -
> > +	i915_add_request_no_flush(rq);
> >  	workload->dispatched = true;
> >  	return 0;
> >  err:
> >  	workload->status = ret;
> > -	if (workload->req)
> > -		workload->req = NULL;
> > +	i915_gem_request_put(fetch_and_zero(&workload->req));
> >
> 
> > Looks like we don't need the put here, as in the error path from
> > dispatch_workload() we will go through the put path below in the
> > main thread too.

If we clear the request pointer, then we need the put. But yes, we don't
necessarily need to clear the pointer on error for the caller, as the
caller doesn't distinguish the error path and the no-op request can be
handled identically to a real request.
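
For illustration, roughly what that alternative error path could look
like (a sketch only, not the refreshed patch; it assumes the caller's
put in workload_thread() from this series stays as is):

err:
	workload->status = ret;
	mutex_unlock(&gvt->lock);

	/* keep workload->req: flush the no-op request here and let the
	 * caller's i915_gem_request_put(fetch_and_zero(&workload->req))
	 * in workload_thread() drop the reference, just as it does for
	 * a request that actually ran
	 */
	i915_add_request_no_flush(rq);
	return ret;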
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: gvt gem fixes
  2016-10-20  0:33     ` Zhenyu Wang
@ 2016-10-20  7:02       ` Daniel Vetter
  2016-10-20  7:15         ` Zhenyu Wang
  0 siblings, 1 reply; 27+ messages in thread
From: Daniel Vetter @ 2016-10-20  7:02 UTC (permalink / raw)
  To: Zhenyu Wang; +Cc: intel-gfx

On Thu, Oct 20, 2016 at 08:33:05AM +0800, Zhenyu Wang wrote:
> On 2016.10.19 12:02:58 +0100, Chris Wilson wrote:
> > On Wed, Oct 19, 2016 at 06:45:51PM +0800, Zhenyu Wang wrote:
> > > On 2016.10.19 11:11:35 +0100, Chris Wilson wrote:
> > > > I think this is the set required to bring gvt into line, at least to
> > > > unblock myself.
> > > 
> > > Thanks a lot, Chris. I'd like to merge this in the next pull request,
> > > or let me know if you want it to be picked up by drm-intel directly.
> > > If 4/12 would be picked up alone, I'll skip that one in the gvt tree.
> > 
> > If you are confident in having the pull ready in the next day or so,
> > I'll just preface my series with these and they will evaporate after the
> > merge.
> >
> 
> I'll try to send it today.
> 
> > I'll apply 4/12 right now to get that out of the way.
> 
> ok, fine.

Yeah, I think anything that touches i915 code should get merged through
drm-intel directly with the usual process. The only exception is when gvt
has a functional dependency and it's a small patch; then I think we can
sometimes merge i915 core patches through gvt, with an ack from Jani or
me (and still proper review and CI and everything ofc). But that should
be the rare exception.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: gvt gem fixes
  2016-10-20  7:02       ` Daniel Vetter
@ 2016-10-20  7:15         ` Zhenyu Wang
  2016-10-20  9:13           ` Daniel Vetter
  0 siblings, 1 reply; 27+ messages in thread
From: Zhenyu Wang @ 2016-10-20  7:15 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx


[-- Attachment #1.1: Type: text/plain, Size: 801 bytes --]

On 2016.10.20 09:02:45 +0200, Daniel Vetter wrote:
> Yeah, I think anything that touches i915 code should get merged through
> drm-intel directly with the usual process. The only exception is when gvt
> has a functional dependency and it's a small patch; then I think we can
> sometimes merge i915 core patches through gvt, with an ack from Jani or
> me (and still proper review and CI and everything ofc). But that should
> be the rare exception.

That's fair enough for me. One prepared change is to fix the gvt header
issue you've listed. As it touches intel_gvt.h in i915, I'll send that
separately first (https://github.com/01org/gvt-linux/commit/b6a1ca7571ae45186394e555dc420481c1a9dba5)

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

[-- Attachment #1.2: signature.asc --]
[-- Type: application/pgp-signature, Size: 163 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 07/12] drm/i915/gvt: Hold a reference on the request
  2016-10-20  6:52     ` Chris Wilson
@ 2016-10-20  7:33       ` Zhenyu Wang
  0 siblings, 0 replies; 27+ messages in thread
From: Zhenyu Wang @ 2016-10-20  7:33 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


[-- Attachment #1.1: Type: text/plain, Size: 3468 bytes --]

On 2016.10.20 07:52:18 +0100, Chris Wilson wrote:
> On Thu, Oct 20, 2016 at 08:22:00AM +0800, Zhenyu Wang wrote:
> > On 2016.10.19 11:11:42 +0100, Chris Wilson wrote:
> > > The workload took a pointer to the request, and even waited upon,
> > > without holding a reference on the request. Take that reference
> > > explicitly and fix up the error path following request allocation that
> > > missed flushing the request.
> > > 
> > > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > > ---
> > >  drivers/gpu/drm/i915/gvt/scheduler.c | 24 +++++++++++++-----------
> > >  1 file changed, 13 insertions(+), 11 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> > > index b15cdf5978a9..224f19ae61ab 100644
> > > --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> > > +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> > > @@ -163,6 +163,7 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> > >  	int ring_id = workload->ring_id;
> > >  	struct i915_gem_context *shadow_ctx = workload->vgpu->shadow_ctx;
> > >  	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
> > > +	struct drm_i915_gem_request *rq;
> > >  	int ret;
> > >  
> > >  	gvt_dbg_sched("ring id %d prepare to dispatch workload %p\n",
> > > @@ -171,17 +172,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> > >  	shadow_ctx->desc_template = workload->ctx_desc.addressing_mode <<
> > >  				    GEN8_CTX_ADDRESSING_MODE_SHIFT;
> > >  
> > > -	workload->req = i915_gem_request_alloc(dev_priv->engine[ring_id],
> > > -					       shadow_ctx);
> > > -	if (IS_ERR_OR_NULL(workload->req)) {
> > > +	rq = i915_gem_request_alloc(dev_priv->engine[ring_id], shadow_ctx);
> > > +	if (IS_ERR(rq)) {
> > >  		gvt_err("fail to allocate gem request\n");
> > > -		workload->status = PTR_ERR(workload->req);
> > > -		workload->req = NULL;
> > > +		workload->status = PTR_ERR(rq);
> > >  		return workload->status;
> > >  	}
> > >  
> > > -	gvt_dbg_sched("ring id %d get i915 gem request %p\n",
> > > -			ring_id, workload->req);
> > > +	gvt_dbg_sched("ring id %d get i915 gem request %p\n", ring_id, rq);
> > > +
> > > +	workload->req = i915_gem_request_get(rq);
> > >  
> > >  	mutex_lock(&gvt->lock);
> > >  
> > > @@ -208,16 +208,16 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> > >  	gvt_dbg_sched("ring id %d submit workload to i915 %p\n",
> > >  			ring_id, workload->req);
> > >  
> > > -	i915_add_request_no_flush(workload->req);
> > > -
> > > +	i915_add_request_no_flush(rq);
> > >  	workload->dispatched = true;
> > >  	return 0;
> > >  err:
> > >  	workload->status = ret;
> > > -	if (workload->req)
> > > -		workload->req = NULL;
> > > +	i915_gem_request_put(fetch_and_zero(&workload->req));
> > >
> > 
> > Looks like we don't need the put here, as in the error path from
> > dispatch_workload() we will go through the put path below in the
> > main thread too.
> 
> If we clear the request pointer, then we need the put. But yes, we don't
> necessarily need to clear the pointer on error for the caller, as the
> caller doesn't distinguish the error path and the no-op request can be
> handled identically to a real request.

Would you refresh this one? Then I'll send out the next pull request with it.

thanks

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

[-- Attachment #1.2: signature.asc --]
[-- Type: application/pgp-signature, Size: 163 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: gvt gem fixes
  2016-10-20  7:15         ` Zhenyu Wang
@ 2016-10-20  9:13           ` Daniel Vetter
  0 siblings, 0 replies; 27+ messages in thread
From: Daniel Vetter @ 2016-10-20  9:13 UTC (permalink / raw)
  To: Zhenyu Wang; +Cc: intel-gfx

On Thu, Oct 20, 2016 at 03:15:33PM +0800, Zhenyu Wang wrote:
> On 2016.10.20 09:02:45 +0200, Daniel Vetter wrote:
> > Yeah, I think anything that touches i915 code should get merged through
> > drm-intel directly with the usual process. Only exception is when gvt has
> > a functional depency and it's a small patch, then I think we can sometimes
> > merge i915 core patches through gvt, with an ack from Jani or me (and
> > still proper review and CI and everything ofc). But that should be the
> > rare exception.
> 
> That's fair enough for me. One prepared change is to fix the gvt header
> issue you've listed. As it touches intel_gvt.h in i915, I'll send that
> separately first (https://github.com/01org/gvt-linux/commit/b6a1ca7571ae45186394e555dc420481c1a9dba5)

Yes, anything touching code/files outside of the gvt/ sub-directory needs
to be submitted here to intel-gfx and go through our normal drm-intel
review process.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2016-10-20  9:13 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-10-19 10:11 gvt gem fixes Chris Wilson
2016-10-19 10:11 ` [PATCH 01/12] drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/ Chris Wilson
2016-10-19 10:11 ` [PATCH 02/12] drm/i915/gvt: Add runtime pm around fences Chris Wilson
2016-10-19 10:11 ` [PATCH 03/12] drm/i915/gvt: i915_gem_object_create() returns an error pointer Chris Wilson
2016-10-19 10:11 ` [PATCH 04/12] drm/i915: Catch premature unpinning of pages Chris Wilson
2016-10-19 10:26   ` Joonas Lahtinen
2016-10-19 10:11 ` [PATCH 05/12] drm/i915/gvt: Use the returned VMA to provide the virtual address Chris Wilson
2016-10-19 10:11 ` [PATCH 06/12] drm/i915/gvt: Remove dangerous unpin of backing storage of bound GPU object Chris Wilson
2016-10-19 10:11 ` [PATCH 07/12] drm/i915/gvt: Hold a reference on the request Chris Wilson
2016-10-19 10:32   ` Zhenyu Wang
2016-10-19 10:53     ` Chris Wilson
2016-10-20  0:22   ` Zhenyu Wang
2016-10-20  6:52     ` Chris Wilson
2016-10-20  7:33       ` Zhenyu Wang
2016-10-19 10:11 ` [PATCH 08/12] drm/i915/gvt: Stop checking for impossible interrupts from a kthread Chris Wilson
2016-10-19 10:11 ` [PATCH 09/12] drm/i915/gvt: Stop waiting whilst holding struct_mutex Chris Wilson
2016-10-19 10:11 ` [PATCH 10/12] drm/i915/gvt: Use common mapping routines for indirect_ctx object Chris Wilson
2016-10-19 10:26   ` Zhenyu Wang
2016-10-19 10:11 ` [PATCH 11/12] drm/i915/gvt: Use common mapping routines for shadow_bb object Chris Wilson
2016-10-19 10:11 ` [PATCH 12/12] drm/i915/gvt: Remove defunct vmap_batch() Chris Wilson
2016-10-19 10:45 ` gvt gem fixes Zhenyu Wang
2016-10-19 11:02   ` Chris Wilson
2016-10-20  0:33     ` Zhenyu Wang
2016-10-20  7:02       ` Daniel Vetter
2016-10-20  7:15         ` Zhenyu Wang
2016-10-20  9:13           ` Daniel Vetter
2016-10-19 13:54 ` ✗ Fi.CI.BAT: warning for series starting with [01/12] drm/i915/gvt: s/drm_gem_object_unreference/i915_gem_object_put/ Patchwork
