* [PATCH 0/2] Security mitigation for Intel Gen7/7.5 HWs
@ 2020-01-30 16:57 Akeem G Abodunrin

From: Akeem G Abodunrin @ 2020-01-30 16:57 UTC (permalink / raw)
To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi,
  d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt,
  daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson,
  prathap.kumar.valsan, mika.kuoppala, francesco.balestrieri

Intel ID: PSIRT-TA-201910-001
CVE ID: CVE-2019-14615

Summary of Vulnerability
------------------------
Insufficient control flow in certain data structures for some Intel(R)
Processors with Intel Processor Graphics may allow an unauthenticated user
to potentially enable information disclosure via local access.

Products Affected
-----------------
Intel CPUs with Gen7, Gen7.5, and Gen9 Graphics.

Mitigation Summary
------------------
This patch series provides mitigation for Gen7 and Gen7.5 hardware only.
Patches for Gen9 devices have already been provided, merged into Linux
mainline, and backported to stable kernels. Note that Gen8 is not impacted
due to a previously implemented workaround.

The mitigation involves submitting a custom EU kernel prior to every
context restore, in order to forcibly clear down residual EU and URB
resources. This security mitigation change does not trigger any known
performance regression; performance is on par with current
mainline/drm-tip.

Note on Address Space Isolation (Full PPGTT)
--------------------------------------------
Isolation of EU kernel assets should be considered complementary to the
existing support for address space isolation (aka Full PPGTT): without
address space isolation there is minimal value in preventing leakage
between EU contexts. Full PPGTT has long been supported on Gen graphics
devices since Gen8, and protection against EU residual leakage is a
welcome addition for these newer platforms.

By contrast, Gen7 and Gen7.5 devices introduced Full PPGTT support only as
a hardware development feature for anticipated Gen8 productization.
Support was never intended for, or provided to, the Linux kernels for
these platforms. Recent (still ongoing) work in the mainline kernel is
retroactively providing this support, but due to the level of complexity
it is not practical to attempt to backport it to earlier stable kernels.
Since EU residuals protection has questionable benefit without Full PPGTT,
*there are no plans to provide stable kernel backports for this patch
series.*

Mika Kuoppala (1):
  drm/i915: Add mechanism to submit a context WA on ring submission

Prathap Kumar Valsan (1):
  drm/i915/gen7: Clear all EU/L3 residual contexts

 drivers/gpu/drm/i915/Makefile                |   1 +
 drivers/gpu/drm/i915/gt/gen7_renderclear.c   | 535 ++++++++++++++++++
 drivers/gpu/drm/i915/gt/gen7_renderclear.h   |  15 +
 drivers/gpu/drm/i915/gt/intel_gpu_commands.h |  17 +-
 .../gpu/drm/i915/gt/intel_ring_submission.c  | 133 ++++-
 drivers/gpu/drm/i915/i915_utils.h            |   5 +
 6 files changed, 700 insertions(+), 6 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.c
 create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.h

-- 
2.20.1

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
* [PATCH 1/2] drm/i915: Add mechanism to submit a context WA on ring submission
@ 2020-01-30 16:57 Akeem G Abodunrin

From: Akeem G Abodunrin @ 2020-01-30 16:57 UTC (permalink / raw)
To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi,
  d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt,
  daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson,
  prathap.kumar.valsan, mika.kuoppala, francesco.balestrieri

From: Mika Kuoppala <mika.kuoppala@linux.intel.com>

This patch adds a framework to submit an arbitrary batchbuffer on each
context switch, to clear residual state for the render engine on Gen7/7.5
devices.

The idea of always emitting the context and vm setup around each request
is primarily to make reset recovery easy, without requiring the ringbuffer
to be rewritten. As each request would set up its own context, leaving it
to the HW to notice and elide no-op context switches, we could restart the
ring at any point and reorder the requests freely.

However, to avoid emitting clear_residuals() between consecutive requests
in the ringbuffer of the same context, we do want to track the current
context in the ring. In doing so, we need to be careful to only record a
context switch when we are sure the next request will be emitted.

This security mitigation change does not trigger any performance
regression. Performance is on par with current mainline/drm-tip.

Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com>
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Balestrieri Francesco <francesco.balestrieri@intel.com>
Cc: Bloomfield Jon <jon.bloomfield@intel.com>
Cc: Dutt Sudeep <sudeep.dutt@intel.com>
---
 .../gpu/drm/i915/gt/intel_ring_submission.c | 132 +++++++++++++++++-
 1 file changed, 129 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
index 9aa86ba15ce7..9f3bfe499446 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
@@ -1385,7 +1385,9 @@ static int load_pd_dir(struct i915_request *rq,
 	return rq->engine->emit_flush(rq, EMIT_FLUSH);
 }
 
-static inline int mi_set_context(struct i915_request *rq, u32 flags)
+static inline int mi_set_context(struct i915_request *rq,
+				 struct intel_context *ce,
+				 u32 flags)
 {
 	struct drm_i915_private *i915 = rq->i915;
 	struct intel_engine_cs *engine = rq->engine;
@@ -1460,7 +1462,7 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
 		*cs++ = MI_NOOP;
 
 	*cs++ = MI_SET_CONTEXT;
-	*cs++ = i915_ggtt_offset(rq->context->state) | flags;
+	*cs++ = i915_ggtt_offset(ce->state) | flags;
 	/*
 	 * w/a: MI_SET_CONTEXT must always be followed by MI_NOOP
 	 * WaMiSetContext_Hang:snb,ivb,vlv
@@ -1575,13 +1577,56 @@ static int switch_mm(struct i915_request *rq, struct i915_address_space *vm)
 	return rq->engine->emit_flush(rq, EMIT_INVALIDATE);
 }
 
+static int clear_residuals(struct i915_request *rq)
+{
+	struct intel_engine_cs *engine = rq->engine;
+	int ret;
+
+	GEM_BUG_ON(!engine->kernel_context->state);
+
+	ret = switch_mm(rq, vm_alias(engine->kernel_context));
+	if (ret)
+		return ret;
+
+	ret = mi_set_context(rq,
+			     engine->kernel_context,
+			     MI_MM_SPACE_GTT | MI_RESTORE_INHIBIT);
+	if (ret)
+		return ret;
+
+	ret = engine->emit_bb_start(rq,
+				    engine->wa_ctx.vma->node.start, 0,
+				    0);
+	if (ret)
+		return ret;
+
+	ret = engine->emit_flush(rq, EMIT_FLUSH);
+	if (ret)
+		return ret;
+
+	/* Always invalidate before the next switch_mm() */
+	return engine->emit_flush(rq, EMIT_INVALIDATE);
+}
+
 static int switch_context(struct i915_request *rq)
 {
+	struct intel_engine_cs *engine = rq->engine;
 	struct intel_context *ce = rq->context;
+	void **residuals = NULL;
 	int ret;
 
 	GEM_BUG_ON(HAS_EXECLISTS(rq->i915));
 
+	if (engine->wa_ctx.vma && ce != engine->kernel_context) {
+		if (engine->wa_ctx.vma->private != ce) {
+			ret = clear_residuals(rq);
+			if (ret)
+				return ret;
+
+			residuals = &engine->wa_ctx.vma->private;
+		}
+	}
+
 	ret = switch_mm(rq, vm_alias(ce));
 	if (ret)
 		return ret;
@@ -1601,7 +1646,7 @@ static int switch_context(struct i915_request *rq)
 		else
 			flags |= MI_RESTORE_INHIBIT;
 
-		ret = mi_set_context(rq, flags);
+		ret = mi_set_context(rq, ce, flags);
 		if (ret)
 			return ret;
 	}
@@ -1610,6 +1655,20 @@ static int switch_context(struct i915_request *rq)
 	if (ret)
 		return ret;
 
+	/*
+	 * Now past the point of no return, this request _will_ be emitted.
+	 *
+	 * Or at least this preamble will be emitted, the request may be
+	 * interrupted prior to submitting the user payload. If so, we
+	 * still submit the "empty" request in order to preserve global
+	 * state tracking such as this, our tracking of the current
+	 * dirty context.
+	 */
+	if (residuals) {
+		intel_context_put(*residuals);
+		*residuals = intel_context_get(ce);
+	}
+
 	return 0;
 }
 
@@ -1794,6 +1853,11 @@ static void ring_release(struct intel_engine_cs *engine)
 
 	intel_engine_cleanup_common(engine);
 
+	if (engine->wa_ctx.vma) {
+		intel_context_put(engine->wa_ctx.vma->private);
+		i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0);
+	}
+
 	intel_ring_unpin(engine->legacy.ring);
 	intel_ring_put(engine->legacy.ring);
 
@@ -1941,6 +2005,60 @@ static void setup_vecs(struct intel_engine_cs *engine)
 	engine->emit_fini_breadcrumb = gen7_xcs_emit_breadcrumb;
 }
 
+static int gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine,
+				    struct i915_vma * const vma)
+{
+	return 0;
+}
+
+static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine)
+{
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int size;
+	int err;
+
+	size = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */);
+	if (size <= 0)
+		return size;
+
+	size = ALIGN(size, PAGE_SIZE);
+	obj = i915_gem_object_create_internal(engine->i915, size);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	vma = i915_vma_instance(obj, engine->gt->vm, NULL);
+	if (IS_ERR(vma)) {
+		err = PTR_ERR(vma);
+		goto err_obj;
+	}
+
+	vma->private = intel_context_create(engine); /* dummy residuals */
+	if (IS_ERR(vma->private)) {
+		err = PTR_ERR(vma->private);
+		goto err_obj;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH);
+	if (err)
+		goto err_private;
+
+	err = gen7_ctx_switch_bb_setup(engine, vma);
+	if (err)
+		goto err_unpin;
+
+	engine->wa_ctx.vma = vma;
+	return 0;
+
+err_unpin:
+	i915_vma_unpin(vma);
+err_private:
+	intel_context_put(vma->private);
+err_obj:
+	i915_gem_object_put(obj);
+	return err;
+}
+
 int intel_ring_submission_setup(struct intel_engine_cs *engine)
 {
 	struct intel_timeline *timeline;
@@ -1994,11 +2112,19 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine)
 
 	GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma);
 
+	if (IS_GEN(engine->i915, 7) && engine->class == RENDER_CLASS) {
+		err = gen7_ctx_switch_bb_init(engine);
+		if (err)
+			goto err_ring_unpin;
+	}
+
 	/* Finally, take ownership and responsibility for cleanup! */
 	engine->release = ring_release;
 
 	return 0;
 
+err_ring_unpin:
+	intel_ring_unpin(ring);
 err_ring:
 	intel_ring_put(ring);
 err_timeline_unpin:
-- 
2.20.1
* [PATCH 2/2] drm/i915/gen7: Clear all EU/L3 residual contexts
@ 2020-01-30 16:57 Akeem G Abodunrin

From: Akeem G Abodunrin @ 2020-01-30 16:57 UTC (permalink / raw)
To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi,
  d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt,
  daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson,
  prathap.kumar.valsan, mika.kuoppala, francesco.balestrieri

From: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com>

On Gen7 and Gen7.5 devices, there could be leftover data residuals in
EU/L3 from the retiring context. This patch introduces a workaround to
clear those residuals by submitting a batch buffer with a dedicated HW
context to the GPU, with ring allocation, on each context switch.

This security mitigation change does not trigger any performance
regression. Performance is on par with current mainline/drm-tip.
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com>
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Cc: Chris Wilson <chris.p.wilson@intel.com>
Cc: Balestrieri Francesco <francesco.balestrieri@intel.com>
Cc: Bloomfield Jon <jon.bloomfield@intel.com>
Cc: Dutt Sudeep <sudeep.dutt@intel.com>
---
 drivers/gpu/drm/i915/Makefile                |   1 +
 drivers/gpu/drm/i915/gt/gen7_renderclear.c   | 535 ++++++++++++++++++
 drivers/gpu/drm/i915/gt/gen7_renderclear.h   |  15 +
 drivers/gpu/drm/i915/gt/intel_gpu_commands.h |  17 +-
 .../gpu/drm/i915/gt/intel_ring_submission.c  |   3 +-
 drivers/gpu/drm/i915/i915_utils.h            |   5 +
 6 files changed, 572 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.c
 create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 3c88d7d8c764..f96bae664a03 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -78,6 +78,7 @@ gt-y += \
 	gt/debugfs_gt.o \
 	gt/debugfs_gt_pm.o \
 	gt/gen6_ppgtt.o \
+	gt/gen7_renderclear.o \
 	gt/gen8_ppgtt.o \
 	gt/intel_breadcrumbs.o \
 	gt/intel_context.o \
diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
new file mode 100644
index 000000000000..a6f5f1602e33
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
@@ -0,0 +1,535 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "gen7_renderclear.h"
+#include "i915_drv.h"
+#include "i915_utils.h"
+#include "intel_gpu_commands.h"
+
+#define MAX_URB_ENTRIES 64
+#define STATE_SIZE (4 * 1024)
+#define GT3_INLINE_DATA_DELAYS 0x1E00
+#define batch_advance(Y, CS) GEM_BUG_ON((Y)->end != (CS))
+
+/*
+ * Media CB Kernel for gen7 devices
+ * TODO: Add comments to kernel, indicating what each array of hex does or
+ * include header file, which has assembly source and support in
igt to be + * able to generate kernel in this same format + */ +static const u32 cb7_kernel[][4] = { + { 0x00000001, 0x26020128, 0x00000024, 0x00000000 }, + { 0x00000040, 0x20280c21, 0x00000028, 0x00000001 }, + { 0x01000010, 0x20000c20, 0x0000002c, 0x00000000 }, + { 0x00010220, 0x34001c00, 0x00001400, 0x0000002c }, + { 0x00600001, 0x20600061, 0x00000000, 0x00000000 }, + { 0x00000008, 0x20601c85, 0x00000e00, 0x0000000c }, + { 0x00000005, 0x20601ca5, 0x00000060, 0x00000001 }, + { 0x00000008, 0x20641c85, 0x00000e00, 0x0000000d }, + { 0x00000005, 0x20641ca5, 0x00000064, 0x00000003 }, + { 0x00000041, 0x207424a5, 0x00000064, 0x00000034 }, + { 0x00000040, 0x206014a5, 0x00000060, 0x00000074 }, + { 0x00000008, 0x20681c85, 0x00000e00, 0x00000008 }, + { 0x00000005, 0x20681ca5, 0x00000068, 0x0000000f }, + { 0x00000041, 0x20701ca5, 0x00000060, 0x00000010 }, + { 0x00000040, 0x206814a5, 0x00000068, 0x00000070 }, + { 0x00600001, 0x20a00061, 0x00000000, 0x00000000 }, + { 0x00000005, 0x206c1c85, 0x00000e00, 0x00000007 }, + { 0x00000041, 0x206c1ca5, 0x0000006c, 0x00000004 }, + { 0x00600001, 0x20800021, 0x008d0000, 0x00000000 }, + { 0x00000001, 0x20800021, 0x0000006c, 0x00000000 }, + { 0x00000001, 0x20840021, 0x00000068, 0x00000000 }, + { 0x00000001, 0x20880061, 0x00000000, 0x00000003 }, + { 0x00000005, 0x208c0d21, 0x00000086, 0xffffffff }, + { 0x05600032, 0x20a01fa1, 0x008d0080, 0x02190001 }, + { 0x00000040, 0x20a01ca5, 0x000000a0, 0x00000001 }, + { 0x05600032, 0x20a01fa1, 0x008d0080, 0x040a8001 }, + { 0x02000040, 0x20281c21, 0x00000028, 0xffffffff }, + { 0x00010220, 0x34001c00, 0x00001400, 0xfffffffc }, + { 0x00000001, 0x26020128, 0x00000024, 0x00000000 }, + { 0x00000001, 0x220000e4, 0x00000000, 0x00000000 }, + { 0x00000001, 0x220801ec, 0x00000000, 0x007f007f }, + { 0x00600001, 0x20400021, 0x008d0000, 0x00000000 }, + { 0x00600001, 0x2fe00021, 0x008d0000, 0x00000000 }, + { 0x00200001, 0x20400121, 0x00450020, 0x00000000 }, + { 0x00000001, 0x20480061, 0x00000000, 0x000f000f }, + { 
0x00000005, 0x204c0d21, 0x00000046, 0xffffffef }, + { 0x00800001, 0x20600061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20800061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20a00061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20c00061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20e00061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x21000061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x21200061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x21400061, 0x00000000, 0x00000000 }, + { 0x05600032, 0x20001fa0, 0x008d0040, 0x120a8000 }, + { 0x00000040, 0x20402d21, 0x00000020, 0x00100010 }, + { 0x05600032, 0x20001fa0, 0x008d0040, 0x120a8000 }, + { 0x02000040, 0x22083d8c, 0x00000208, 0xffffffff }, + { 0x00800001, 0xa0000109, 0x00000602, 0x00000000 }, + { 0x00000040, 0x22001c84, 0x00000200, 0x00000020 }, + { 0x00010220, 0x34001c00, 0x00001400, 0xfffffff8 }, + { 0x07600032, 0x20001fa0, 0x008d0fe0, 0x82000010 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, +}; + +/* + * Media CB Kernel for gen7.5 devices + * TODO: Add comments to kernel, indicating what each array of hex does or + * include header file, which has assembly source and support in igt to be + * able to generate kernel in this same format + */ +static const u32 cb75_kernel[][4] = { + { 0x00000001, 0x26020128, 0x00000024, 0x00000000 }, + { 0x00000040, 0x20280c21, 0x00000028, 0x00000001 }, + { 0x01000010, 0x20000c20, 0x0000002c, 0x00000000 }, + { 0x00010220, 0x34001c00, 0x00001400, 0x00000160 }, + { 0x00600001, 0x20600061, 0x00000000, 0x00000000 }, + { 0x00000008, 0x20601c85, 0x00000e00, 0x0000000c }, + { 0x00000005, 0x20601ca5, 
* [Intel-gfx] [PATCH 2/2] drm/i915/gen7: Clear all EU/L3 residual contexts @ 2020-01-30 16:57 ` Akeem G Abodunrin 0 siblings, 0 replies; 20+ messages in thread From: Akeem G Abodunrin @ 2020-01-30 16:57 UTC (permalink / raw) To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi, d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt, daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson, prathap.kumar.valsan, mika.kuoppala, francesco.balestrieri From: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com> On gen7 and gen7.5 devices, data residuals from the retiring context may be left behind in the EU/L3. This patch introduces a workaround to clear those residual contexts by submitting a batch buffer with a dedicated HW context to the GPU, allocating ring space for it on every context switch. This security mitigation change does not trigger any performance regression. Performance is on par with current mainline/drm-tip. Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com> Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Cc: Chris Wilson <chris.p.wilson@intel.com> Cc: Balestrieri Francesco <francesco.balestrieri@intel.com> Cc: Bloomfield Jon <jon.bloomfield@intel.com> Cc: Dutt Sudeep <sudeep.dutt@intel.com> --- drivers/gpu/drm/i915/Makefile | 1 + drivers/gpu/drm/i915/gt/gen7_renderclear.c | 535 ++++++++++++++++++ drivers/gpu/drm/i915/gt/gen7_renderclear.h | 15 + drivers/gpu/drm/i915/gt/intel_gpu_commands.h | 17 +- .../gpu/drm/i915/gt/intel_ring_submission.c | 3 +- drivers/gpu/drm/i915/i915_utils.h | 5 + 6 files changed, 572 insertions(+), 4 deletions(-) create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.c create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.h diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index 3c88d7d8c764..f96bae664a03 100644 --- a/drivers/gpu/drm/i915/Makefile +++
b/drivers/gpu/drm/i915/Makefile @@ -78,6 +78,7 @@ gt-y += \ gt/debugfs_gt.o \ gt/debugfs_gt_pm.o \ gt/gen6_ppgtt.o \ + gt/gen7_renderclear.o \ gt/gen8_ppgtt.o \ gt/intel_breadcrumbs.o \ gt/intel_context.o \ diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c new file mode 100644 index 000000000000..a6f5f1602e33 --- /dev/null +++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c @@ -0,0 +1,535 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2019 Intel Corporation + */ + +#include "gen7_renderclear.h" +#include "i915_drv.h" +#include "i915_utils.h" +#include "intel_gpu_commands.h" + +#define MAX_URB_ENTRIES 64 +#define STATE_SIZE (4 * 1024) +#define GT3_INLINE_DATA_DELAYS 0x1E00 +#define batch_advance(Y, CS) GEM_BUG_ON((Y)->end != (CS)) + +/* + * Media CB Kernel for gen7 devices + * TODO: Add comments to kernel, indicating what each array of hex does or + * include header file, which has assembly source and support in igt to be + * able to generate kernel in this same format + */ +static const u32 cb7_kernel[][4] = { + { 0x00000001, 0x26020128, 0x00000024, 0x00000000 }, + { 0x00000040, 0x20280c21, 0x00000028, 0x00000001 }, + { 0x01000010, 0x20000c20, 0x0000002c, 0x00000000 }, + { 0x00010220, 0x34001c00, 0x00001400, 0x0000002c }, + { 0x00600001, 0x20600061, 0x00000000, 0x00000000 }, + { 0x00000008, 0x20601c85, 0x00000e00, 0x0000000c }, + { 0x00000005, 0x20601ca5, 0x00000060, 0x00000001 }, + { 0x00000008, 0x20641c85, 0x00000e00, 0x0000000d }, + { 0x00000005, 0x20641ca5, 0x00000064, 0x00000003 }, + { 0x00000041, 0x207424a5, 0x00000064, 0x00000034 }, + { 0x00000040, 0x206014a5, 0x00000060, 0x00000074 }, + { 0x00000008, 0x20681c85, 0x00000e00, 0x00000008 }, + { 0x00000005, 0x20681ca5, 0x00000068, 0x0000000f }, + { 0x00000041, 0x20701ca5, 0x00000060, 0x00000010 }, + { 0x00000040, 0x206814a5, 0x00000068, 0x00000070 }, + { 0x00600001, 0x20a00061, 0x00000000, 0x00000000 }, + { 0x00000005, 0x206c1c85, 0x00000e00, 0x00000007 
}, + { 0x00000041, 0x206c1ca5, 0x0000006c, 0x00000004 }, + { 0x00600001, 0x20800021, 0x008d0000, 0x00000000 }, + { 0x00000001, 0x20800021, 0x0000006c, 0x00000000 }, + { 0x00000001, 0x20840021, 0x00000068, 0x00000000 }, + { 0x00000001, 0x20880061, 0x00000000, 0x00000003 }, + { 0x00000005, 0x208c0d21, 0x00000086, 0xffffffff }, + { 0x05600032, 0x20a01fa1, 0x008d0080, 0x02190001 }, + { 0x00000040, 0x20a01ca5, 0x000000a0, 0x00000001 }, + { 0x05600032, 0x20a01fa1, 0x008d0080, 0x040a8001 }, + { 0x02000040, 0x20281c21, 0x00000028, 0xffffffff }, + { 0x00010220, 0x34001c00, 0x00001400, 0xfffffffc }, + { 0x00000001, 0x26020128, 0x00000024, 0x00000000 }, + { 0x00000001, 0x220000e4, 0x00000000, 0x00000000 }, + { 0x00000001, 0x220801ec, 0x00000000, 0x007f007f }, + { 0x00600001, 0x20400021, 0x008d0000, 0x00000000 }, + { 0x00600001, 0x2fe00021, 0x008d0000, 0x00000000 }, + { 0x00200001, 0x20400121, 0x00450020, 0x00000000 }, + { 0x00000001, 0x20480061, 0x00000000, 0x000f000f }, + { 0x00000005, 0x204c0d21, 0x00000046, 0xffffffef }, + { 0x00800001, 0x20600061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20800061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20a00061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20c00061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20e00061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x21000061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x21200061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x21400061, 0x00000000, 0x00000000 }, + { 0x05600032, 0x20001fa0, 0x008d0040, 0x120a8000 }, + { 0x00000040, 0x20402d21, 0x00000020, 0x00100010 }, + { 0x05600032, 0x20001fa0, 0x008d0040, 0x120a8000 }, + { 0x02000040, 0x22083d8c, 0x00000208, 0xffffffff }, + { 0x00800001, 0xa0000109, 0x00000602, 0x00000000 }, + { 0x00000040, 0x22001c84, 0x00000200, 0x00000020 }, + { 0x00010220, 0x34001c00, 0x00001400, 0xfffffff8 }, + { 0x07600032, 0x20001fa0, 0x008d0fe0, 0x82000010 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 
}, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, +}; + +/* + * Media CB Kernel for gen7.5 devices + * TODO: Add comments to kernel, indicating what each array of hex does or + * include header file, which has assembly source and support in igt to be + * able to generate kernel in this same format + */ +static const u32 cb75_kernel[][4] = { + { 0x00000001, 0x26020128, 0x00000024, 0x00000000 }, + { 0x00000040, 0x20280c21, 0x00000028, 0x00000001 }, + { 0x01000010, 0x20000c20, 0x0000002c, 0x00000000 }, + { 0x00010220, 0x34001c00, 0x00001400, 0x00000160 }, + { 0x00600001, 0x20600061, 0x00000000, 0x00000000 }, + { 0x00000008, 0x20601c85, 0x00000e00, 0x0000000c }, + { 0x00000005, 0x20601ca5, 0x00000060, 0x00000001 }, + { 0x00000008, 0x20641c85, 0x00000e00, 0x0000000d }, + { 0x00000005, 0x20641ca5, 0x00000064, 0x00000003 }, + { 0x00000041, 0x207424a5, 0x00000064, 0x00000034 }, + { 0x00000040, 0x206014a5, 0x00000060, 0x00000074 }, + { 0x00000008, 0x20681c85, 0x00000e00, 0x00000008 }, + { 0x00000005, 0x20681ca5, 0x00000068, 0x0000000f }, + { 0x00000041, 0x20701ca5, 0x00000060, 0x00000010 }, + { 0x00000040, 0x206814a5, 0x00000068, 0x00000070 }, + { 0x00600001, 0x20a00061, 0x00000000, 0x00000000 }, + { 0x00000005, 0x206c1c85, 0x00000e00, 0x00000007 }, + { 0x00000041, 0x206c1ca5, 0x0000006c, 0x00000004 }, + { 0x00600001, 0x20800021, 0x008d0000, 0x00000000 }, + { 0x00000001, 0x20800021, 0x0000006c, 0x00000000 }, + { 0x00000001, 0x20840021, 0x00000068, 0x00000000 }, + { 0x00000001, 0x20880061, 0x00000000, 0x00000003 }, + { 0x00000005, 0x208c0d21, 0x00000086, 0xffffffff }, + { 0x05600032, 0x20a01fa1, 0x008d0080, 0x02190001 }, + { 0x00000040, 0x20a01ca5, 0x000000a0, 0x00000001 }, + { 0x05600032, 0x20a01fa1, 
0x008d0080, 0x040a8001 }, + { 0x02000040, 0x20281c21, 0x00000028, 0xffffffff }, + { 0x00010220, 0x34001c00, 0x00001400, 0xffffffe0 }, + { 0x00000001, 0x26020128, 0x00000024, 0x00000000 }, + { 0x00000001, 0x220000e4, 0x00000000, 0x00000000 }, + { 0x00000001, 0x220801ec, 0x00000000, 0x007f007f }, + { 0x00600001, 0x20400021, 0x008d0000, 0x00000000 }, + { 0x00600001, 0x2fe00021, 0x008d0000, 0x00000000 }, + { 0x00200001, 0x20400121, 0x00450020, 0x00000000 }, + { 0x00000001, 0x20480061, 0x00000000, 0x000f000f }, + { 0x00000005, 0x204c0d21, 0x00000046, 0xffffffef }, + { 0x00800001, 0x20600061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20800061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20a00061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20c00061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x20e00061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x21000061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x21200061, 0x00000000, 0x00000000 }, + { 0x00800001, 0x21400061, 0x00000000, 0x00000000 }, + { 0x05600032, 0x20001fa0, 0x008d0040, 0x120a8000 }, + { 0x00000040, 0x20402d21, 0x00000020, 0x00100010 }, + { 0x05600032, 0x20001fa0, 0x008d0040, 0x120a8000 }, + { 0x02000040, 0x22083d8c, 0x00000208, 0xffffffff }, + { 0x00800001, 0xa0000109, 0x00000602, 0x00000000 }, + { 0x00000040, 0x22001c84, 0x00000200, 0x00000020 }, + { 0x00010220, 0x34001c00, 0x00001400, 0xffffffc0 }, + { 0x07600032, 0x20001fa0, 0x008d0fe0, 0x82000010 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, + { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, +}; + +struct cb_kernel { + const void *data; + u32 size; +}; + +#define CB_KERNEL(name) { .data = (name), .size = sizeof(name) } + 
+static const struct cb_kernel cb_kernel_gen7 = CB_KERNEL(cb7_kernel); +static const struct cb_kernel cb_kernel_hsw = CB_KERNEL(cb75_kernel); + +struct batch_chunk { + struct i915_vma *vma; + u32 offset; + u32 *start; + u32 *end; + u32 max_items; +}; + +struct batch_vals { + u32 max_primitives; + u32 max_urb_entries; + u32 cmd_size; + u32 state_size; + u32 state_start; + u32 batch_size; + u32 surface_height; + u32 surface_width; + u32 scratch_size; + u32 max_size; +}; + +static void +batch_get_defaults(struct drm_i915_private *i915, struct batch_vals *bv) +{ + if (IS_HASWELL(i915)) { + bv->max_primitives = 280; + bv->max_urb_entries = MAX_URB_ENTRIES; + bv->surface_height = 16 * 16; + bv->surface_width = 32 * 2 * 16; + } else { + bv->max_primitives = 128; + bv->max_urb_entries = MAX_URB_ENTRIES / 2; + bv->surface_height = 16 * 8; + bv->surface_width = 32 * 16; + } + bv->cmd_size = bv->max_primitives * 4096; + bv->state_size = STATE_SIZE; + bv->state_start = bv->cmd_size; + bv->batch_size = bv->cmd_size + bv->state_size; + bv->scratch_size = bv->surface_height * bv->surface_width; + bv->max_size = bv->batch_size + bv->scratch_size; +} + +static void batch_init(struct batch_chunk *bc, + struct i915_vma *vma, + u32 *start, u32 offset, u32 max_bytes) +{ + bc->vma = vma; + bc->offset = offset; + bc->start = start + bc->offset / sizeof(*bc->start); + bc->end = bc->start; + bc->max_items = max_bytes / sizeof(*bc->start); +} + +static u32 batch_offset(const struct batch_chunk *bc, u32 *cs) +{ + return (cs - bc->start) * sizeof(*bc->start) + bc->offset; +} + +static u32 batch_addr(const struct batch_chunk *bc) +{ + return bc->vma->node.start; +} + +static void batch_add(struct batch_chunk *bc, const u32 d) +{ + GEM_DEBUG_WARN_ON((bc->end - bc->start) >= bc->max_items); + *bc->end++ = d; +} + +static u32 *batch_alloc_items(struct batch_chunk *bc, u32 align, u32 items) +{ + u32 *map; + + if (align) { + u32 *end = ptr_align(bc->end, align); + + memset32(bc->end, 0, end - 
bc->end); + bc->end = end; + } + + map = bc->end; + bc->end += items; + + return map; +} + +static u32 *batch_alloc_bytes(struct batch_chunk *bc, u32 align, u32 bytes) +{ + GEM_BUG_ON(!IS_ALIGNED(bytes, sizeof(*bc->start))); + return batch_alloc_items(bc, align, bytes / sizeof(*bc->start)); +} + +static u32 +gen7_fill_surface_state(struct batch_chunk *state, + const u32 dst_offset, + const struct batch_vals *bv) +{ + u32 surface_h = bv->surface_height; + u32 surface_w = bv->surface_width; + u32 *cs = batch_alloc_items(state, 32, 8); + u32 offset = batch_offset(state, cs); + +#define SURFACE_2D 1 +#define SURFACEFORMAT_B8G8R8A8_UNORM 0x0C0 +#define RENDER_CACHE_READ_WRITE 1 + + *cs++ = SURFACE_2D << 29 | + (SURFACEFORMAT_B8G8R8A8_UNORM << 18) | + (RENDER_CACHE_READ_WRITE << 8); + + *cs++ = batch_addr(state) + dst_offset; + + *cs++ = ((surface_h / 4 - 1) << 16) | (surface_w / 4 - 1); + *cs++ = surface_w; + *cs++ = 0; + *cs++ = 0; + *cs++ = 0; +#define SHADER_CHANNELS(r, g, b, a) \ + (((r) << 25) | ((g) << 22) | ((b) << 19) | ((a) << 16)) + *cs++ = SHADER_CHANNELS(4, 5, 6, 7); + batch_advance(state, cs); + + return offset; +} + +static u32 +gen7_fill_binding_table(struct batch_chunk *state, + const struct batch_vals *bv) +{ + u32 *cs = batch_alloc_items(state, 32, 8); + u32 offset = batch_offset(state, cs); + u32 surface_start; + + surface_start = gen7_fill_surface_state(state, bv->batch_size, bv); + *cs++ = surface_start - state->offset; + *cs++ = 0; + *cs++ = 0; + *cs++ = 0; + *cs++ = 0; + *cs++ = 0; + *cs++ = 0; + *cs++ = 0; + batch_advance(state, cs); + + return offset; +} + +static u32 +gen7_fill_kernel_data(struct batch_chunk *state, + const u32 *data, + const u32 size) +{ + return batch_offset(state, + memcpy(batch_alloc_bytes(state, 64, size), + data, size)); +} + +static u32 +gen7_fill_interface_descriptor(struct batch_chunk *state, + const struct batch_vals *bv, + const struct cb_kernel *kernel, + unsigned int count) +{ + u32 *cs = batch_alloc_items(state, 
32, 8 * count); + u32 offset = batch_offset(state, cs); + + *cs++ = gen7_fill_kernel_data(state, kernel->data, kernel->size); + *cs++ = (1 << 7) | (1 << 13); + *cs++ = 0; + *cs++ = (gen7_fill_binding_table(state, bv) - state->offset) | 1; + *cs++ = 0; + *cs++ = 0; + *cs++ = 0; + *cs++ = 0; + batch_advance(state, cs); + + /* 1 - 63dummy idds */ + memset32(cs, 0x00, (count - 1) * 8); + + return offset; +} + +static void +gen7_emit_state_base_address(struct batch_chunk *batch, + u32 surface_state_base) +{ + u32 *cs = batch_alloc_items(batch, 0, 12); + + *cs++ = STATE_BASE_ADDRESS | (12 - 2); + /* general */ + *cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY; + /* surface */ + *cs++ = batch_addr(batch) | surface_state_base | BASE_ADDRESS_MODIFY; + /* dynamic */ + *cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY; + /* indirect */ + *cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY; + /* instruction */ + *cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY; + + /* general/dynamic/indirect/instruction access Bound */ + *cs++ = 0; + *cs++ = BASE_ADDRESS_MODIFY; + *cs++ = 0; + *cs++ = BASE_ADDRESS_MODIFY; + *cs++ = 0; + *cs++ = 0; + batch_advance(batch, cs); +} + +static void +gen7_emit_vfe_state(struct batch_chunk *batch, + const struct batch_vals *bv, + u32 urb_size, u32 curbe_size, + u32 mode) +{ + u32 urb_entries = bv->max_urb_entries; + u32 threads = bv->max_primitives - 1; + u32 *cs = batch_alloc_items(batch, 32, 8); + + *cs++ = MEDIA_VFE_STATE | (8 - 2); + + /* scratch buffer */ + *cs++ = 0; + + /* number of threads & urb entries for GPGPU vs Media Mode */ + *cs++ = threads << 16 | urb_entries << 8 | mode << 2; + + *cs++ = 0; + + /* urb entry size & curbe size in 256 bits unit */ + *cs++ = urb_size << 16 | curbe_size; + + /* scoreboard */ + *cs++ = 0; + *cs++ = 0; + *cs++ = 0; + batch_advance(batch, cs); +} + +static void +gen7_emit_interface_descriptor_load(struct batch_chunk *batch, + const u32 interface_descriptor, + unsigned int count) +{ + u32 *cs = 
batch_alloc_items(batch, 8, 4); + + *cs++ = MEDIA_INTERFACE_DESCRIPTOR_LOAD | (4 - 2); + *cs++ = 0; + *cs++ = count * 8 * sizeof(*cs); + + /* + * interface descriptor address - it is relative to the dynamics base + * address + */ + *cs++ = interface_descriptor; + batch_advance(batch, cs); +} + +static void +gen7_emit_media_object(struct batch_chunk *batch, + unsigned int media_object_index) +{ + unsigned int x_offset = (media_object_index % 16) * 64; + unsigned int y_offset = (media_object_index / 16) * 16; + unsigned int inline_data_size; + unsigned int media_batch_size; + unsigned int i; + u32 *cs; + + inline_data_size = 112 * 8; + media_batch_size = inline_data_size + 6; + + cs = batch_alloc_items(batch, 8, media_batch_size); + + *cs++ = MEDIA_OBJECT | (media_batch_size - 2); + + /* interface descriptor offset */ + *cs++ = 0; + + /* without indirect data */ + *cs++ = 0; + *cs++ = 0; + + /* scoreboard */ + *cs++ = 0; + *cs++ = 0; + + /* inline */ + *cs++ = (y_offset << 16) | (x_offset); + *cs++ = 0; + *cs++ = GT3_INLINE_DATA_DELAYS; + for (i = 3; i < inline_data_size; i++) + *cs++ = 0; + + batch_advance(batch, cs); +} + +static void gen7_emit_pipeline_flush(struct batch_chunk *batch) +{ + u32 *cs = batch_alloc_items(batch, 0, 5); + + *cs++ = GFX_OP_PIPE_CONTROL(5); + *cs++ = PIPE_CONTROL_STATE_CACHE_INVALIDATE | + PIPE_CONTROL_GLOBAL_GTT_IVB; + *cs++ = 0; + *cs++ = 0; + *cs++ = 0; + batch_advance(batch, cs); +} + +static void emit_batch(struct i915_vma * const vma, + u32 *start, + const struct batch_vals *bv) +{ + struct drm_i915_private *i915 = vma->vm->i915; + unsigned int desc_count = 64; + const u32 urb_size = 112; + struct batch_chunk cmds, state; + u32 interface_descriptor; + unsigned int i; + + batch_init(&cmds, vma, start, 0, bv->cmd_size); + batch_init(&state, vma, start, bv->state_start, bv->state_size); + + interface_descriptor = + gen7_fill_interface_descriptor(&state, bv, + IS_HASWELL(i915) ? 
+ &cb_kernel_hsw : &cb_kernel_gen7, + desc_count); + gen7_emit_pipeline_flush(&cmds); + batch_add(&cmds, PIPELINE_SELECT | PIPELINE_SELECT_MEDIA); + batch_add(&cmds, MI_NOOP); + gen7_emit_state_base_address(&cmds, interface_descriptor); + gen7_emit_pipeline_flush(&cmds); + + gen7_emit_vfe_state(&cmds, bv, urb_size - 1, 0, 0); + + gen7_emit_interface_descriptor_load(&cmds, + interface_descriptor, + desc_count); + + for (i = 0; i < bv->max_primitives; i++) + gen7_emit_media_object(&cmds, i); + + batch_add(&cmds, MI_BATCH_BUFFER_END); +} + +int gen7_setup_clear_gpr_bb(struct intel_engine_cs * const engine, + struct i915_vma * const vma) +{ + struct batch_vals bv; + u32 *batch; + + batch_get_defaults(engine->i915, &bv); + if (!vma) + return bv.max_size; + + GEM_BUG_ON(vma->obj->base.size < bv.max_size); + + batch = i915_gem_object_pin_map(vma->obj, I915_MAP_WC); + if (IS_ERR(batch)) + return PTR_ERR(batch); + + emit_batch(vma, memset(batch, 0, bv.max_size), &bv); + + i915_gem_object_flush_map(vma->obj); + i915_gem_object_unpin_map(vma->obj); + + return 0; +} diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.h b/drivers/gpu/drm/i915/gt/gen7_renderclear.h new file mode 100644 index 000000000000..bb100748e2c6 --- /dev/null +++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2019 Intel Corporation + */ + +#ifndef __GEN7_RENDERCLEAR_H__ +#define __GEN7_RENDERCLEAR_H__ + +struct intel_engine_cs; +struct i915_vma; + +int gen7_setup_clear_gpr_bb(struct intel_engine_cs * const engine, + struct i915_vma * const vma); + +#endif /* __GEN7_RENDERCLEAR_H__ */ diff --git a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h index 51b8718513bc..f04214a54f75 100644 --- a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h +++ b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h @@ -292,10 +292,21 @@ #define MI_STORE_URB_MEM MI_INSTR(0x2D, 0) #define MI_CONDITIONAL_BATCH_BUFFER_END 
MI_INSTR(0x36, 0) -#define PIPELINE_SELECT ((0x3<<29)|(0x1<<27)|(0x1<<24)|(0x4<<16)) -#define GFX_OP_3DSTATE_VF_STATISTICS ((0x3<<29)|(0x1<<27)|(0x0<<24)|(0xB<<16)) -#define MEDIA_VFE_STATE ((0x3<<29)|(0x2<<27)|(0x0<<24)|(0x0<<16)) +#define STATE_BASE_ADDRESS \ + ((0x3 << 29) | (0x0 << 27) | (0x1 << 24) | (0x1 << 16)) +#define BASE_ADDRESS_MODIFY REG_BIT(0) +#define PIPELINE_SELECT \ + ((0x3 << 29) | (0x1 << 27) | (0x1 << 24) | (0x4 << 16)) +#define PIPELINE_SELECT_MEDIA REG_BIT(0) +#define GFX_OP_3DSTATE_VF_STATISTICS \ + ((0x3 << 29) | (0x1 << 27) | (0x0 << 24) | (0xB << 16)) +#define MEDIA_VFE_STATE \ + ((0x3 << 29) | (0x2 << 27) | (0x0 << 24) | (0x0 << 16)) #define MEDIA_VFE_STATE_MMIO_ACCESS_MASK (0x18) +#define MEDIA_INTERFACE_DESCRIPTOR_LOAD \ + ((0x3 << 29) | (0x2 << 27) | (0x0 << 24) | (0x2 << 16)) +#define MEDIA_OBJECT \ + ((0x3 << 29) | (0x2 << 27) | (0x1 << 24) | (0x0 << 16)) #define GPGPU_OBJECT ((0x3<<29)|(0x2<<27)|(0x1<<24)|(0x4<<16)) #define GPGPU_WALKER ((0x3<<29)|(0x2<<27)|(0x1<<24)|(0x5<<16)) #define GFX_OP_3DSTATE_DX9_CONSTANTF_VS \ diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index 9f3bfe499446..dc0632ea9fde 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -34,6 +34,7 @@ #include "gem/i915_gem_context.h" #include "gen6_ppgtt.h" +#include "gen7_renderclear.h" #include "i915_drv.h" #include "i915_trace.h" #include "intel_context.h" @@ -2008,7 +2009,7 @@ static void setup_vecs(struct intel_engine_cs *engine) static int gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine, struct i915_vma * const vma) { - return 0; + return gen7_setup_clear_gpr_bb(engine, vma); } static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine) diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h index b0ade76bec90..7ac5b3565845 100644 --- a/drivers/gpu/drm/i915/i915_utils.h +++ 
b/drivers/gpu/drm/i915/i915_utils.h @@ -172,6 +172,11 @@ __check_struct_size(size_t base, size_t arr, size_t count, size_t *size) (typeof(ptr))(__v + 1); \ }) +#define ptr_align(ptr, align) ({ \ + unsigned long __v = (unsigned long)(ptr); \ + (typeof(ptr))round_up(__v, (align)); \ +}) + #define page_mask_bits(ptr) ptr_mask_bits(ptr, PAGE_SHIFT) #define page_unmask_bits(ptr) ptr_unmask_bits(ptr, PAGE_SHIFT) #define page_pack_bits(ptr, bits) ptr_pack_bits(ptr, bits, PAGE_SHIFT) -- 2.20.1 _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH 2/2] drm/i915/gen7: Clear all EU/L3 residual contexts 2020-01-30 16:57 ` [Intel-gfx] " Akeem G Abodunrin @ 2020-01-31 9:25 ` Jani Nikula -1 siblings, 0 replies; 20+ messages in thread From: Jani Nikula @ 2020-01-31 9:25 UTC (permalink / raw) To: Akeem G Abodunrin, akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi, d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt, daniel.vetter, joonas.lahtinen, chris.p.wilson, prathap.kumar.valsan, mika.kuoppala, francesco.balestrieri On Thu, 30 Jan 2020, Akeem G Abodunrin <akeem.g.abodunrin@intel.com> wrote: > diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h > index b0ade76bec90..7ac5b3565845 100644 > --- a/drivers/gpu/drm/i915/i915_utils.h > +++ b/drivers/gpu/drm/i915/i915_utils.h > @@ -172,6 +172,11 @@ __check_struct_size(size_t base, size_t arr, size_t count, size_t *size) > (typeof(ptr))(__v + 1); \ > }) > > +#define ptr_align(ptr, align) ({ \ > + unsigned long __v = (unsigned long)(ptr); \ > + (typeof(ptr))round_up(__v, (align)); \ > +}) > + There's PTR_ALIGN() in include/kernel.h. BR, Jani. -- Jani Nikula, Intel Open Source Graphics Center _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH 2/2] drm/i915/gen7: Clear all EU/L3 residual contexts 2020-01-30 16:57 ` [Intel-gfx] " Akeem G Abodunrin @ 2020-01-31 9:52 ` Joonas Lahtinen -1 siblings, 0 replies; 20+ messages in thread From: Joonas Lahtinen @ 2020-01-31 9:52 UTC (permalink / raw) To: akeem.g.abodunrin, chris.p.wilson, d.scott.phillips, daniel.vetter, david.c.stewart, dri-devel, francesco.balestrieri, intel-gfx, jani.nikula, jon.bloomfield, mika.kuoppala, omer.aran, pragyansri.pathi, prathap.kumar.valsan, sudeep.dutt, tony.luck Quoting Akeem G Abodunrin (2020-01-30 18:57:21) > From: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com> > > On gen7 and gen7.5 devices, there could be leftover data residuals in > EU/L3 from the retiring context. This patch introduces workaround to clear > that residual contexts, by submitting a batch buffer with dedicated HW > context to the GPU with ring allocation for each context switching. > > This security mitigation change does not trigger any performance > regression. Performance is on par with current mainline/drm-tip. 
> > Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> > Signed-off-by: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com> > Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> > Cc: Chris Wilson <chris.p.wilson@intel.com> > Cc: Balestrieri Francesco <francesco.balestrieri@intel.com> > Cc: Bloomfield Jon <jon.bloomfield@intel.com> > Cc: Dutt Sudeep <sudeep.dutt@intel.com> > --- > drivers/gpu/drm/i915/Makefile | 1 + > drivers/gpu/drm/i915/gt/gen7_renderclear.c | 535 ++++++++++++++++++ > drivers/gpu/drm/i915/gt/gen7_renderclear.h | 15 + > drivers/gpu/drm/i915/gt/intel_gpu_commands.h | 17 +- > .../gpu/drm/i915/gt/intel_ring_submission.c | 3 +- > drivers/gpu/drm/i915/i915_utils.h | 5 + > 6 files changed, 572 insertions(+), 4 deletions(-) > create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.c > create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.h > > diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile > index 3c88d7d8c764..f96bae664a03 100644 > --- a/drivers/gpu/drm/i915/Makefile > +++ b/drivers/gpu/drm/i915/Makefile > @@ -78,6 +78,7 @@ gt-y += \ > gt/debugfs_gt.o \ > gt/debugfs_gt_pm.o \ > gt/gen6_ppgtt.o \ > + gt/gen7_renderclear.o \ > gt/gen8_ppgtt.o \ > gt/intel_breadcrumbs.o \ > gt/intel_context.o \ > diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c > new file mode 100644 > index 000000000000..a6f5f1602e33 > --- /dev/null > +++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c > @@ -0,0 +1,535 @@ > +// SPDX-License-Identifier: MIT > +/* > + * Copyright © 2019 Intel Corporation > + */ > + > +#include "gen7_renderclear.h" > +#include "i915_drv.h" > +#include "i915_utils.h" > +#include "intel_gpu_commands.h" > + > +#define MAX_URB_ENTRIES 64 > +#define STATE_SIZE (4 * 1024) > +#define GT3_INLINE_DATA_DELAYS 0x1E00 > +#define batch_advance(Y, CS) GEM_BUG_ON((Y)->end != (CS)) > + > +/* > + * Media CB Kernel for gen7 devices > + * TODO: Add comments to kernel, 
indicating what each array of hex does or > + * include header file, which has assembly source and support in igt to be > + * able to generate kernel in this same format > + */ Having the original source code for the kernels in IGT is the best way to proceed. The kernels should also be split into separate files which can be generated from IGT and copied over as-is for easy verification. Regards, Joonas _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Intel-gfx] [PATCH 2/2] drm/i915/gen7: Clear all EU/L3 residual contexts 2020-01-31 9:52 ` [Intel-gfx] " Joonas Lahtinen (?) @ 2020-01-31 15:45 ` Bloomfield, Jon 2020-02-03 9:42 ` Joonas Lahtinen -1 siblings, 1 reply; 20+ messages in thread From: Bloomfield, Jon @ 2020-01-31 15:45 UTC (permalink / raw) To: Abodunrin, Akeem G, Wilson, Chris P, Vetter, Daniel, Balestrieri, Francesco, intel-gfx, Nikula, Jani, Kuoppala, Mika, Kumar Valsan, Prathap, Dutt, Sudeep Reducing audience as this series is of high interest externally. I fully agree with Joonas' suggestion here, and we have been looking at doing just that. But can we iterate as a follow up patch series? Putting in the infra to support igt assembly from source will take a little time (igt assembler doesn't like the source right now, so it looks like it will need updating), and we are under pressure to get this security fix out. Jon > -----Original Message----- > From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> > Sent: Friday, January 31, 2020 1:52 AM > To: Abodunrin, Akeem G <akeem.g.abodunrin@intel.com>; Wilson, Chris P > <chris.p.wilson@intel.com>; Phillips, D Scott <d.scott.phillips@intel.com>; > Vetter, Daniel <daniel.vetter@intel.com>; Stewart, David C > <david.c.stewart@intel.com>; dri-devel@lists.freedesktop.org; Balestrieri, > Francesco <francesco.balestrieri@intel.com>; intel-gfx@lists.freedesktop.org; > Nikula, Jani <jani.nikula@intel.com>; Bloomfield, Jon > <jon.bloomfield@intel.com>; Kuoppala, Mika <mika.kuoppala@intel.com>; > Aran, Omer <omer.aran@intel.com>; Pathi, Pragyansri > <pragyansri.pathi@intel.com>; Kumar Valsan, Prathap > <prathap.kumar.valsan@intel.com>; Dutt, Sudeep <sudeep.dutt@intel.com>; > Luck, Tony <tony.luck@intel.com> > Subject: Re: [PATCH 2/2] drm/i915/gen7: Clear all EU/L3 residual contexts > > Quoting Akeem G Abodunrin (2020-01-30 18:57:21) > > From: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com> > > > > On gen7 and gen7.5 devices, there could be leftover data 
residuals in > > EU/L3 from the retiring context. This patch introduces workaround to clear > > that residual contexts, by submitting a batch buffer with dedicated HW > > context to the GPU with ring allocation for each context switching. > > > > This security mitigation change does not trigger any performance > > regression. Performance is on par with current mainline/drm-tip. > > > > Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> > > Signed-off-by: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com> > > Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> > > Cc: Chris Wilson <chris.p.wilson@intel.com> > > Cc: Balestrieri Francesco <francesco.balestrieri@intel.com> > > Cc: Bloomfield Jon <jon.bloomfield@intel.com> > > Cc: Dutt Sudeep <sudeep.dutt@intel.com> > > --- > > drivers/gpu/drm/i915/Makefile | 1 + > > drivers/gpu/drm/i915/gt/gen7_renderclear.c | 535 ++++++++++++++++++ > > drivers/gpu/drm/i915/gt/gen7_renderclear.h | 15 + > > drivers/gpu/drm/i915/gt/intel_gpu_commands.h | 17 +- > > .../gpu/drm/i915/gt/intel_ring_submission.c | 3 +- > > drivers/gpu/drm/i915/i915_utils.h | 5 + > > 6 files changed, 572 insertions(+), 4 deletions(-) > > create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.c > > create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.h > > > > diff --git a/drivers/gpu/drm/i915/Makefile > b/drivers/gpu/drm/i915/Makefile > > index 3c88d7d8c764..f96bae664a03 100644 > > --- a/drivers/gpu/drm/i915/Makefile > > +++ b/drivers/gpu/drm/i915/Makefile > > @@ -78,6 +78,7 @@ gt-y += \ > > gt/debugfs_gt.o \ > > gt/debugfs_gt_pm.o \ > > gt/gen6_ppgtt.o \ > > + gt/gen7_renderclear.o \ > > gt/gen8_ppgtt.o \ > > gt/intel_breadcrumbs.o \ > > gt/intel_context.o \ > > diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c > b/drivers/gpu/drm/i915/gt/gen7_renderclear.c > > new file mode 100644 > > index 000000000000..a6f5f1602e33 > > --- /dev/null > > +++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c > > @@ -0,0 +1,535 @@ > > +// 
SPDX-License-Identifier: MIT > > +/* > > + * Copyright © 2019 Intel Corporation > > + */ > > + > > +#include "gen7_renderclear.h" > > +#include "i915_drv.h" > > +#include "i915_utils.h" > > +#include "intel_gpu_commands.h" > > + > > +#define MAX_URB_ENTRIES 64 > > +#define STATE_SIZE (4 * 1024) > > +#define GT3_INLINE_DATA_DELAYS 0x1E00 > > +#define batch_advance(Y, CS) GEM_BUG_ON((Y)->end != (CS)) > > + > > +/* > > + * Media CB Kernel for gen7 devices > > + * TODO: Add comments to kernel, indicating what each array of hex does > or > > + * include header file, which has assembly source and support in igt to be > > + * able to generate kernel in this same format > > + */ > > Having the original source code for the kernels in IGT is the > best way to proceed. The kernels should also be split into > separate files which can be generated from IGT and copied > over as-is for easy verification. > > Regards, Joonas _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Intel-gfx] [PATCH 2/2] drm/i915/gen7: Clear all EU/L3 residual contexts 2020-01-31 15:45 ` Bloomfield, Jon @ 2020-02-03 9:42 ` Joonas Lahtinen 0 siblings, 0 replies; 20+ messages in thread From: Joonas Lahtinen @ 2020-02-03 9:42 UTC (permalink / raw) To: Abodunrin, Akeem G, Balestrieri, Francesco, Bloomfield, Jon, Dutt, Sudeep, Kumar Valsan, Prathap, Kuoppala, Mika, Nikula, Jani, Vetter, Daniel, Wilson, Chris P, intel-gfx Quoting Bloomfield, Jon (2020-01-31 17:45:02) > Reducing audience as this series is of high interest externally. > > I fully agree with Joonas' suggestion here, and we have been looking at doing just that. But can we iterate as a follow up patch series? Putting in the infra to support igt assembly from source will take a little time (igt assembler doesn't like the source right now, so it looks like it will need updating), and we are under pressure to get this security fix out. In order to merge upstream, we need to be able to show the readable source code with appropriate license for the execution kernel and provide compiling instructions. Compiling with publicly available tools is enough and at merge time it doesn't necessarily have to integrate with IGT fully due to security aspect. If we can't fulfill those requirements, then the right place for the blob is in linux-firmware. Regards, Joonas _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 20+ messages in thread
* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Security mitigation for Intel Gen7/7.5 HWs 2020-01-30 16:57 ` [Intel-gfx] " Akeem G Abodunrin ` (2 preceding siblings ...) (?) @ 2020-01-31 2:33 ` Patchwork -1 siblings, 0 replies; 20+ messages in thread From: Patchwork @ 2020-01-31 2:33 UTC (permalink / raw) To: Akeem G Abodunrin; +Cc: intel-gfx == Series Details == Series: Security mitigation for Intel Gen7/7.5 HWs URL : https://patchwork.freedesktop.org/series/72799/ State : warning == Summary == $ dim checkpatch origin/drm-tip 07091ed6f557 drm/i915: Add mechanism to submit a context WA on ring submission 1c35be27250b drm/i915/gen7: Clear all EU/L3 residual contexts -:35: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating? #35: new file mode 100644 total: 0 errors, 1 warnings, 0 checks, 607 lines checked _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 20+ messages in thread
* [Intel-gfx] ✓ Fi.CI.BAT: success for Security mitigation for Intel Gen7/7.5 HWs 2020-01-30 16:57 ` [Intel-gfx] " Akeem G Abodunrin ` (3 preceding siblings ...) (?) @ 2020-01-31 3:03 ` Patchwork 2020-01-31 9:33 ` Chris Wilson -1 siblings, 1 reply; 20+ messages in thread From: Patchwork @ 2020-01-31 3:03 UTC (permalink / raw) To: Akeem G Abodunrin; +Cc: intel-gfx == Series Details == Series: Security mitigation for Intel Gen7/7.5 HWs URL : https://patchwork.freedesktop.org/series/72799/ State : success == Summary == CI Bug Log - changes from CI_DRM_7847 -> Patchwork_16347 ==================================================== Summary ------- **WARNING** Minor unknown changes coming with Patchwork_16347 need to be verified manually. If you think the reported changes have nothing to do with the changes introduced in Patchwork_16347, please notify your bug team to allow them to document this new failure mode, which will reduce false positives in CI. External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/index.html Possible new issues ------------------- Here are the unknown changes that may have been introduced in Patchwork_16347: ### IGT changes ### #### Warnings #### * igt@kms_psr@primary_mmap_gtt: - fi-skl-6770hq: [SKIP][1] ([fdo#109271]) -> [INCOMPLETE][2] [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/fi-skl-6770hq/igt@kms_psr@primary_mmap_gtt.html [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/fi-skl-6770hq/igt@kms_psr@primary_mmap_gtt.html Known issues ------------ Here are the changes found in Patchwork_16347 that come from known issues: ### IGT changes ### #### Issues hit #### * igt@i915_getparams_basic@basic-eu-total: - fi-tgl-y: [PASS][3] -> [DMESG-WARN][4] ([CI#94] / [i915#402]) [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/fi-tgl-y/igt@i915_getparams_basic@basic-eu-total.html [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/fi-tgl-y/igt@i915_getparams_basic@basic-eu-total.html * 
igt@i915_module_load@reload-with-fault-injection: - fi-icl-guc: [PASS][5] -> [DMESG-WARN][6] ([i915#109]) [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/fi-icl-guc/igt@i915_module_load@reload-with-fault-injection.html [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/fi-icl-guc/igt@i915_module_load@reload-with-fault-injection.html * igt@i915_selftest@live_gem_contexts: - fi-cfl-8700k: [PASS][7] -> [INCOMPLETE][8] ([i915#424]) [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/fi-cfl-8700k/igt@i915_selftest@live_gem_contexts.html [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/fi-cfl-8700k/igt@i915_selftest@live_gem_contexts.html * igt@kms_flip@basic-flip-vs-wf_vblank: - fi-pnv-d510: [PASS][9] -> [FAIL][10] ([i915#34]) [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/fi-pnv-d510/igt@kms_flip@basic-flip-vs-wf_vblank.html [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/fi-pnv-d510/igt@kms_flip@basic-flip-vs-wf_vblank.html #### Possible fixes #### * igt@gem_flink_basic@bad-flink: - fi-tgl-y: [DMESG-WARN][11] ([CI#94] / [i915#402]) -> [PASS][12] [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/fi-tgl-y/igt@gem_flink_basic@bad-flink.html [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/fi-tgl-y/igt@gem_flink_basic@bad-flink.html * igt@i915_selftest@live_execlists: - fi-icl-y: [DMESG-FAIL][13] ([fdo#108569]) -> [PASS][14] [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/fi-icl-y/igt@i915_selftest@live_execlists.html [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/fi-icl-y/igt@i915_selftest@live_execlists.html * igt@i915_selftest@live_gem_contexts: - fi-cfl-guc: [INCOMPLETE][15] ([fdo#106070] / [i915#424]) -> [PASS][16] [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/fi-cfl-guc/igt@i915_selftest@live_gem_contexts.html [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/fi-cfl-guc/igt@i915_selftest@live_gem_contexts.html * 
igt@kms_flip@basic-flip-vs-wf_vblank: - fi-bwr-2160: [FAIL][17] ([i915#34]) -> [PASS][18] [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/fi-bwr-2160/igt@kms_flip@basic-flip-vs-wf_vblank.html [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/fi-bwr-2160/igt@kms_flip@basic-flip-vs-wf_vblank.html [CI#94]: https://gitlab.freedesktop.org/gfx-ci/i915-infra/issues/94 [fdo#106070]: https://bugs.freedesktop.org/show_bug.cgi?id=106070 [fdo#108569]: https://bugs.freedesktop.org/show_bug.cgi?id=108569 [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271 [i915#109]: https://gitlab.freedesktop.org/drm/intel/issues/109 [i915#34]: https://gitlab.freedesktop.org/drm/intel/issues/34 [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402 [i915#424]: https://gitlab.freedesktop.org/drm/intel/issues/424 Participating hosts (47 -> 42) ------------------------------ Additional (4): fi-bdw-gvtdvm fi-gdg-551 fi-elk-e7500 fi-kbl-7500u Missing (9): fi-hsw-4770r fi-icl-1065g7 fi-hsw-4200u fi-byt-j1900 fi-bsw-cyan fi-hsw-4770 fi-ivb-3770 fi-byt-n2820 fi-bdw-samus Build changes ------------- * CI: CI-20190529 -> None * Linux: CI_DRM_7847 -> Patchwork_16347 CI-20190529: 20190529 CI_DRM_7847: 2515f8cc5d56f8791232dc6b077a370658d4cecf @ git://anongit.freedesktop.org/gfx-ci/linux IGT_5407: a9d69f51dadbcbc53527671f87572d05c3370cba @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools Patchwork_16347: 1c35be27250b4dfe97056048040e1c628cb376c1 @ git://anongit.freedesktop.org/gfx-ci/linux == Linux commits == 1c35be27250b drm/i915/gen7: Clear all EU/L3 residual contexts 07091ed6f557 drm/i915: Add mechanism to submit a context WA on ring submission == Logs == For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/index.html _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 20+ messages in 
thread
* Re: [Intel-gfx] ✓ Fi.CI.BAT: success for Security mitigation for Intel Gen7/7.5 HWs 2020-01-31 3:03 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork @ 2020-01-31 9:33 ` Chris Wilson 0 siblings, 0 replies; 20+ messages in thread From: Chris Wilson @ 2020-01-31 9:33 UTC (permalink / raw) To: Akeem G Abodunrin, Patchwork, intel-gfx; +Cc: intel-gfx Quoting Patchwork (2020-01-31 03:03:37) > Participating hosts (47 -> 42) > ------------------------------ > > Additional (4): fi-bdw-gvtdvm fi-gdg-551 fi-elk-e7500 fi-kbl-7500u > Missing (9): fi-hsw-4770r fi-icl-1065g7 fi-hsw-4200u fi-byt-j1900 fi-bsw-cyan fi-hsw-4770 fi-ivb-3770 fi-byt-n2820 fi-bdw-samus Please note the absence of any gen7. None survived booting. Hint, it's the same misplaced batch_advance as last time. -Chris _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 20+ messages in thread
* [Intel-gfx] ✓ Fi.CI.IGT: success for Security mitigation for Intel Gen7/7.5 HWs 2020-01-30 16:57 ` [Intel-gfx] " Akeem G Abodunrin ` (4 preceding siblings ...) (?) @ 2020-02-02 22:27 ` Patchwork -1 siblings, 0 replies; 20+ messages in thread From: Patchwork @ 2020-02-02 22:27 UTC (permalink / raw) To: Akeem G Abodunrin; +Cc: intel-gfx == Series Details == Series: Security mitigation for Intel Gen7/7.5 HWs URL : https://patchwork.freedesktop.org/series/72799/ State : success == Summary == CI Bug Log - changes from CI_DRM_7847_full -> Patchwork_16347_full ==================================================== Summary ------- **SUCCESS** No regressions found. Known issues ------------ Here are the changes found in Patchwork_16347_full that come from known issues: ### IGT changes ### #### Issues hit #### * igt@gem_exec_balancer@smoke: - shard-iclb: [PASS][1] -> [SKIP][2] ([fdo#110854]) [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb1/igt@gem_exec_balancer@smoke.html [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb7/igt@gem_exec_balancer@smoke.html * igt@gem_exec_schedule@pi-common-bsd: - shard-iclb: [PASS][3] -> [SKIP][4] ([i915#677]) +1 similar issue [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb5/igt@gem_exec_schedule@pi-common-bsd.html [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb2/igt@gem_exec_schedule@pi-common-bsd.html * igt@gem_exec_schedule@preempt-other-chain-bsd: - shard-iclb: [PASS][5] -> [SKIP][6] ([fdo#112146]) +8 similar issues [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb8/igt@gem_exec_schedule@preempt-other-chain-bsd.html [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb2/igt@gem_exec_schedule@preempt-other-chain-bsd.html * igt@i915_pm_dc@dc5-dpms: - shard-iclb: [PASS][7] -> [FAIL][8] ([i915#447]) [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb7/igt@i915_pm_dc@dc5-dpms.html [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb3/igt@i915_pm_dc@dc5-dpms.html * igt@i915_pm_dc@dc6-psr: - shard-iclb: [PASS][9] -> [FAIL][10] ([i915#454]) [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb1/igt@i915_pm_dc@dc6-psr.html [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb8/igt@i915_pm_dc@dc6-psr.html * igt@i915_suspend@forcewake: - shard-skl: [PASS][11] -> [INCOMPLETE][12] ([i915#69]) [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl10/igt@i915_suspend@forcewake.html [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl4/igt@i915_suspend@forcewake.html * igt@kms_draw_crc@draw-method-rgb565-mmap-wc-xtiled: - shard-skl: [PASS][13] -> [FAIL][14] ([i915#52] / [i915#54]) +1 similar issue [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl5/igt@kms_draw_crc@draw-method-rgb565-mmap-wc-xtiled.html [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl4/igt@kms_draw_crc@draw-method-rgb565-mmap-wc-xtiled.html * igt@kms_flip@flip-vs-suspend: - shard-skl: [PASS][15] -> [INCOMPLETE][16] ([i915#221]) [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl10/igt@kms_flip@flip-vs-suspend.html [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl7/igt@kms_flip@flip-vs-suspend.html * igt@kms_flip@flip-vs-suspend-interruptible: - shard-apl: [PASS][17] -> [DMESG-WARN][18] ([i915#180]) [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-apl6/igt@kms_flip@flip-vs-suspend-interruptible.html [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-apl8/igt@kms_flip@flip-vs-suspend-interruptible.html * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes: - shard-kbl: [PASS][19] -> [DMESG-WARN][20] ([i915#180]) +3 similar issues [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-kbl4/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes.html [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-kbl4/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes.html * igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min: - shard-skl: [PASS][21] -> [FAIL][22] ([fdo#108145]) [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl9/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl5/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html * igt@kms_plane_alpha_blend@pipe-b-coverage-7efc: - shard-skl: [PASS][23] -> [FAIL][24] ([fdo#108145] / [i915#265]) [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl5/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl7/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html * igt@kms_plane_multiple@atomic-pipe-b-tiling-yf: - shard-skl: [PASS][25] -> [DMESG-WARN][26] ([IGT#6]) [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl4/igt@kms_plane_multiple@atomic-pipe-b-tiling-yf.html [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl7/igt@kms_plane_multiple@atomic-pipe-b-tiling-yf.html * igt@kms_psr2_su@page_flip: - shard-iclb: [PASS][27] -> [SKIP][28] ([fdo#109642] / [fdo#111068]) [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb2/igt@kms_psr2_su@page_flip.html [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb8/igt@kms_psr2_su@page_flip.html * igt@kms_psr@psr2_primary_page_flip: - shard-iclb: [PASS][29] -> [SKIP][30] ([fdo#109441]) +2 similar issues [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb2/igt@kms_psr@psr2_primary_page_flip.html [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb5/igt@kms_psr@psr2_primary_page_flip.html * igt@kms_setmode@basic: - shard-skl: [PASS][31] -> [FAIL][32] ([i915#31]) [31]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl7/igt@kms_setmode@basic.html [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl10/igt@kms_setmode@basic.html * igt@perf_pmu@busy-check-all-vcs1: - shard-iclb: [PASS][33] -> [SKIP][34] ([fdo#112080]) +15 similar issues [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb1/igt@perf_pmu@busy-check-all-vcs1.html [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb8/igt@perf_pmu@busy-check-all-vcs1.html * igt@prime_vgem@fence-wait-bsd2: - shard-iclb: [PASS][35] -> [SKIP][36] ([fdo#109276]) +23 similar issues [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb2/igt@prime_vgem@fence-wait-bsd2.html [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb5/igt@prime_vgem@fence-wait-bsd2.html #### Possible fixes #### * igt@gem_ctx_shared@exec-single-timeline-bsd: - shard-iclb: [SKIP][37] ([fdo#110841]) -> [PASS][38] [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb2/igt@gem_ctx_shared@exec-single-timeline-bsd.html [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb5/igt@gem_ctx_shared@exec-single-timeline-bsd.html * igt@gem_exec_balancer@hang: - shard-tglb: [TIMEOUT][39] ([fdo#112271]) -> [PASS][40] [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-tglb8/igt@gem_exec_balancer@hang.html [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-tglb2/igt@gem_exec_balancer@hang.html * igt@gem_exec_schedule@out-order-bsd2: - shard-iclb: [SKIP][41] ([fdo#109276]) -> [PASS][42] +26 similar issues [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb7/igt@gem_exec_schedule@out-order-bsd2.html [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb1/igt@gem_exec_schedule@out-order-bsd2.html * igt@gem_exec_schedule@reorder-wide-bsd: - shard-iclb: [SKIP][43] ([fdo#112146]) -> [PASS][44] +4 similar issues [43]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb1/igt@gem_exec_schedule@reorder-wide-bsd.html [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb8/igt@gem_exec_schedule@reorder-wide-bsd.html * igt@gem_exec_store@cachelines-vcs1: - shard-iclb: [SKIP][45] ([fdo#112080]) -> [PASS][46] +10 similar issues [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb6/igt@gem_exec_store@cachelines-vcs1.html [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb2/igt@gem_exec_store@cachelines-vcs1.html * igt@gem_ppgtt@flink-and-close-vma-leak: - shard-glk: [FAIL][47] ([i915#644]) -> [PASS][48] [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-glk9/igt@gem_ppgtt@flink-and-close-vma-leak.html [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-glk8/igt@gem_ppgtt@flink-and-close-vma-leak.html - shard-skl: [FAIL][49] ([i915#644]) -> [PASS][50] [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl7/igt@gem_ppgtt@flink-and-close-vma-leak.html [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl10/igt@gem_ppgtt@flink-and-close-vma-leak.html * igt@i915_pm_rps@reset: - shard-iclb: [FAIL][51] ([i915#413]) -> [PASS][52] +1 similar issue [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb2/igt@i915_pm_rps@reset.html [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb5/igt@i915_pm_rps@reset.html * igt@kms_color@pipe-a-ctm-green-to-red: - shard-skl: [DMESG-WARN][53] ([i915#109]) -> [PASS][54] +2 similar issues [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl6/igt@kms_color@pipe-a-ctm-green-to-red.html [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl9/igt@kms_color@pipe-a-ctm-green-to-red.html * igt@kms_draw_crc@draw-method-xrgb8888-mmap-cpu-ytiled: - shard-skl: [FAIL][55] ([i915#52] / [i915#54]) -> [PASS][56] [55]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl5/igt@kms_draw_crc@draw-method-xrgb8888-mmap-cpu-ytiled.html [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl4/igt@kms_draw_crc@draw-method-xrgb8888-mmap-cpu-ytiled.html * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a: - shard-kbl: [DMESG-WARN][57] ([i915#180]) -> [PASS][58] +7 similar issues [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-kbl7/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-kbl2/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes: - shard-kbl: [FAIL][59] ([i915#1036]) -> [PASS][60] [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-kbl3/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-kbl1/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min: - shard-skl: [FAIL][61] ([fdo#108145]) -> [PASS][62] [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl9/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl5/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html * igt@kms_plane_multiple@atomic-pipe-a-tiling-y: - shard-skl: [DMESG-WARN][63] ([IGT#6]) -> [PASS][64] +1 similar issue [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-skl7/igt@kms_plane_multiple@atomic-pipe-a-tiling-y.html [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-skl1/igt@kms_plane_multiple@atomic-pipe-a-tiling-y.html * igt@kms_psr@psr2_primary_mmap_cpu: - shard-iclb: [SKIP][65] ([fdo#109441]) -> [PASS][66] +2 similar issues [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb6/igt@kms_psr@psr2_primary_mmap_cpu.html [66]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb2/igt@kms_psr@psr2_primary_mmap_cpu.html * igt@kms_vblank@pipe-a-ts-continuation-suspend: - shard-apl: [DMESG-WARN][67] ([i915#180]) -> [PASS][68] +1 similar issue [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-apl1/igt@kms_vblank@pipe-a-ts-continuation-suspend.html [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-apl6/igt@kms_vblank@pipe-a-ts-continuation-suspend.html #### Warnings #### * igt@gem_ctx_isolation@vcs1-nonpriv: - shard-iclb: [FAIL][69] ([IGT#28]) -> [SKIP][70] ([fdo#112080]) +1 similar issue [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-iclb2/igt@gem_ctx_isolation@vcs1-nonpriv.html [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-iclb5/igt@gem_ctx_isolation@vcs1-nonpriv.html * igt@gem_eio@in-flight-immediate: - shard-kbl: [TIMEOUT][71] ([fdo#112271]) -> [INCOMPLETE][72] ([CI#80] / [fdo#103665] / [i915#1098]) [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-kbl3/igt@gem_eio@in-flight-immediate.html [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-kbl1/igt@gem_eio@in-flight-immediate.html - shard-apl: [INCOMPLETE][73] ([CI#80] / [fdo#103927] / [i915#1098]) -> [TIMEOUT][74] ([fdo#112271]) [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-apl3/igt@gem_eio@in-flight-immediate.html [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-apl4/igt@gem_eio@in-flight-immediate.html * igt@i915_pm_rpm@system-suspend-execbuf: - shard-snb: [SKIP][75] ([fdo#109271]) -> [INCOMPLETE][76] ([i915#82]) [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7847/shard-snb6/igt@i915_pm_rpm@system-suspend-execbuf.html [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/shard-snb5/igt@i915_pm_rpm@system-suspend-execbuf.html {name}: This element is suppressed. This means it is ignored when computing the status of the difference (SUCCESS, WARNING, or FAILURE). 
[CI#80]: https://gitlab.freedesktop.org/gfx-ci/i915-infra/issues/80 [IGT#28]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/28 [IGT#6]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/6 [IGT#68]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/68 [fdo#103665]: https://bugs.freedesktop.org/show_bug.cgi?id=103665 [fdo#103927]: https://bugs.freedesktop.org/show_bug.cgi?id=103927 [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145 [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271 [fdo#109276]: https://bugs.freedesktop.org/show_bug.cgi?id=109276 [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441 [fdo#109642]: https://bugs.freedesktop.org/show_bug.cgi?id=109642 [fdo#110841]: https://bugs.freedesktop.org/show_bug.cgi?id=110841 [fdo#110854]: https://bugs.freedesktop.org/show_bug.cgi?id=110854 [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068 [fdo#112080]: https://bugs.freedesktop.org/show_bug.cgi?id=112080 [fdo#112146]: https://bugs.freedesktop.org/show_bug.cgi?id=112146 [fdo#112271]: https://bugs.freedesktop.org/show_bug.cgi?id=112271 [i915#1036]: https://gitlab.freedesktop.org/drm/intel/issues/1036 [i915#109]: https://gitlab.freedesktop.org/drm/intel/issues/109 [i915#1098]: https://gitlab.freedesktop.org/drm/intel/issues/1098 [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180 [i915#221]: https://gitlab.freedesktop.org/drm/intel/issues/221 [i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265 [i915#31]: https://gitlab.freedesktop.org/drm/intel/issues/31 [i915#413]: https://gitlab.freedesktop.org/drm/intel/issues/413 [i915#447]: https://gitlab.freedesktop.org/drm/intel/issues/447 [i915#454]: https://gitlab.freedesktop.org/drm/intel/issues/454 [i915#52]: https://gitlab.freedesktop.org/drm/intel/issues/52 [i915#54]: https://gitlab.freedesktop.org/drm/intel/issues/54 [i915#644]: https://gitlab.freedesktop.org/drm/intel/issues/644 [i915#677]: 
https://gitlab.freedesktop.org/drm/intel/issues/677 [i915#69]: https://gitlab.freedesktop.org/drm/intel/issues/69 [i915#82]: https://gitlab.freedesktop.org/drm/intel/issues/82 Participating hosts (10 -> 10) ------------------------------ No changes in participating hosts Build changes ------------- * CI: CI-20190529 -> None * Linux: CI_DRM_7847 -> Patchwork_16347 CI-20190529: 20190529 CI_DRM_7847: 2515f8cc5d56f8791232dc6b077a370658d4cecf @ git://anongit.freedesktop.org/gfx-ci/linux IGT_5407: a9d69f51dadbcbc53527671f87572d05c3370cba @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools Patchwork_16347: 1c35be27250b4dfe97056048040e1c628cb376c1 @ git://anongit.freedesktop.org/gfx-ci/linux piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit == Logs == For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16347/index.html _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 0/2] Security mitigation for Intel Gen7/7.5 HWs @ 2020-02-20 22:57 Akeem G Abodunrin 2020-02-20 22:57 ` [PATCH 1/2] drm/i915: Add mechanism to submit a context WA on ring submission Akeem G Abodunrin 0 siblings, 1 reply; 20+ messages in thread From: Akeem G Abodunrin @ 2020-02-20 22:57 UTC (permalink / raw) To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi, d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt, daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson, prathap.kumar.valsan, mika.kuoppala, francesco.balestrieri Intel ID: PSIRT-TA-201910-001 CVEID: CVE-2019-14615 Summary of Vulnerability ------------------------ Insufficient control flow in certain data structures for some Intel(R) Processors with Intel Processor Graphics may allow an unauthenticated user to potentially enable information disclosure via local access. Products affected: ------------------ Intel CPUs with Gen7, Gen7.5 and Gen9 Graphics. Mitigation Summary ------------------ This series provides mitigation for Gen7 and Gen7.5 hardware only. Patches for Gen9 devices have already been provided, merged into Linux mainline, and backported to stable kernels. Note that Gen8 is not impacted due to a previously implemented workaround. The mitigation involves submitting a custom EU kernel prior to every context restore, in order to forcibly clear down residual EU and URB resources. The custom CB kernels are generated/assembled automatically, using Mesa (an open-source tool) and the IGT GPU tools; the assembly sources are provided with the IGT source code. This security mitigation change does not trigger any known performance regression. Performance is on par with current mainline/drm-tip. 
Note on Address Space Isolation (Full PPGTT) -------------------------------------------- Isolation of EU kernel assets should be considered complementary to the existing support for address space isolation (aka Full PPGTT), since without address space isolation there is minimal value in preventing leakage between EU contexts. Full PPGTT has been supported on Gen graphics devices since Gen8, and protection against EU residual leakage is a welcome addition for these newer platforms. By contrast, Gen7 and Gen7.5 devices introduced Full PPGTT support only as a hardware development feature for anticipated Gen8 productization. Support was never intended for, or provided to, the Linux kernels for these platforms. Recent work (still ongoing) in the mainline kernel is retroactively providing this support, but due to the level of complexity it is not practical to attempt to backport it to earlier stable kernels. Since EU residuals protection is of questionable benefit without Full PPGTT, *there are no plans to provide stable kernel backports for this patch series.* Mika Kuoppala (1): drm/i915: Add mechanism to submit a context WA on ring submission Prathap Kumar Valsan (1): drm/i915/gen7: Clear all EU/L3 residual contexts drivers/gpu/drm/i915/Makefile | 1 + drivers/gpu/drm/i915/gt/gen7_renderclear.c | 402 ++++++++++++++++++ drivers/gpu/drm/i915/gt/gen7_renderclear.h | 15 + drivers/gpu/drm/i915/gt/hsw_clear_kernel.c | 61 +++ drivers/gpu/drm/i915/gt/intel_gpu_commands.h | 17 +- .../gpu/drm/i915/gt/intel_ring_submission.c | 135 +++++- drivers/gpu/drm/i915/gt/ivb_clear_kernel.c | 61 +++ 7 files changed, 685 insertions(+), 7 deletions(-) create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.c create mode 100644 drivers/gpu/drm/i915/gt/gen7_renderclear.h create mode 100644 drivers/gpu/drm/i915/gt/hsw_clear_kernel.c create mode 100644 drivers/gpu/drm/i915/gt/ivb_clear_kernel.c -- 2.20.1 _______________________________________________ dri-devel mailing list 
dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 1/2] drm/i915: Add mechanism to submit a context WA on ring submission 2020-02-20 22:57 [PATCH 0/2] " Akeem G Abodunrin @ 2020-02-20 22:57 ` Akeem G Abodunrin 0 siblings, 0 replies; 20+ messages in thread From: Akeem G Abodunrin @ 2020-02-20 22:57 UTC (permalink / raw) To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi, d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt, daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson, prathap.kumar.valsan, mika.kuoppala, francesco.balestrieri From: Mika Kuoppala <mika.kuoppala@linux.intel.com> This patch adds a framework to submit an arbitrary batchbuffer on each context switch, to clear residual state for the render engine on Gen7/7.5 devices. The idea of always emitting the context and vm setup around each request is primarily to make reset recovery easy, without requiring the ringbuffer to be rewritten. As each request would set up its own context, leaving it to the HW to notice and elide no-op context switches, we could restart the ring at any point, and reorder the requests freely. However, to avoid emitting clear_residuals() between consecutive requests in the ringbuffer of the same context, we do want to track the current context in the ring. In doing so, we need to be careful to only record a context switch when we are sure the next request will be emitted. This security mitigation change does not trigger any known performance regression. Performance is on par with current mainline/drm-tip. 
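The elision described in the commit message (emit clear_residuals() only when the incoming request belongs to a different user context than the one last recorded, and record the new owner only once the request is certain to be emitted) can be modeled in a few lines of userspace C. The structures and names below are illustrative stand-ins, not the actual i915 types:

```c
/* Userspace model (not kernel code) of the clear_residuals() elision
 * logic: a clear is submitted only when switching to a different user
 * context, and the "current dirty context" is recorded only past the
 * point of no return. */
#include <assert.h>
#include <stddef.h>

struct ctx { int id; };

struct engine {
	struct ctx *kernel_context;	/* never needs a clear before it */
	struct ctx *last_emitted;	/* models wa_ctx.vma->private */
	int clears;			/* counts clear_residuals() submissions */
};

static void clear_residuals(struct engine *e)
{
	e->clears++;
}

/* Mirrors the shape of switch_context(): decide on the clear first,
 * then only commit the tracking update once the request will be emitted. */
static void switch_context(struct engine *e, struct ctx *ce)
{
	struct ctx **residuals = NULL;

	if (ce != e->kernel_context && e->last_emitted != ce) {
		clear_residuals(e);
		residuals = &e->last_emitted;
	}

	/* ... context/vm setup would be emitted here ... */

	if (residuals)	/* past the point of no return */
		*residuals = ce;
}
```

Two back-to-back requests from the same context produce a single clear, and the kernel context itself never triggers one; the real driver additionally refcounts the tracked context, which this sketch omits.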
v2: Update vm_alias params to point to correct address space "vm" due to changes made in the patch "f21613797bae98773" v3-v4: none Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com> Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Balestrieri Francesco <francesco.balestrieri@intel.com> Cc: Bloomfield Jon <jon.bloomfield@intel.com> Cc: Dutt Sudeep <sudeep.dutt@intel.com> --- .../gpu/drm/i915/gt/intel_ring_submission.c | 134 +++++++++++++++++- 1 file changed, 130 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index f70b903a98bc..593710558b99 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -1360,7 +1360,9 @@ static int load_pd_dir(struct i915_request *rq, return rq->engine->emit_flush(rq, EMIT_FLUSH); } -static inline int mi_set_context(struct i915_request *rq, u32 flags) +static inline int mi_set_context(struct i915_request *rq, + struct intel_context *ce, + u32 flags) { struct drm_i915_private *i915 = rq->i915; struct intel_engine_cs *engine = rq->engine; @@ -1435,7 +1437,7 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags) *cs++ = MI_NOOP; *cs++ = MI_SET_CONTEXT; - *cs++ = i915_ggtt_offset(rq->context->state) | flags; + *cs++ = i915_ggtt_offset(ce->state) | flags; /* * w/a: MI_SET_CONTEXT must always be followed by MI_NOOP * WaMiSetContext_Hang:snb,ivb,vlv @@ -1550,13 +1552,56 @@ static int switch_mm(struct i915_request *rq, struct i915_address_space *vm) return rq->engine->emit_flush(rq, EMIT_INVALIDATE); } +static int clear_residuals(struct i915_request *rq) +{ + struct intel_engine_cs *engine = rq->engine; + int ret; + + GEM_BUG_ON(!engine->kernel_context->state); + + ret = switch_mm(rq, vm_alias(engine->kernel_context->vm)); + if (ret) + 
return ret; + + ret = mi_set_context(rq, + engine->kernel_context, + MI_MM_SPACE_GTT | MI_RESTORE_INHIBIT); + if (ret) + return ret; + + ret = engine->emit_bb_start(rq, + engine->wa_ctx.vma->node.start, 0, + 0); + if (ret) + return ret; + + ret = engine->emit_flush(rq, EMIT_FLUSH); + if (ret) + return ret; + + /* Always invalidate before the next switch_mm() */ + return engine->emit_flush(rq, EMIT_INVALIDATE); +} + static int switch_context(struct i915_request *rq) { + struct intel_engine_cs *engine = rq->engine; struct intel_context *ce = rq->context; + void **residuals = NULL; int ret; GEM_BUG_ON(HAS_EXECLISTS(rq->i915)); + if (engine->wa_ctx.vma && ce != engine->kernel_context) { + if (engine->wa_ctx.vma->private != ce) { + ret = clear_residuals(rq); + if (ret) + return ret; + + residuals = &engine->wa_ctx.vma->private; + } + } + ret = switch_mm(rq, vm_alias(ce->vm)); if (ret) return ret; @@ -1564,7 +1609,7 @@ static int switch_context(struct i915_request *rq) if (ce->state) { u32 flags; - GEM_BUG_ON(rq->engine->id != RCS0); + GEM_BUG_ON(engine->id != RCS0); /* For resource streamer on HSW+ and power context elsewhere */ BUILD_BUG_ON(HSW_MI_RS_SAVE_STATE_EN != MI_SAVE_EXT_STATE_EN); @@ -1576,7 +1621,7 @@ static int switch_context(struct i915_request *rq) else flags |= MI_RESTORE_INHIBIT; - ret = mi_set_context(rq, flags); + ret = mi_set_context(rq, ce, flags); if (ret) return ret; } @@ -1585,6 +1630,20 @@ static int switch_context(struct i915_request *rq) if (ret) return ret; + /* + * Now past the point of no return, this request _will_ be emitted. + * + * Or at least this preamble will be emitted, the request may be + * interrupted prior to submitting the user payload. If so, we + * still submit the "empty" request in order to preserve global + * state tracking such as this, our tracking of the current + * dirty context. 
+ */ + if (residuals) { + intel_context_put(*residuals); + *residuals = intel_context_get(ce); + } + return 0; } @@ -1769,6 +1828,11 @@ static void ring_release(struct intel_engine_cs *engine) intel_engine_cleanup_common(engine); + if (engine->wa_ctx.vma) { + intel_context_put(engine->wa_ctx.vma->private); + i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0); + } + intel_ring_unpin(engine->legacy.ring); intel_ring_put(engine->legacy.ring); @@ -1916,6 +1980,60 @@ static void setup_vecs(struct intel_engine_cs *engine) engine->emit_fini_breadcrumb = gen7_xcs_emit_breadcrumb; } +static int gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine, + struct i915_vma * const vma) +{ + return 0; +} + +static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine) +{ + struct drm_i915_gem_object *obj; + struct i915_vma *vma; + int size; + int err; + + size = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */); + if (size <= 0) + return size; + + size = ALIGN(size, PAGE_SIZE); + obj = i915_gem_object_create_internal(engine->i915, size); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + vma = i915_vma_instance(obj, engine->gt->vm, NULL); + if (IS_ERR(vma)) { + err = PTR_ERR(vma); + goto err_obj; + } + + vma->private = intel_context_create(engine); /* dummy residuals */ + if (IS_ERR(vma->private)) { + err = PTR_ERR(vma->private); + goto err_obj; + } + + err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH); + if (err) + goto err_private; + + err = gen7_ctx_switch_bb_setup(engine, vma); + if (err) + goto err_unpin; + + engine->wa_ctx.vma = vma; + return 0; + +err_unpin: + i915_vma_unpin(vma); +err_private: + intel_context_put(vma->private); +err_obj: + i915_gem_object_put(obj); + return err; +} + int intel_ring_submission_setup(struct intel_engine_cs *engine) { struct intel_timeline *timeline; @@ -1969,11 +2087,19 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine) GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma); + if (IS_GEN(engine->i915, 7) && 
engine->class == RENDER_CLASS) { + err = gen7_ctx_switch_bb_init(engine); + if (err) + goto err_ring_unpin; + } + /* Finally, take ownership and responsibility for cleanup! */ engine->release = ring_release; return 0; +err_ring_unpin: + intel_ring_unpin(ring); err_ring: intel_ring_put(ring); err_timeline_unpin: -- 2.20.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 20+ messages in thread
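gen7_ctx_switch_bb_init() in the patch above follows the kernel's standard goto-based unwind: each acquired resource gets a label, and a failure jumps to the label that releases exactly the resources acquired before it, in reverse order. A standalone sketch of that idiom with stubbed-out resources (all names here are hypothetical, not i915 code):

```c
/* Minimal model of reverse-order error unwinding: on failure, release
 * only what was successfully acquired, newest first. */
#include <assert.h>

static int released_obj, released_ctx;

static int create_obj(void) { return 0; }	/* always succeeds in this model */
static int create_ctx(void) { return 0; }
static int pin(int fail) { return fail ? -12 : 0; }	/* -12 stands in for -ENOMEM */

static void put_obj(void) { released_obj++; }
static void put_ctx(void) { released_ctx++; }

static int bb_init(int fail_pin)
{
	int err;

	err = create_obj();
	if (err)
		return err;		/* nothing acquired yet, plain return */

	err = create_ctx();
	if (err)
		goto err_obj;		/* only the object needs releasing */

	err = pin(fail_pin);
	if (err)
		goto err_ctx;		/* context and object need releasing */

	return 0;

err_ctx:
	put_ctx();
err_obj:
	put_obj();
	return err;
}
```

On the success path nothing is released; when the pin fails, both earlier acquisitions are undone, which is exactly the ordering err_unpin/err_private/err_obj enforce in the real function.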
* [RFC PATCH v2 1/2] drm/i915: Add mechanism to submit a context WA on ring submission @ 2020-01-14 17:45 Akeem G Abodunrin 2020-01-16 16:16 ` [PATCH " Mika Kuoppala 0 siblings, 1 reply; 20+ messages in thread From: Akeem G Abodunrin @ 2020-01-14 17:45 UTC (permalink / raw) To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi, d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt, daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson, prathap.kumar.valsan, mika.kuoppala, francesco.balestrieri From: Mika Kuoppala <mika.kuoppala@linux.intel.com> This patch adds framework to submit an arbitrary batchbuffer on each context switch to clear residual state for render engine on Gen7/7.5 devices. Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Cc: Kumar Valsan Prathap <prathap.kumar.valsan@intel.com> Cc: Chris Wilson <chris.p.wilson@intel.com> Cc: Balestrieri Francesco <francesco.balestrieri@intel.com> Cc: Bloomfield Jon <jon.bloomfield@intel.com> Cc: Dutt Sudeep <sudeep.dutt@intel.com> --- .../gpu/drm/i915/gt/intel_ring_submission.c | 102 +++++++++++++++++- 1 file changed, 99 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index bc44fe8e5ffa..204c450b7c42 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -1384,7 +1384,9 @@ static int load_pd_dir(struct i915_request *rq, return rq->engine->emit_flush(rq, EMIT_FLUSH); } -static inline int mi_set_context(struct i915_request *rq, u32 flags) +static inline int mi_set_context(struct i915_request *rq, + struct intel_context *ce, + u32 flags) { struct drm_i915_private *i915 = rq->i915; struct intel_engine_cs *engine = rq->engine; @@ -1459,7 +1461,7 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags) *cs++ = MI_NOOP; *cs++ = 
MI_SET_CONTEXT; - *cs++ = i915_ggtt_offset(rq->context->state) | flags; + *cs++ = i915_ggtt_offset(ce->state) | flags; /* * w/a: MI_SET_CONTEXT must always be followed by MI_NOOP * WaMiSetContext_Hang:snb,ivb,vlv @@ -1574,13 +1576,51 @@ static int switch_mm(struct i915_request *rq, struct i915_address_space *vm) return rq->engine->emit_flush(rq, EMIT_INVALIDATE); } +static int clear_residuals(struct i915_request *rq) +{ + struct intel_engine_cs *engine = rq->engine; + int ret; + + GEM_BUG_ON(!engine->kernel_context->state); + + ret = switch_mm(rq, vm_alias(engine->kernel_context)); + if (ret) + return ret; + + ret = mi_set_context(rq, + engine->kernel_context, + MI_MM_SPACE_GTT | MI_RESTORE_INHIBIT); + if (ret) + return ret; + + ret = engine->emit_bb_start(rq, + engine->wa_ctx.vma->node.start, 0, + 0); + if (ret) + return ret; + + ret = engine->emit_flush(rq, EMIT_FLUSH); + if (ret) + return ret; + + /* Always invalidate before the next switch_mm() */ + return engine->emit_flush(rq, EMIT_INVALIDATE); +} + static int switch_context(struct i915_request *rq) { + struct intel_engine_cs *engine = rq->engine; struct intel_context *ce = rq->context; int ret; GEM_BUG_ON(HAS_EXECLISTS(rq->i915)); + if (engine->wa_ctx.vma && ce != engine->kernel_context) { + ret = clear_residuals(rq); + if (ret) + return ret; + } + ret = switch_mm(rq, vm_alias(ce)); if (ret) return ret; @@ -1600,7 +1640,7 @@ static int switch_context(struct i915_request *rq) else flags |= MI_RESTORE_INHIBIT; - ret = mi_set_context(rq, flags); + ret = mi_set_context(rq, ce, flags); if (ret) return ret; } @@ -1792,6 +1832,8 @@ static void ring_release(struct intel_engine_cs *engine) intel_engine_cleanup_common(engine); + i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0); + intel_ring_unpin(engine->legacy.ring); intel_ring_put(engine->legacy.ring); @@ -1939,6 +1981,52 @@ static void setup_vecs(struct intel_engine_cs *engine) engine->emit_fini_breadcrumb = gen7_xcs_emit_breadcrumb; } +static int 
gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine, + struct i915_vma * const vma) +{ + return 0; +} + +static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine) +{ + struct drm_i915_gem_object *obj; + struct i915_vma *vma; + int size; + int err; + + size = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */); + if (size <= 0) + return size; + + size = ALIGN(size, PAGE_SIZE); + obj = i915_gem_object_create_internal(engine->i915, size); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + vma = i915_vma_instance(obj, engine->gt->vm, NULL); + if (IS_ERR(vma)) { + err = PTR_ERR(vma); + goto err_obj; + } + + err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH); + if (err) + goto err_obj; + + err = gen7_ctx_switch_bb_setup(engine, vma); + if (err) + goto err_unpin; + + engine->wa_ctx.vma = vma; + return 0; + +err_unpin: + i915_vma_unpin(vma); +err_obj: + i915_gem_object_put(obj); + return err; +} + int intel_ring_submission_setup(struct intel_engine_cs *engine) { struct intel_timeline *timeline; @@ -1992,11 +2080,19 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine) GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma); + if (IS_GEN(engine->i915, 7) && engine->class == RENDER_CLASS) { + err = gen7_ctx_switch_bb_init(engine); + if (err) + goto err_ring_unpin; + } + /* Finally, take ownership and responsibility for cleanup! */ engine->release = ring_release; return 0; +err_ring_unpin: + intel_ring_unpin(ring); err_ring: intel_ring_put(ring); err_timeline_unpin: -- 2.20.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 20+ messages in thread
* [PATCH 1/2] drm/i915: Add mechanism to submit a context WA on ring submission 2020-01-14 17:45 [RFC PATCH v2 " Akeem G Abodunrin @ 2020-01-16 16:16 ` Mika Kuoppala 2020-01-16 16:47 ` Mika Kuoppala 0 siblings, 1 reply; 20+ messages in thread From: Mika Kuoppala @ 2020-01-16 16:16 UTC (permalink / raw) To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi, d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt, daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson Cc: Balestrieri Francesco, Mika Kuoppala, Kumar Valsan Prathap This patch adds a framework to submit an arbitrary batchbuffer on each context switch, to clear residual state for the render engine on Gen7/7.5 devices. The idea of always emitting the context and vm setup around each request is primarily to make reset recovery easy, without requiring the ringbuffer to be rewritten. As each request would set up its own context, leaving it to the HW to notice and elide no-op context switches, we could restart the ring at any point, and reorder the requests freely. However, to avoid emitting clear_residuals() between consecutive requests in the ringbuffer of the same context, we do want to track the current context in the ring. In doing so, we need to be careful to only record a context switch when we are sure the next request will be emitted. 
v2: elide optimization patch squashed, courtesy of Chris Wilson Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Cc: Kumar Valsan Prathap <prathap.kumar.valsan@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Balestrieri Francesco <francesco.balestrieri@intel.com> Cc: Bloomfield Jon <jon.bloomfield@intel.com> Cc: Dutt Sudeep <sudeep.dutt@intel.com> --- .../gpu/drm/i915/gt/intel_ring_submission.c | 132 +++++++++++++++++- 1 file changed, 129 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index bc44fe8e5ffa..58500032c993 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -1384,7 +1384,9 @@ static int load_pd_dir(struct i915_request *rq, return rq->engine->emit_flush(rq, EMIT_FLUSH); } -static inline int mi_set_context(struct i915_request *rq, u32 flags) +static inline int mi_set_context(struct i915_request *rq, + struct intel_context *ce, + u32 flags) { struct drm_i915_private *i915 = rq->i915; struct intel_engine_cs *engine = rq->engine; @@ -1459,7 +1461,7 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags) *cs++ = MI_NOOP; *cs++ = MI_SET_CONTEXT; - *cs++ = i915_ggtt_offset(rq->context->state) | flags; + *cs++ = i915_ggtt_offset(ce->state) | flags; /* * w/a: MI_SET_CONTEXT must always be followed by MI_NOOP * WaMiSetContext_Hang:snb,ivb,vlv @@ -1574,13 +1576,56 @@ static int switch_mm(struct i915_request *rq, struct i915_address_space *vm) return rq->engine->emit_flush(rq, EMIT_INVALIDATE); } +static int clear_residuals(struct i915_request *rq) +{ + struct intel_engine_cs *engine = rq->engine; + int ret; + + GEM_BUG_ON(!engine->kernel_context->state); + + ret = switch_mm(rq, vm_alias(engine->kernel_context)); + if (ret) + return ret; + + ret = mi_set_context(rq, + engine->kernel_context, + MI_MM_SPACE_GTT 
| MI_RESTORE_INHIBIT); + if (ret) + return ret; + + ret = engine->emit_bb_start(rq, + engine->wa_ctx.vma->node.start, 0, + 0); + if (ret) + return ret; + + ret = engine->emit_flush(rq, EMIT_FLUSH); + if (ret) + return ret; + + /* Always invalidate before the next switch_mm() */ + return engine->emit_flush(rq, EMIT_INVALIDATE); +} + static int switch_context(struct i915_request *rq) { + struct intel_engine_cs *engine = rq->engine; struct intel_context *ce = rq->context; + void **residuals = NULL; int ret; GEM_BUG_ON(HAS_EXECLISTS(rq->i915)); + if (engine->wa_ctx.vma && ce != engine->kernel_context) { + if (engine->wa_ctx.vma->private != ce) { + ret = clear_residuals(rq); + if (ret) + return ret; + + residuals = &engine->wa_ctx.vma->private; + } + } + ret = switch_mm(rq, vm_alias(ce)); if (ret) return ret; @@ -1600,7 +1645,7 @@ static int switch_context(struct i915_request *rq) else flags |= MI_RESTORE_INHIBIT; - ret = mi_set_context(rq, flags); + ret = mi_set_context(rq, ce, flags); if (ret) return ret; } @@ -1609,6 +1654,20 @@ static int switch_context(struct i915_request *rq) if (ret) return ret; + /* + * Now past the point of no return, this request _will_ be emitted. + * + * Or at least this preamble will be emitted, the request may be + * interrupted prior to submitting the user payload. If so, we + * still submit the "empty" request in order to preserve global + * state tracking such as this, our tracking of the current + * dirty context. 
+ */ + if (residuals) { + intel_context_put(*residuals); + *residuals = intel_context_get(ce); + } + return 0; } @@ -1792,6 +1851,11 @@ static void ring_release(struct intel_engine_cs *engine) intel_engine_cleanup_common(engine); + if (engine->wa_ctx.vma) { + intel_context_put(engine->wa_ctx.vma->private); + i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0); + } + intel_ring_unpin(engine->legacy.ring); intel_ring_put(engine->legacy.ring); @@ -1939,6 +2003,60 @@ static void setup_vecs(struct intel_engine_cs *engine) engine->emit_fini_breadcrumb = gen7_xcs_emit_breadcrumb; } +static int gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine, + struct i915_vma * const vma) +{ + return 0; +} + +static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine) +{ + struct drm_i915_gem_object *obj; + struct i915_vma *vma; + int size; + int err; + + size = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */); + if (size <= 0) + return size; + + size = ALIGN(size, PAGE_SIZE); + obj = i915_gem_object_create_internal(engine->i915, size); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + vma = i915_vma_instance(obj, engine->gt->vm, NULL); + if (IS_ERR(vma)) { + err = PTR_ERR(vma); + goto err_obj; + } + + vma->private = intel_context_create(engine); /* dummy residuals */ + if (IS_ERR(vma->private)) { + err = PTR_ERR(vma->private); + goto err_obj; + } + + err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH); + if (err) + goto err_private; + + err = gen7_ctx_switch_bb_setup(engine, vma); + if (err) + goto err_unpin; + + engine->wa_ctx.vma = vma; + return 0; + +err_unpin: + i915_vma_unpin(vma); +err_private: + intel_context_put(vma->private); +err_obj: + i915_gem_object_put(obj); + return err; +} + int intel_ring_submission_setup(struct intel_engine_cs *engine) { struct intel_timeline *timeline; @@ -1992,11 +2110,19 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine) GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma); + if (IS_GEN(engine->i915, 7) && 
engine->class == RENDER_CLASS) { + err = gen7_ctx_switch_bb_init(engine); + if (err) + goto err_ring_unpin; + } + /* Finally, take ownership and responsibility for cleanup! */ engine->release = ring_release; return 0; +err_ring_unpin: + intel_ring_unpin(ring); err_ring: intel_ring_put(ring); err_timeline_unpin: -- 2.17.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH 1/2] drm/i915: Add mechanism to submit a context WA on ring submission 2020-01-16 16:16 ` [PATCH " Mika Kuoppala @ 2020-01-16 16:47 ` Mika Kuoppala 0 siblings, 0 replies; 20+ messages in thread From: Mika Kuoppala @ 2020-01-16 16:47 UTC (permalink / raw) To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi, d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt, daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson Cc: Balestrieri Francesco, Kumar Valsan Prathap Mika Kuoppala <mika.kuoppala@linux.intel.com> writes: >Subject: Re: [PATCH 1/2] drm/i915: Add mechanism to submit a context WA >on ring submission I forgot to add RFC to the patch subject. This should carry the RFC status, as it is v2 of an RFC patch. This patch squashes in Chris Wilson's ctx switch optimization; development is still ongoing. -Mika _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 20+ messages in thread
* [RFC PATCH 1/2] drm/i915: Add mechanism to submit a context WA on ring submission @ 2020-01-14 14:51 Akeem G Abodunrin 2020-01-16 16:12 ` [PATCH " Mika Kuoppala 0 siblings, 1 reply; 20+ messages in thread From: Akeem G Abodunrin @ 2020-01-14 14:51 UTC (permalink / raw) To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi, d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt, daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson, prathap.kumar.valsan, mika.kuoppala, francesco.balestrieri From: Mika Kuoppala <mika.kuoppala@linux.intel.com> This patch adds framework to submit an arbitrary batchbuffer on each context switch to clear residual state for render engine on Gen7/7.5 devices. Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Cc: Kumar Valsan Prathap <prathap.kumar.valsan@intel.com> Cc: Chris Wilson <chris.p.wilson@intel.com> Cc: Balestrieri Francesco <francesco.balestrieri@intel.com> Cc: Bloomfield Jon <jon.bloomfield@intel.com> Cc: Dutt Sudeep <sudeep.dutt@intel.com> --- .../gpu/drm/i915/gt/intel_ring_submission.c | 102 +++++++++++++++++- 1 file changed, 99 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index bc44fe8e5ffa..204c450b7c42 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -1384,7 +1384,9 @@ static int load_pd_dir(struct i915_request *rq, return rq->engine->emit_flush(rq, EMIT_FLUSH); } -static inline int mi_set_context(struct i915_request *rq, u32 flags) +static inline int mi_set_context(struct i915_request *rq, + struct intel_context *ce, + u32 flags) { struct drm_i915_private *i915 = rq->i915; struct intel_engine_cs *engine = rq->engine; @@ -1459,7 +1461,7 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags) *cs++ = MI_NOOP; *cs++ = 
MI_SET_CONTEXT; - *cs++ = i915_ggtt_offset(rq->context->state) | flags; + *cs++ = i915_ggtt_offset(ce->state) | flags; /* * w/a: MI_SET_CONTEXT must always be followed by MI_NOOP * WaMiSetContext_Hang:snb,ivb,vlv @@ -1574,13 +1576,51 @@ static int switch_mm(struct i915_request *rq, struct i915_address_space *vm) return rq->engine->emit_flush(rq, EMIT_INVALIDATE); } +static int clear_residuals(struct i915_request *rq) +{ + struct intel_engine_cs *engine = rq->engine; + int ret; + + GEM_BUG_ON(!engine->kernel_context->state); + + ret = switch_mm(rq, vm_alias(engine->kernel_context)); + if (ret) + return ret; + + ret = mi_set_context(rq, + engine->kernel_context, + MI_MM_SPACE_GTT | MI_RESTORE_INHIBIT); + if (ret) + return ret; + + ret = engine->emit_bb_start(rq, + engine->wa_ctx.vma->node.start, 0, + 0); + if (ret) + return ret; + + ret = engine->emit_flush(rq, EMIT_FLUSH); + if (ret) + return ret; + + /* Always invalidate before the next switch_mm() */ + return engine->emit_flush(rq, EMIT_INVALIDATE); +} + static int switch_context(struct i915_request *rq) { + struct intel_engine_cs *engine = rq->engine; struct intel_context *ce = rq->context; int ret; GEM_BUG_ON(HAS_EXECLISTS(rq->i915)); + if (engine->wa_ctx.vma && ce != engine->kernel_context) { + ret = clear_residuals(rq); + if (ret) + return ret; + } + ret = switch_mm(rq, vm_alias(ce)); if (ret) return ret; @@ -1600,7 +1640,7 @@ static int switch_context(struct i915_request *rq) else flags |= MI_RESTORE_INHIBIT; - ret = mi_set_context(rq, flags); + ret = mi_set_context(rq, ce, flags); if (ret) return ret; } @@ -1792,6 +1832,8 @@ static void ring_release(struct intel_engine_cs *engine) intel_engine_cleanup_common(engine); + i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0); + intel_ring_unpin(engine->legacy.ring); intel_ring_put(engine->legacy.ring); @@ -1939,6 +1981,52 @@ static void setup_vecs(struct intel_engine_cs *engine) engine->emit_fini_breadcrumb = gen7_xcs_emit_breadcrumb; } +static int 
gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine, + struct i915_vma * const vma) +{ + return 0; +} + +static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine) +{ + struct drm_i915_gem_object *obj; + struct i915_vma *vma; + int size; + int err; + + size = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */); + if (size <= 0) + return size; + + size = ALIGN(size, PAGE_SIZE); + obj = i915_gem_object_create_internal(engine->i915, size); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + vma = i915_vma_instance(obj, engine->gt->vm, NULL); + if (IS_ERR(vma)) { + err = PTR_ERR(vma); + goto err_obj; + } + + err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH); + if (err) + goto err_obj; + + err = gen7_ctx_switch_bb_setup(engine, vma); + if (err) + goto err_unpin; + + engine->wa_ctx.vma = vma; + return 0; + +err_unpin: + i915_vma_unpin(vma); +err_obj: + i915_gem_object_put(obj); + return err; +} + int intel_ring_submission_setup(struct intel_engine_cs *engine) { struct intel_timeline *timeline; @@ -1992,11 +2080,19 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine) GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma); + if (IS_GEN(engine->i915, 7) && engine->class == RENDER_CLASS) { + err = gen7_ctx_switch_bb_init(engine); + if (err) + goto err_ring_unpin; + } + /* Finally, take ownership and responsibility for cleanup! */ engine->release = ring_release; return 0; +err_ring_unpin: + intel_ring_unpin(ring); err_ring: intel_ring_put(ring); err_timeline_unpin: -- 2.20.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 20+ messages in thread
* [PATCH 1/2] drm/i915: Add mechanism to submit a context WA on ring submission 2020-01-14 14:51 [RFC PATCH " Akeem G Abodunrin @ 2020-01-16 16:12 ` Mika Kuoppala 0 siblings, 0 replies; 20+ messages in thread From: Mika Kuoppala @ 2020-01-16 16:12 UTC (permalink / raw) To: akeem.g.abodunrin, intel-gfx, dri-devel, omer.aran, pragyansri.pathi, d.scott.phillips, david.c.stewart, tony.luck, jon.bloomfield, sudeep.dutt, daniel.vetter, joonas.lahtinen, jani.nikula, chris.p.wilson Cc: Balestrieri Francesco, Mika Kuoppala, Kumar Valsan Prathap This patch adds a framework to submit an arbitrary batchbuffer on each context switch, to clear residual state for the render engine on Gen7/7.5 devices. The idea of always emitting the context and vm setup around each request is primarily to make reset recovery easy, without requiring the ringbuffer to be rewritten. As each request would set up its own context, leaving it to the HW to notice and elide no-op context switches, we could restart the ring at any point and reorder the requests freely. However, to avoid emitting clear_residuals() between consecutive requests of the same context in the ringbuffer, we do want to track the current context in the ring. In doing so, we need to be careful to only record a context switch when we are sure the next request will be emitted.
v2: elide optimization patch squashed, courtesy of Chris Wilson Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Cc: Kumar Valsan Prathap <prathap.kumar.valsan@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Balestrieri Francesco <francesco.balestrieri@intel.com> Cc: Bloomfield Jon <jon.bloomfield@intel.com> Cc: Dutt Sudeep <sudeep.dutt@intel.com> --- .../gpu/drm/i915/gt/intel_ring_submission.c | 132 +++++++++++++++++- 1 file changed, 129 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index bc44fe8e5ffa..58500032c993 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -1384,7 +1384,9 @@ static int load_pd_dir(struct i915_request *rq, return rq->engine->emit_flush(rq, EMIT_FLUSH); } -static inline int mi_set_context(struct i915_request *rq, u32 flags) +static inline int mi_set_context(struct i915_request *rq, + struct intel_context *ce, + u32 flags) { struct drm_i915_private *i915 = rq->i915; struct intel_engine_cs *engine = rq->engine; @@ -1459,7 +1461,7 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags) *cs++ = MI_NOOP; *cs++ = MI_SET_CONTEXT; - *cs++ = i915_ggtt_offset(rq->context->state) | flags; + *cs++ = i915_ggtt_offset(ce->state) | flags; /* * w/a: MI_SET_CONTEXT must always be followed by MI_NOOP * WaMiSetContext_Hang:snb,ivb,vlv @@ -1574,13 +1576,56 @@ static int switch_mm(struct i915_request *rq, struct i915_address_space *vm) return rq->engine->emit_flush(rq, EMIT_INVALIDATE); } +static int clear_residuals(struct i915_request *rq) +{ + struct intel_engine_cs *engine = rq->engine; + int ret; + + GEM_BUG_ON(!engine->kernel_context->state); + + ret = switch_mm(rq, vm_alias(engine->kernel_context)); + if (ret) + return ret; + + ret = mi_set_context(rq, + engine->kernel_context, + MI_MM_SPACE_GTT 
| MI_RESTORE_INHIBIT); + if (ret) + return ret; + + ret = engine->emit_bb_start(rq, + engine->wa_ctx.vma->node.start, 0, + 0); + if (ret) + return ret; + + ret = engine->emit_flush(rq, EMIT_FLUSH); + if (ret) + return ret; + + /* Always invalidate before the next switch_mm() */ + return engine->emit_flush(rq, EMIT_INVALIDATE); +} + static int switch_context(struct i915_request *rq) { + struct intel_engine_cs *engine = rq->engine; struct intel_context *ce = rq->context; + void **residuals = NULL; int ret; GEM_BUG_ON(HAS_EXECLISTS(rq->i915)); + if (engine->wa_ctx.vma && ce != engine->kernel_context) { + if (engine->wa_ctx.vma->private != ce) { + ret = clear_residuals(rq); + if (ret) + return ret; + + residuals = &engine->wa_ctx.vma->private; + } + } + ret = switch_mm(rq, vm_alias(ce)); if (ret) return ret; @@ -1600,7 +1645,7 @@ static int switch_context(struct i915_request *rq) else flags |= MI_RESTORE_INHIBIT; - ret = mi_set_context(rq, flags); + ret = mi_set_context(rq, ce, flags); if (ret) return ret; } @@ -1609,6 +1654,20 @@ static int switch_context(struct i915_request *rq) if (ret) return ret; + /* + * Now past the point of no return, this request _will_ be emitted. + * + * Or at least this preamble will be emitted, the request may be + * interrupted prior to submitting the user payload. If so, we + * still submit the "empty" request in order to preserve global + * state tracking such as this, our tracking of the current + * dirty context. 
+ */ + if (residuals) { + intel_context_put(*residuals); + *residuals = intel_context_get(ce); + } + return 0; } @@ -1792,6 +1851,11 @@ static void ring_release(struct intel_engine_cs *engine) intel_engine_cleanup_common(engine); + if (engine->wa_ctx.vma) { + intel_context_put(engine->wa_ctx.vma->private); + i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0); + } + intel_ring_unpin(engine->legacy.ring); intel_ring_put(engine->legacy.ring); @@ -1939,6 +2003,60 @@ static void setup_vecs(struct intel_engine_cs *engine) engine->emit_fini_breadcrumb = gen7_xcs_emit_breadcrumb; } +static int gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine, + struct i915_vma * const vma) +{ + return 0; +} + +static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine) +{ + struct drm_i915_gem_object *obj; + struct i915_vma *vma; + int size; + int err; + + size = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */); + if (size <= 0) + return size; + + size = ALIGN(size, PAGE_SIZE); + obj = i915_gem_object_create_internal(engine->i915, size); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + vma = i915_vma_instance(obj, engine->gt->vm, NULL); + if (IS_ERR(vma)) { + err = PTR_ERR(vma); + goto err_obj; + } + + vma->private = intel_context_create(engine); /* dummy residuals */ + if (IS_ERR(vma->private)) { + err = PTR_ERR(vma->private); + goto err_obj; + } + + err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH); + if (err) + goto err_private; + + err = gen7_ctx_switch_bb_setup(engine, vma); + if (err) + goto err_unpin; + + engine->wa_ctx.vma = vma; + return 0; + +err_unpin: + i915_vma_unpin(vma); +err_private: + intel_context_put(vma->private); +err_obj: + i915_gem_object_put(obj); + return err; +} + int intel_ring_submission_setup(struct intel_engine_cs *engine) { struct intel_timeline *timeline; @@ -1992,11 +2110,19 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine) GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma); + if (IS_GEN(engine->i915, 7) && 
engine->class == RENDER_CLASS) { + err = gen7_ctx_switch_bb_init(engine); + if (err) + goto err_ring_unpin; + } + /* Finally, take ownership and responsibility for cleanup! */ engine->release = ring_release; return 0; +err_ring_unpin: + intel_ring_unpin(ring); err_ring: intel_ring_put(ring); err_timeline_unpin: -- 2.17.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 20+ messages in thread