From: Chris Wilson <chris.p.wilson@intel.com>
To: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com>,
	intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH v3 1/1] drm/i915/ehl: Add sysfs interface to control class-of-service
Date: Mon, 07 Oct 2019 21:37:20 +0100	[thread overview]
Message-ID: <157048064079.5063.15601194209529004805@skylake-alporthouse-com> (raw)
In-Reply-To: <20191007165209.31797-2-prathap.kumar.valsan@intel.com>

Quoting Prathap Kumar Valsan (2019-10-07 17:52:09)
> To provide shared last-level-cache isolation to cpu workloads running
> concurrently with gpu workloads, the gpu allocation of cache lines needs
> to be restricted to certain ways. Currently GPU hardware supports four
> class-of-service(CLOS) levels and there is an associated way-mask for
> each CLOS. Each LLC MOCS register has a field to select the clos level.
> So in order to globally set the gpu to a clos level, the driver needs
> to program the entire MOCS table.
> 
> Hardware supports reading the supported way-mask configurations for the
> GPU using a bios pcode interface. The sysfs interface has two
> files--llc_clos_modes and llc_clos. The file llc_clos_modes is
> read-only and lists the available way masks. The file llc_clos is
> read/write and shows the currently active way mask; writing a new way
> mask updates the active way mask of the gpu.
> 
> Note of Caution: Restricting cache ways using this mechanism presents a
> larger attack surface for side-channel attacks.
> 
> Example usage:
> > cat /sys/class/drm/card0/llc_clos_modes
> 0xfff 0xfc0 0xc00 0x800
> 
> >cat /sys/class/drm/card0/llc_clos
> 0xfff
> 
> Update to new clos
> echo "0x800" > /sys/class/drm/card0/llc_clos

So the first question is whether this is global on the device or local
to the GT.
 
> Signed-off-by: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_lrc.c     |   7 +
>  drivers/gpu/drm/i915/gt/intel_lrc_reg.h |   1 +
>  drivers/gpu/drm/i915/gt/intel_mocs.c    | 216 +++++++++++++++++++++++-
>  drivers/gpu/drm/i915/gt/intel_mocs.h    |   6 +-
>  drivers/gpu/drm/i915/i915_drv.h         |   8 +
>  drivers/gpu/drm/i915/i915_reg.h         |   1 +
>  drivers/gpu/drm/i915/i915_sysfs.c       | 100 +++++++++++
>  7 files changed, 337 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index 468438fb47af..054051969d00 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -2137,6 +2137,13 @@ __execlists_update_reg_state(const struct intel_context *ce,
>                         intel_sseu_make_rpcs(engine->i915, &ce->sseu);
>  
>                 i915_oa_init_reg_state(ce, engine);
> +               /*
> +                * Gen11+ wants to support update of LLC class-of-service via
> +                * sysfs interface. CLOS is defined in MOCS registers and for
> +                * Gen11, MOCS is part of context register state.
> +                */
> +               if (IS_GEN(engine->i915, 11))
> +                       intel_mocs_init_reg_state(ce);

Do the filtering in intel_mocs_init_reg_state(). The intel_ prefix says
it should be common to all.

Not IS_ELKHARTLAKE(), as the subject implies?
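A minimal sketch of that suggestion (hypothetical; the helper's body is
elided) would move the platform check into the helper so every caller
stays generation-agnostic:

```c
/*
 * Hypothetical sketch: intel_mocs_init_reg_state() filters internally,
 * so callers need no IS_GEN() check. Only Gen11 keeps the LLC MOCS
 * registers in the context image, so everyone else returns early.
 */
void intel_mocs_init_reg_state(const struct intel_context *ce)
{
	if (!IS_GEN(ce->engine->i915, 11))
		return;

	/* ... program ce->lrc_reg_state[ctx_mocsN(...)] as before ... */
}
```

The call site in __execlists_update_reg_state() then becomes an
unconditional intel_mocs_init_reg_state(ce).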

> +static int
> +mocs_store_clos(struct i915_request *rq,
> +               struct intel_context *ce)
> +{
> +       struct drm_i915_mocs_table t;
> +       unsigned int count, active_clos, index;
> +       u32 offset;
> +       u32 value;
> +       u32 *cs;
> +
> +       if (!get_mocs_settings(rq->engine->gt, &t))
> +               return -ENODEV;
> +
> +       count = t.n_entries;
> +       active_clos = rq->i915->clos.active_clos;
> +       cs = intel_ring_begin(rq, 4 * count);
> +       if (IS_ERR(cs))
> +               return PTR_ERR(cs);
> +
> +       offset = i915_ggtt_offset(ce->state) + LRC_STATE_PN * PAGE_SIZE;
> +
> +       for (index = 0; index < count; index++) {
> +               value = ce->lrc_reg_state[ctx_mocsN(index)];

Their context image is volatile at this point. They are pinned and
active. If you must do a rmw, do it on the GPU. But do we not know the
full value (as I don't expect it to be nonpriv)? [If doing rmw on the
GPU, we'd probably have to use a secure batch here to avoid running out
of ringspace.]
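If the full value is known, the loop could be built purely from the
driver's own table instead of reading back the live image. A
hypothetical sketch, assuming the per-entry control value is exposed by
struct drm_i915_mocs_table as filled in by get_mocs_settings():

```c
/*
 * Hypothetical sketch: derive each MOCS value from the driver's static
 * table rather than rmw of the volatile lrc_reg_state, then patch only
 * the class-of-service field before emitting the GGTT store.
 */
for (index = 0; index < count; index++) {
	u32 value = t.table[index].control_value; /* assumed field name */

	value &= ~LE_COS_MASK;
	value |= FIELD_PREP(LE_COS_MASK, active_clos);

	*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
	*cs++ = offset + ctx_mocsN(index) * sizeof(u32);
	*cs++ = 0;
	*cs++ = value;
}
```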

> +               value &= ~LE_COS_MASK;
> +               value |= FIELD_PREP(LE_COS_MASK, active_clos);
> +
> +               *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
> +               *cs++ = offset + ctx_mocsN(index) * sizeof(uint32_t);
> +               *cs++ = 0;
> +               *cs++ = value;
> +       }
> +
> +       intel_ring_advance(rq, cs);
> +
> +       return 0;
> +}
> +
> +static int modify_context_mocs(struct intel_context *ce)
> +{
> +       struct i915_request *rq;
> +       int err;
> +
> +       lockdep_assert_held(&ce->pin_mutex);
> +
> +       rq = i915_request_create(ce->engine->kernel_context);
> +       if (IS_ERR(rq))
> +               return PTR_ERR(rq);
> +
> +       /* Serialise with the remote context */
> +       err = intel_context_prepare_remote_request(ce, rq);
> +       if (err == 0)
> +               err = mocs_store_clos(rq, ce);
> +
> +       i915_request_add(rq);
> +       return err;
> +}
> +
> +static int intel_mocs_configure_context(struct i915_gem_context *ctx)
> +{
> +       struct i915_gem_engines_iter it;
> +       struct intel_context *ce;
> +       int err = 0;
> +
> +       for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
> +               GEM_BUG_ON(ce == ce->engine->kernel_context);
> +
> +               if (ce->engine->class != RENDER_CLASS)
> +                       continue;
> +
> +               err = intel_context_lock_pinned(ce);
> +               if (err)
> +                       break;
> +
> +               if (intel_context_is_pinned(ce))
> +                       err = modify_context_mocs(ce);
> +
> +               intel_context_unlock_pinned(ce);
> +               if (err)
> +                       break;
> +       }
> +       i915_gem_context_unlock_engines(ctx);
> +
> +       return err;
> +}
> +
> +static int intel_mocs_configure_all_contexts(struct intel_gt *gt)

So this is on the GT,

> +{
> +       struct drm_i915_private *i915 = gt->i915;
> +       struct intel_engine_cs *engine;
> +       struct i915_gem_context *ctx;
> +       enum intel_engine_id id;
> +       int err;
> +
> +       lockdep_assert_held(&i915->drm.struct_mutex);

Wrong lock entirely; you need the contexts lock for walking
i915->gem.contexts.list.
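A hypothetical sketch of what that walk could look like (assuming
i915->gem.contexts.lock is the spinlock guarding the list; a real
version must also cope with contexts vanishing while the lock is
dropped):

```c
/*
 * Hypothetical sketch only: take a reference under the list lock,
 * drop the lock for the sleeping update, then reacquire it. Resuming
 * the iteration after dropping the lock is only safe because the
 * reference held here keeps the node on the list.
 */
spin_lock(&i915->gem.contexts.lock);
list_for_each_entry(ctx, &i915->gem.contexts.list, link) {
	if (!kref_get_unless_zero(&ctx->ref))
		continue; /* already being destroyed */

	spin_unlock(&i915->gem.contexts.lock);
	err = intel_mocs_configure_context(ctx);
	spin_lock(&i915->gem.contexts.lock);

	i915_gem_context_put(ctx);
	if (err)
		break;
}
spin_unlock(&i915->gem.contexts.lock);
```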

> +       /*
> +        * MOCS registers of render engine are context saved and restored to and
> +        * from a context image.
> +        * So for any MOCS update to reflect on the existing contexts requires
> +        * updating the context image.
> +        */
> +       list_for_each_entry(ctx, &i915->gem.contexts.list, link) {
> +               if (ctx == i915->kernel_context)
> +                       continue;
> +
> +               err = intel_mocs_configure_context(ctx);

but this is global,

> +               if (err)
> +                       return err;
> +       }
> +
> +       /*
> +        * After updating all other contexts, update render context image of
> +        * kernel context. Also update the MOCS of non-render engines.
> +        */
> +
> +       for_each_engine(engine, i915, id) {

and here again you are on the gt.

> +               struct i915_request *rq;
> +               struct drm_i915_mocs_table t;
> +
> +               rq = i915_request_create(engine->kernel_context);
> +               if (IS_ERR(rq))
> +                       return PTR_ERR(rq);
> +
> +               get_mocs_settings(rq->engine->gt, &t);
> +               err = emit_mocs_control_table(rq, &t);
> +               if (err) {
> +                       i915_request_skip(rq, err);
> +                       i915_request_add(rq);
> +                       return err;
> +               }
> +
> +               i915_request_add(rq);
> +       }
> +
> +       return 0;
> +}

> diff --git a/drivers/gpu/drm/i915/gt/intel_mocs.h b/drivers/gpu/drm/i915/gt/intel_mocs.h
> index 2ae816b7ca19..65566457408e 100644
> --- a/drivers/gpu/drm/i915/gt/intel_mocs.h
> +++ b/drivers/gpu/drm/i915/gt/intel_mocs.h
> @@ -49,13 +49,17 @@
>   * context handling keep the MOCS in step.
>   */
>  
> -struct i915_request;
>  struct intel_engine_cs;
> +struct intel_context;
> +struct i915_request;
>  struct intel_gt;

Our OCD is to keep these in alphabetical order.
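i.e. the forward declarations would read:

```c
struct i915_request;
struct intel_context;
struct intel_engine_cs;
struct intel_gt;
```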
-Chris


Thread overview: 6+ messages
2019-10-07 16:52 [PATCH v3 0/1] Add sysfs interface to control class-of-service Prathap Kumar Valsan
2019-10-07 16:52 ` [PATCH v3 1/1] drm/i915/ehl: " Prathap Kumar Valsan
2019-10-07 20:37   ` Chris Wilson [this message]
2019-10-15  7:38     ` Kumar Valsan, Prathap
2019-10-07 16:52 ` ✗ Fi.CI.SPARSE: warning for " Patchwork
2019-10-07 17:15 ` ✗ Fi.CI.BAT: failure " Patchwork
