intel-gfx.lists.freedesktop.org archive mirror
* [Intel-gfx] [PATCH 00/16] Enable GuC based power management features
@ 2021-07-10  1:20 Vinay Belgaumkar
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 01/16] drm/i915/guc: Squashed patch - DO NOT REVIEW Vinay Belgaumkar
                   ` (18 more replies)
  0 siblings, 19 replies; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

This series enables the Single Loop Power Control (SLPC) feature in GuC.
GuC implements various power management algorithms as part of its
operation. These need to be specifically enabled by the KMD. They replace
the legacy host-based management of these features.

With this series, we will enable two PM features - GTPerf and GuCRC. These
are the Turbo and RC6 equivalents of the host-based versions. GuC provides
various interfaces via host-to-GuC messaging, which allow the KMD to enable
these features after GuC is loaded and GuC submission is enabled. We will
specifically disable the IA/GT Balancer and Duty Cycle control features in
SLPC.

To enable GTPerf, the KMD sends a specific h2g message after setting up
some shared data structures. As part of this, we will gate host RPS as
well. GuC takes over the duties of requesting frequencies by monitoring
GPU busyness. We can influence what GuC requests by modifying the min
and max frequencies used by SLPC through the sysfs interfaces previously
exposed by legacy Turbo. SLPC typically requests the efficient frequency
instead of the minimum frequency to optimize performance. It also does not
necessarily stick to the platform max, and can request frequencies that are
much higher, since pcode will ultimately grant the appropriate values.
However, we will force it to adhere to the platform min and max values so
as to maintain legacy behavior. SLPC does not have the concept of waitboost,
so the boost_freq sysfs entry will show a value of '0' for now. A patch is
forthcoming to ensure the interface is not exposed when SLPC is enabled.
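
A rough sketch of the min/max clamping described above (hypothetical names
only, not the actual SLPC code; the min()/max() macros are the only real
kernel API used):

static int sketch_slpc_apply_platform_limits(struct sketch_slpc *slpc,
                                             u32 rp0_freq, u32 rpn_freq)
{
        int err;

        /* Never let SLPC request below the platform minimum (RPn)... */
        err = sketch_slpc_set_min_freq(slpc,
                        max(slpc->min_freq_softlimit, rpn_freq));
        if (err)
                return err;

        /* ...or above the platform maximum (RP0), to keep legacy behavior. */
        return sketch_slpc_set_max_freq(slpc,
                        min(slpc->max_freq_softlimit, rp0_freq));
}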

GuCRC is enabled similarly through an h2g message. We still need to enable
the RC6 feature bit (GEN6_RC_CTL_RC6_ENABLE) before we send this out.
Render/Media power gating still needs to be enabled by the host as before.
GuC will take care of setting up the hysteresis values for RC6; the host
does not need to set these up anymore.
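
A hedged sketch of that ordering (GEN6_RC_CONTROL, GEN6_RC_CTL_RC6_ENABLE
and intel_uncore_write() are real; sketch_guc_rc_enable_h2g() is a
placeholder for the H2G action added by the GuCRC patch):

static int sketch_guc_rc_enable(struct intel_uncore *uncore,
                                struct intel_guc *guc)
{
        /* Host still owns the RC6 enable bit and render/media power gating. */
        intel_uncore_write(uncore, GEN6_RC_CONTROL, GEN6_RC_CTL_RC6_ENABLE);

        /* GuC programs the RC6 hysteresis itself once this H2G is processed. */
        return sketch_guc_rc_enable_h2g(guc);
}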

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>

Matthew Brost (1):
  drm/i915/guc: Squashed patch - DO NOT REVIEW

Vinay Belgaumkar (15):
  drm/i915/guc/slpc: Initial definitions for slpc
  drm/i915/guc/slpc: Gate Host RPS when slpc is enabled
  drm/i915/guc/slpc: Lay out slpc init/enable/disable/fini
  drm/i915/guc/slpc: Adding slpc communication interfaces
  drm/i915/guc/slpc: Allocate, initialize and release slpc
  drm/i915/guc/slpc: Enable slpc and add related H2G events
  drm/i915/guc/slpc: Add methods to set min/max frequency
  drm/i915/guc/slpc: Add get max/min freq hooks
  drm/i915/guc/slpc: Add debugfs for slpc info
  drm/i915/guc/slpc: Enable ARAT timer interrupt
  drm/i915/guc/slpc: Cache platform frequency limits for slpc
  drm/i915/guc/slpc: Update slpc to use platform min/max
  drm/i915/guc/slpc: Sysfs hooks for slpc
  drm/i915/guc/slpc: slpc selftest
  drm/i915/guc/rc: Setup and enable GUCRC feature

 drivers/gpu/drm/i915/Makefile                 |    3 +
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |   21 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.h   |    1 +
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      |    3 +-
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |    6 +-
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c   |   41 +-
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.h   |   14 +-
 .../gpu/drm/i915/gt/intel_breadcrumbs_types.h |    7 +
 drivers/gpu/drm/i915/gt/intel_context.c       |   50 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |   50 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |   54 +
 drivers/gpu/drm/i915/gt/intel_engine.h        |   54 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  182 +-
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  |   71 +-
 .../gpu/drm/i915/gt/intel_engine_heartbeat.h  |    4 +
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |   12 +-
 .../drm/i915/gt/intel_execlists_submission.c  |   95 +-
 .../drm/i915/gt/intel_execlists_submission.h  |    4 -
 drivers/gpu/drm/i915/gt/intel_gt.c            |   23 +-
 drivers/gpu/drm/i915/gt/intel_gt.h            |    2 +
 drivers/gpu/drm/i915/gt/intel_gt_pm.c         |    6 +-
 drivers/gpu/drm/i915/gt/intel_gt_requests.c   |   23 +-
 drivers/gpu/drm/i915/gt/intel_gt_requests.h   |    9 +-
 drivers/gpu/drm/i915/gt/intel_lrc_reg.h       |    1 -
 drivers/gpu/drm/i915/gt/intel_rc6.c           |   22 +-
 drivers/gpu/drm/i915/gt/intel_reset.c         |   50 +-
 .../gpu/drm/i915/gt/intel_ring_submission.c   |   48 +
 drivers/gpu/drm/i915/gt/intel_rps.c           |  160 ++
 drivers/gpu/drm/i915/gt/intel_rps.h           |    5 +
 drivers/gpu/drm/i915/gt/intel_workarounds.c   |   46 +-
 .../gpu/drm/i915/gt/intel_workarounds_types.h |    1 +
 drivers/gpu/drm/i915/gt/mock_engine.c         |   41 +-
 drivers/gpu/drm/i915/gt/selftest_context.c    |   10 +
 .../drm/i915/gt/selftest_engine_heartbeat.c   |   22 +
 .../drm/i915/gt/selftest_engine_heartbeat.h   |    2 +
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |   12 +-
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |  314 +-
 drivers/gpu/drm/i915/gt/selftest_mocs.c       |   50 +-
 drivers/gpu/drm/i915/gt/selftest_slpc.c       |  333 +++
 drivers/gpu/drm/i915/gt/selftest_slpc.h       |   12 +
 .../gpu/drm/i915/gt/selftest_workarounds.c    |  132 +-
 .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h  |   21 +
 .../gt/uc/abi/guc_communication_ctb_abi.h     |    3 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc.c        |   99 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc.h        |  114 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c    |  460 ++-
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h    |    3 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c     |  368 ++-
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h     |   28 +-
 .../gpu/drm/i915/gt/uc/intel_guc_debugfs.c    |   41 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h   |   90 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c     |   79 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h     |   32 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c   |  606 ++++
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h   |   47 +
 .../gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h  |  255 ++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 2528 +++++++++++++++--
 .../gpu/drm/i915/gt/uc/intel_guc_submission.h |   33 +-
 drivers/gpu/drm/i915/gt/uc/intel_uc.c         |  126 +-
 drivers/gpu/drm/i915/gt/uc/intel_uc.h         |   14 +
 drivers/gpu/drm/i915/i915_debugfs.c           |    2 +
 drivers/gpu/drm/i915/i915_debugfs_params.c    |   31 +
 drivers/gpu/drm/i915/i915_gem_evict.c         |    1 +
 drivers/gpu/drm/i915/i915_gpu_error.c         |   25 +-
 drivers/gpu/drm/i915/i915_pmu.c               |    2 +-
 drivers/gpu/drm/i915/i915_reg.h               |    4 +
 drivers/gpu/drm/i915/i915_request.c           |  168 +-
 drivers/gpu/drm/i915/i915_request.h           |   21 +
 drivers/gpu/drm/i915/i915_scheduler.c         |    9 +-
 drivers/gpu/drm/i915/i915_scheduler.h         |   10 +-
 drivers/gpu/drm/i915/i915_scheduler_types.h   |   10 +
 drivers/gpu/drm/i915/i915_sysfs.c             |   71 +-
 drivers/gpu/drm/i915/i915_trace.h             |  207 +-
 .../drm/i915/selftests/i915_live_selftests.h  |    1 +
 drivers/gpu/drm/i915/selftests/i915_request.c |    4 +-
 .../gpu/drm/i915/selftests/igt_flush_test.c   |    2 +-
 .../gpu/drm/i915/selftests/igt_live_test.c    |    2 +-
 .../i915/selftests/intel_scheduler_helpers.c  |   89 +
 .../i915/selftests/intel_scheduler_helpers.h  |   35 +
 .../gpu/drm/i915/selftests/mock_gem_device.c  |    3 +-
 80 files changed, 6705 insertions(+), 935 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/selftest_slpc.c
 create mode 100644 drivers/gpu/drm/i915/gt/selftest_slpc.h
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h

-- 
2.25.0


* [Intel-gfx] [PATCH 01/16] drm/i915/guc: Squashed patch - DO NOT REVIEW
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 02/16] drm/i915/guc/slpc: Initial definitions for slpc Vinay Belgaumkar
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: Fernando Pacheco, Rahul Kumar Singh

From: Matthew Brost <matthew.brost@intel.com>

Squashed patches needed for CI to execute. These
patches are already under review in separate
series (DO NOT REVIEW WITH THIS SERIES):

https://patchwork.freedesktop.org/series/89844/
https://patchwork.freedesktop.org/series/91417/

Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Improve error message for unsolicited CT response

Improve the error message printed when an unsolicited CT response is
received by printing the fence that couldn't be found, the last fence, and
all requests with an outstanding response.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

drm/i915/guc: Increase size of CTB buffers

With the introduction of non-blocking CTBs more than one CTB can be in
flight at a time. Increasing the size of the CTBs should reduce how
often software hits the case where no space is available in the CTB
buffer.

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

drm/i915/guc: Add non blocking CTB send function

Add a non-blocking CTB send function, intel_guc_send_nb. GuC submission
will send CTBs in the critical path and does not need to wait for these
CTBs to complete before moving on, hence the need for this new function.

The non-blocking CTB now must have a flow control mechanism to ensure
the buffer isn't overrun. A lazy spin wait is used as we believe the
flow control condition should be rare with a properly sized buffer.

The function, intel_guc_send_nb, is exported in this patch but unused.
Several patches later in the series make use of this function.
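
A minimal sketch of the lazy flow control described above (names are
illustrative; msleep_interruptible(), referenced in the v2 note below, is
the real kernel API from <linux/delay.h>):

static int sketch_wait_for_ct_space(struct sketch_ctb *ctb, u32 len_dw)
{
        long retries = 10;      /* arbitrary bound, sketch only */

        while (sketch_ctb_space(ctb) < len_dw) {
                if (!retries--)
                        return -EBUSY;  /* caller decides how to back off */
                if (msleep_interruptible(1))
                        return -EINTR;  /* interrupted while sleeping */
        }

        return 0;
}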

v2:
 (Michal)
  - Use define for H2G room calculations
  - Move INTEL_GUC_SEND_NB define
 (Daniel Vetter)
  - Use msleep_interruptible rather than cond_resched
v3:
 (Michal)
  - Move includes to following patch
  - s/INTEL_GUC_SEND_NB/INTEL_GUC_CT_SEND_NB/g
v4:
 (John H)
  - Update comment, add type local variable

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

drm/i915/guc: Add stall timer to non blocking CTB send function

Implement a stall timer which fails H2G CTBs once a period of time
passes with no forward progress, in order to prevent deadlock.
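
A hedged sketch of the stall detection (field and constant names are
illustrative; jiffies, time_after() and msecs_to_jiffies() are real kernel
primitives):

#define SKETCH_STALL_TIMEOUT_MS 1500    /* illustrative value only */

static bool sketch_ct_deadlocked(struct sketch_ctb *ctb)
{
        /* Any forward progress (space freed by the GuC) resets the clock. */
        if (sketch_ctb_space(ctb) > ctb->last_seen_space) {
                ctb->last_seen_space = sketch_ctb_space(ctb);
                ctb->last_progress = jiffies;
                return false;
        }

        /* No progress for a full stall period: fail the send (-EPIPE). */
        return time_after(jiffies, ctb->last_progress +
                          msecs_to_jiffies(SKETCH_STALL_TIMEOUT_MS));
}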

v2:
 (Michal)
  - Improve error message in ct_deadlock()
  - Set broken when ct_deadlock() returns true
  - Return -EPIPE on ct_deadlock()
v3:
 (Michal)
  - Add ms to stall timer comment
 (Matthew)
  - Move broken check to intel_guc_ct_send()

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

drm/i915/guc: Optimize CTB writes and reads

CTB writes are now in the path of command submission and should be
optimized for performance. Rather than reading CTB descriptor values
(e.g. head, tail) which could result in accesses across the PCIe bus,
store shadow local copies and only read/write the descriptor values when
absolutely necessary. Also store the current space in each channel
locally.
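
A hedged sketch of the shadowing idea (wrap handling, locking and error
checks omitted; WRITE_ONCE() and memcpy() are real, everything else is
illustrative):

static void sketch_ctb_write(struct sketch_ctb *ctb, const u32 *msg, u32 len_dw)
{
        u32 tail = ctb->tail;           /* shadow copy, no read across PCIe */

        /* Assumes the message fits before the wrap point. */
        memcpy(ctb->cmds + tail, msg, len_dw * sizeof(u32));
        tail = (tail + len_dw) % ctb->size;

        ctb->tail = tail;               /* keep the shadow current */
        ctb->space -= len_dw;           /* track free space locally too */

        /* Single write to the shared descriptor to publish the new tail. */
        WRITE_ONCE(ctb->desc->tail, tail);
}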

v2:
 (Michal)
  - Add additional sanity checks for head / tail pointers
  - Use GUC_CTB_HDR_LEN rather than magic 1
v3:
 (Michal / John H)
  - Drop redundant check of head value
v4:
 (John H)
  - Drop redundant checks of tail / head values
v5:
 (Michal)
  - Address more nits
v6:
 (Michal)
  - Add GEM_BUG_ON sanity check on ctb->space

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

drm/i915/guc: Module load failure test for CT buffer creation

Add several module failure load inject points in the CT buffer creation
code path.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

drm/i915/guc: Add new GuC interface defines and structures

Add new GuC interface defines and structures while maintaining old ones
in parallel.

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

drm/i915/guc: Remove GuC stage descriptor, add LRC descriptor

Remove old GuC stage descriptor, add LRC descriptor which will be used
by the new GuC interface implemented in this patch series.

v2:
 (John Harrison)
  - s/lrc/LRC/g

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

drm/i915/guc: Add LRC descriptor context lookup array

Add an LRC descriptor context lookup array which can resolve the
intel_context from the LRC descriptor index. In addition to the lookup, it
can determine whether the LRC descriptor context is currently registered
with the GuC by checking if an entry for a descriptor index is present.
Future patches in the series will make use of this array.
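
A hedged sketch of the lookup, keyed by the LRC descriptor index (the
xarray calls are the real <linux/xarray.h> API; the wrappers are
illustrative):

static struct intel_context *
sketch_lookup_context(struct xarray *context_lookup, u32 desc_idx)
{
        /* A present entry means this descriptor is registered with the GuC. */
        return xa_load(context_lookup, desc_idx);
}

static int sketch_register_context(struct xarray *context_lookup,
                                   u32 desc_idx, struct intel_context *ce)
{
        return xa_err(xa_store(context_lookup, desc_idx, ce, GFP_KERNEL));
}

static void sketch_deregister_context(struct xarray *context_lookup, u32 desc_idx)
{
        xa_erase(context_lookup, desc_idx);
}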

v2:
 (Michal)
  - "linux/xarray.h" -> <linux/xarray.h>
  - s/lrc/LRC

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

drm/i915/guc: Implement GuC submission tasklet

Implement the GuC submission tasklet for the new interface. The new GuC
interface uses H2G to submit contexts to the GuC. Since H2G uses a single
channel, a single tasklet is used for the submission path.

Also, the per-engine interrupt handler has been updated to disable the
rescheduling of the physical engine tasklet, when using GuC scheduling,
as the physical engine tasklet is no longer used.

In this patch, the guc_id field has been added to intel_context but is
not yet assigned. Patches later in the series will assign this value.

v2:
 (John Harrison)
  - Clean up some comments

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Add bypass tasklet submission path to GuC

Add a bypass tasklet submission path to GuC. The tasklet is only used if
the H2G channel has backpressure.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

drm/i915/guc: Implement GuC context operations for new interface

Implement GuC context operations, which include the GuC-specific alloc,
pin, unpin, and destroy operations.

v2:
 (Daniel Vetter)
  - Use msleep_interruptible rather than cond_resched in busy loop
 (Michal)
  - Remove C++ style comment

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Insert fence on context when deregistering

Sometimes during context pinning, a context with the same guc_id is
already registered with the GuC. In this case, a deregister must be done
before the context can be registered. A fence is inserted on all requests
while the deregister is in flight. Once the G2H is received indicating
that the deregistration is complete, the context is registered and the
fence is released.

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Defer context unpin until scheduling is disabled

With GuC scheduling, it isn't safe to unpin a context while scheduling
is enabled for that context as the GuC may touch some of the pinned
state (e.g. LRC). To ensure scheduling isn't enabled when an unpin is
done, a callback is added to intel_context_unpin when pin count == 1
to disable scheduling for that context. When the response CTB is
received, it is safe to do the final unpin.

Future patches may add a heuristic / delay to schedule the disable
callback to avoid thrashing on schedule enable / disable.

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Disable engine barriers with GuC during unpin

Disable engine barriers for unpinning with GuC. This feature isn't
needed with the GuC as it disables context scheduling before unpinning
which guarantees the HW will not reference the context. Hence it is
not necessary to defer unpinning until a kernel context request
completes on each engine in the context engine mask.

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

drm/i915/guc: Extend deregistration fence to schedule disable

Extend the deregistration context fence to also fence when a GuC context
has a scheduling disable pending.

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915: Disable preempt busywait when using GuC scheduling

Disable preempt busywait when using GuC scheduling. This isn't needed as
the GuC controls preemption when scheduling.

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Ensure request ordering via completion fences

If two requests are on the same ring, they are explicitly ordered by the
HW, so a submission fence is sufficient to ensure ordering when using
the new GuC submission interface. Conversely, if two requests share a
timeline and are on the same physical engine but in different contexts,
this doesn't ensure ordering on the new GuC submission interface. So, a
completion fence needs to be used to ensure ordering.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Disable semaphores when using GuC scheduling

Semaphores are an optimization and not required for basic GuC submission
to work properly. Disable them until we have time to implement and tune
semaphores for performance. Also, the long-term direction is to delete
semaphores from the i915 entirely, which is another reason not to enable
them for GuC submission.

v2: Reword commit message

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Ensure G2H response has space in buffer

Ensure the G2H response has space in the buffer before sending the H2G
CTB, as the GuC can't handle any backpressure on the G2H interface.
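
A hedged sketch of the credit accounting (names are illustrative; the real
helper is the g2h_release_space mentioned in the v3 note below; locking is
omitted):

static int sketch_reserve_g2h_space(struct sketch_ctb *g2h, u32 response_len)
{
        if (g2h->space < response_len)
                return -EBUSY;          /* no room for the response; retry later */

        g2h->space -= response_len;     /* credit taken before the H2G goes out */
        return 0;
}

static void sketch_release_g2h_space(struct sketch_ctb *g2h, u32 response_len)
{
        g2h->space += response_len;     /* credit returned once the G2H is consumed */
}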

v2:
 (Matthew)
  - s/INTEL_GUC_SEND/INTEL_GUC_CT_SEND
v3:
 (Matthew)
  - Add G2H credit accounting to blocking path, add g2h_release_space
    helper

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Update intel_gt_wait_for_idle to work with GuC

When running the GuC, the GPU can't be considered idle if the GuC still
has contexts pinned. As such, a call has been added in
intel_gt_wait_for_idle to idle the UC and in turn the GuC by waiting for
the number of unpinned contexts to go to zero.

v2: rtimeout -> remaining_timeout

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Update GuC debugfs to support new GuC

Update GuC debugfs to support the new GuC structures.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Add several request trace points

Add trace points for request dependencies and GuC submit. Extend the
existing request trace points to include the submit fence value, guc_id,
and ring tail value.

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915: Add intel_context tracing

Add intel_context tracing. These trace points are particularly helpful
when debugging the GuC firmware and can be enabled via the
CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS kernel config option.

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: GuC virtual engines

Implement GuC virtual engines. This is a rather simple implementation:
basically, allocate an engine, set the context enter / exit functions to
virtual engine specific functions, set all other variables / functions to
the GuC versions, and set the engine mask to that of all the siblings.

v2: Update to work with proto-ctx

Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915: Track 'serial' counts for virtual engines

The serial number tracking of engines happens at the backend of
request submission and was expecting to only be given physical
engines. However, in GuC submission mode, the decomposition of virtual
to physical engines does not happen in i915. Instead, requests are
submitted to their virtual engine mask all the way through to the
hardware (i.e. to GuC). This would mean that the heartbeat code
thinks the physical engines are idle due to the serial number not
incrementing.

This patch updates the tracking to decompose virtual engines into
their physical constituents and tracks the request against each. This
is not entirely accurate as the GuC will only be issuing the request
to one physical engine. However, it is the best that i915 can do given
that it has no knowledge of the GuC's scheduling decisions.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915: Hold reference to intel_context over life of i915_request

Hold a reference to the intel_context over the life of an i915_request.
Without this, an i915_request can exist after the context has been
destroyed (e.g. request retired, context closed, but user space holds a
reference to the request from an out fence). In the case of GuC
submission + virtual engine, the engine that the request references is
also destroyed, which can trigger a bad pointer deref in fence ops (e.g.
i915_fence_get_driver_name). We could likely change
i915_fence_get_driver_name to avoid touching the engine, but let's just
be safe and hold the intel_context reference.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Disable bonding extension with GuC submission

Update the bonding extension to return -ENODEV when using GuC submission
as this extension fundamentally will not work with the GuC submission
interface.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Direct all breadcrumbs for a class to single breadcrumbs

With GuC virtual engines, the physical engine on which a request executes
and completes isn't known to the i915. Therefore we can't attach a
request to a physical engine's breadcrumbs. To work around this, we create
a single breadcrumbs object per engine class when using GuC submission and
direct all physical engine interrupts to it.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
CC: John Harrison <John.C.Harrison@Intel.com>

drm/i915: Add i915_sched_engine destroy vfunc

This helps the backends clean up when the i915_sched_engine object gets
destroyed.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Reset implementation for new GuC interface

Reset implementation for new GuC interface. This is the legacy reset
implementation which is called when the i915 owns the engine hang check.
Future patches will offload the engine hang check to GuC but we will
continue to maintain this legacy path as a fallback and this code path
is also required if the GuC dies.

With the new GuC interface it is not possible to reset individual
engines - it is only possible to reset the GPU entirely. This patch
forces an entire chip reset if any engine hangs.

v2:
 (Michal)
  - Check for -EPIPE rather than -EIO (CT deadlock/corrupt check)

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915: Reset GPU immediately if submission is disabled

If submission is disabled by the backend for any reason, reset the GPU
immediately in the heartbeat code as the backend can't be reenabled
until the GPU is reset.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Add disable interrupts to guc sanitize

Add disabling of GuC interrupts to intel_guc_sanitize(). Part of this
requires moving the guc_*_interrupt wrapper functions into the header
file intel_guc.h.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

drm/i915/guc: Suspend/resume implementation for new interface

The new GuC interface introduces an MMIO H2G command,
INTEL_GUC_ACTION_RESET_CLIENT, which is used to implement suspend. This
MMIO command tears down any active contexts, generating a context reset
G2H CTB for each. Once that step completes, the GuC tears down the CTB
channels. It is safe to suspend once this MMIO H2G command completes
and all G2H CTBs have been processed. In practice, the i915 will likely
never receive a G2H, as suspend should only be called after the GPU is
idle.

Resume is implemented in the same manner as before - simply reload the
GuC firmware and reinitialize everything (e.g. CTB channels, contexts,
etc..).

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

drm/i915/guc: Handle context reset notification

GuC will issue a reset on detecting an engine hang and will notify
the driver via a G2H message. The driver will service the notification
by resetting the guilty context to a simple state or banning it
completely.

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Handle engine reset failure notification

GuC will notify the driver, via G2H, if it fails to
reset an engine. We recover by resorting to a full GPU
reset.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com>

drm/i915/guc: Enable the timer expired interrupt for GuC

The GuC can implement execution quantums, detect hung contexts and
other such things, but it requires the timer expired interrupt to do so.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
CC: John Harrison <John.C.Harrison@Intel.com>

drm/i915/guc: Provide mmio list to be saved/restored on engine reset

The driver must provide GuC with a list of mmio registers
that should be saved/restored during a GuC-based engine reset.
Unfortunately, the list must be dynamically allocated as its size is
variable. That means the driver must generate the list twice - once to
work out the size and a second time to actually save it.
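
A hedged sketch of the two-pass pattern described above (names are
illustrative; kmalloc_array() is the real kernel allocator):

static int sketch_build_mmio_list(struct sketch_guc_ads *ads)
{
        u32 count;

        /* Pass 1: NULL buffer, only count how many registers would be emitted. */
        count = sketch_emit_engine_regs(ads, NULL);

        ads->regset = kmalloc_array(count, sizeof(*ads->regset), GFP_KERNEL);
        if (!ads->regset)
                return -ENOMEM;

        /* Pass 2: identical walk, this time actually writing the entries. */
        sketch_emit_engine_regs(ads, ads->regset);
        return 0;
}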

v2:
 (Alan / CI)
  - GEN7_GT_MODE -> GEN6_GT_MODE to fix WA selftest failure

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

drm/i915/guc: Don't complain about reset races

It is impossible to seal all race conditions of resets occurring
concurrently with other operations. At least, not without introducing
excessive mutex locking. Instead, don't complain if it occurs. In
particular, don't complain if trying to send an H2G during a reset.
Whatever the H2G was about should get redone once the reset is over.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Enable GuC engine reset

Clear the 'disable resets' flag to allow GuC to reset hung contexts
(detected via pre-emption timeout).

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Capture error state on context reset

We receive notification of an engine reset from GuC at its
completion, meaning GuC has potentially cleared any HW state
we may have been interested in capturing. GuC resumes scheduling
on the engine post-reset, as the resets are meant to be transparent,
further muddling our error state.

There is ongoing work to define an API for a GuC debug state dump. The
suggestion for now is to manually disable FW initiated resets in cases
where debug state is needed.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Fix for error capture after full GPU reset with GuC

In the case of a full GPU reset (e.g. because GuC has died or because
GuC's hang detection has been disabled), the driver can't rely on GuC
reporting the guilty context. Instead, the driver needs to scan all
active contexts and find one that is currently executing, as per the
execlist mode behaviour. In GuC mode, this scan is different to
execlist mode as the active request list is handled very differently.

Similarly, the request state dump in debugfs needs to be handled
differently when in GuC submission mode.

Also, some of the request scanning code has been refactored to avoid
duplication across the multiple code paths that are now replicating it.

Signed-off-by: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Hook GuC scheduling policies up

Use the official driver default scheduling policies for configuring
the GuC scheduler rather than a bunch of hardcoded values.

Signed-off-by: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Cc: Jose Souza <jose.souza@intel.com>

drm/i915/guc: Connect reset modparam updates to GuC policy flags

Changing the reset module parameter has no effect on a running GuC.
The corresponding entry in the ADS must be updated and then the GuC
informed via a Host2GuC message.

The new debugfs interface to module parameters allows this to happen.
However, connecting the parameter data address back to anything useful
is messy. One option would be to pass a new private data structure
address through instead of just the parameter pointer. However, that
means having a new (and different) data structure for each parameter
and a new (and different) write function for each parameter. This
method keeps everything generic by instead using a string lookup on
the directory entry name.
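
A hedged sketch of the name-based dispatch (entirely illustrative; the real
code hooks the i915 debugfs params write path):

static void sketch_param_write_notify(struct sketch_guc *guc,
                                      const char *entry_name)
{
        /* Dispatch on the debugfs entry name, no per-param private data. */
        if (!strcmp(entry_name, "reset"))
                sketch_update_guc_reset_policies(guc);  /* refresh ADS, send H2G */
}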

Signed-off-by: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Include scheduling policies in the debugfs state dump

Added the scheduling policy parameters to the 'guc_info' debugfs state
dump.

Signed-off-by: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Add golden context to GuC ADS

The media watchdog mechanism involves GuC doing a silent reset and
continuation of the hung context. This requires the i915 driver to provide
a golden context to GuC in the ADS.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Implement banned contexts for GuC submission

When using GuC submission, if a context gets banned, disable scheduling
and mark all in-flight requests as complete.

Cc: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/guc: Support request cancellation

This adds GuC backend support for i915_request_cancel(), which in turn
makes CONFIG_DRM_I915_REQUEST_TIMEOUT work.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

drm/i915/selftest: Better error reporting from hangcheck selftest

There are many ways in which the hangcheck selftest can fail. Very few
of them actually printed an error message to say what happened. So,
fill in the missing messages.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

drm/i915/selftest: Fix workarounds selftest for GuC submission

When GuC submission is enabled, the GuC controls engine resets. Rather
than explicitly triggering a reset, the driver must submit a hanging
context to GuC and wait for the reset to occur.

Signed-off-by: Rahul Kumar Singh <rahul.kumar.singh@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>

drm/i915/selftest: Fix MOCS selftest for GuC submission

When GuC submission is enabled, the GuC controls engine resets. Rather
than explicitly triggering a reset, the driver must submit a hanging
context to GuC and wait for the reset to occur.

Signed-off-by: Rahul Kumar Singh <rahul.kumar.singh@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>

drm/i915/selftest: Increase some timeouts in live_requests

Requests may take slightly longer with GuC submission, so increase
the timeouts in live_requests.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>

drm/i915/selftest: Fix hangcheck self test for GuC submission

When GuC submission is enabled, the GuC controls engine resets. Rather
than explicitly triggering a reset, the driver must submit a hanging
context to GuC and wait for the reset to occur.

Conversely, one of the tests specifically sends hanging batches to the
engines but wants them to sit around until a manual reset of the full
GT (including GuC itself). That means disabling GuC based engine
resets to prevent those from killing the hanging batch too soon. So,
add support to the scheduling policy helper for disabling resets as
well as making them quicker!

In GuC submission mode, the 'is engine idle' test basically turns into
'is engine PM wakelock held'. Independently, there is a heartbeat
disable helper function that the tests use. For unexplained reasons,
this acquires the engine wakelock before disabling the heartbeat and
only releases it when re-enabling the heartbeat. As one of the tests
tries to do a wait for idle in the middle of a heartbeat disabled
section, it is therefore guaranteed to always fail. Added a 'no_pm'
variant of the heartbeat helper that allows the engine to be asleep
while also having heartbeats disabled.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>

drm/i915/selftest: Bump selftest timeouts for hangcheck

Some testing environments and some heavier tests are slower than
previous limits allowed for. For example, it can take multiple seconds
for the 'context has been reset' notification handler to reach the
'kill the requests' code in the 'active' version of the 'reset
engines' test, during which time the selftest gets bored, gives up
waiting, and fails the test.

There is also an async thread that the selftest uses to pump work
through the hardware in parallel to the context that is marked for
reset. That also could get bored waiting for completions and kill the
test off.

Lastly, the flush at the end of various test sections can also see
timeouts due to the large amount of work backed up. This is also true
of the live_hwsp_read test.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

drm/i915/guc: Unblock GuC submission on Gen11+

Unblock GuC submission on Gen11+ platforms.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/i915/Makefile                 |    1 +
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |   21 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.h   |    1 +
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      |    3 +-
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |    6 +-
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c   |   41 +-
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.h   |   14 +-
 .../gpu/drm/i915/gt/intel_breadcrumbs_types.h |    7 +
 drivers/gpu/drm/i915/gt/intel_context.c       |   50 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |   50 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |   54 +
 drivers/gpu/drm/i915/gt/intel_engine.h        |   54 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  182 +-
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  |   71 +-
 .../gpu/drm/i915/gt/intel_engine_heartbeat.h  |    4 +
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |   12 +-
 .../drm/i915/gt/intel_execlists_submission.c  |   95 +-
 .../drm/i915/gt/intel_execlists_submission.h  |    4 -
 drivers/gpu/drm/i915/gt/intel_gt.c            |   21 +
 drivers/gpu/drm/i915/gt/intel_gt.h            |    2 +
 drivers/gpu/drm/i915/gt/intel_gt_pm.c         |    6 +-
 drivers/gpu/drm/i915/gt/intel_gt_requests.c   |   23 +-
 drivers/gpu/drm/i915/gt/intel_gt_requests.h   |    9 +-
 drivers/gpu/drm/i915/gt/intel_lrc_reg.h       |    1 -
 drivers/gpu/drm/i915/gt/intel_reset.c         |   50 +-
 .../gpu/drm/i915/gt/intel_ring_submission.c   |   48 +
 drivers/gpu/drm/i915/gt/intel_rps.c           |    4 +
 drivers/gpu/drm/i915/gt/intel_workarounds.c   |   46 +-
 .../gpu/drm/i915/gt/intel_workarounds_types.h |    1 +
 drivers/gpu/drm/i915/gt/mock_engine.c         |   41 +-
 drivers/gpu/drm/i915/gt/selftest_context.c    |   10 +
 .../drm/i915/gt/selftest_engine_heartbeat.c   |   22 +
 .../drm/i915/gt/selftest_engine_heartbeat.h   |    2 +
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |   12 +-
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |  314 +-
 drivers/gpu/drm/i915/gt/selftest_mocs.c       |   50 +-
 .../gpu/drm/i915/gt/selftest_workarounds.c    |  132 +-
 .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h  |   15 +
 .../gt/uc/abi/guc_communication_ctb_abi.h     |    3 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc.c        |   82 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc.h        |  108 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c    |  460 ++-
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h    |    3 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c     |  368 ++-
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h     |   28 +-
 .../gpu/drm/i915/gt/uc/intel_guc_debugfs.c    |   25 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h   |   88 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 2527 +++++++++++++++--
 .../gpu/drm/i915/gt/uc/intel_guc_submission.h |   17 +-
 drivers/gpu/drm/i915/gt/uc/intel_uc.c         |  102 +-
 drivers/gpu/drm/i915/gt/uc/intel_uc.h         |   11 +
 drivers/gpu/drm/i915/i915_debugfs.c           |    2 +
 drivers/gpu/drm/i915/i915_debugfs_params.c    |   31 +
 drivers/gpu/drm/i915/i915_gem_evict.c         |    1 +
 drivers/gpu/drm/i915/i915_gpu_error.c         |   25 +-
 drivers/gpu/drm/i915/i915_reg.h               |    2 +
 drivers/gpu/drm/i915/i915_request.c           |  168 +-
 drivers/gpu/drm/i915/i915_request.h           |   21 +
 drivers/gpu/drm/i915/i915_scheduler.c         |    9 +-
 drivers/gpu/drm/i915/i915_scheduler.h         |   10 +-
 drivers/gpu/drm/i915/i915_scheduler_types.h   |   10 +
 drivers/gpu/drm/i915/i915_trace.h             |  207 +-
 drivers/gpu/drm/i915/selftests/i915_request.c |    4 +-
 .../gpu/drm/i915/selftests/igt_flush_test.c   |    2 +-
 .../gpu/drm/i915/selftests/igt_live_test.c    |    2 +-
 .../i915/selftests/intel_scheduler_helpers.c  |   89 +
 .../i915/selftests/intel_scheduler_helpers.h  |   35 +
 .../gpu/drm/i915/selftests/mock_gem_device.c  |    3 +-
 68 files changed, 5048 insertions(+), 874 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 10b3bb6207ba..ab7679957623 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -280,6 +280,7 @@ i915-$(CONFIG_DRM_I915_CAPTURE_ERROR) += i915_gpu_error.o
 i915-$(CONFIG_DRM_I915_SELFTEST) += \
 	gem/selftests/i915_gem_client_blt.o \
 	gem/selftests/igt_gem_utils.o \
+	selftests/intel_scheduler_helpers.o \
 	selftests/i915_random.o \
 	selftests/i915_selftest.o \
 	selftests/igt_atomic.o \
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 7d6f52d8a801..d87a4c6da5bc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -74,7 +74,6 @@
 #include "gt/intel_context_param.h"
 #include "gt/intel_engine_heartbeat.h"
 #include "gt/intel_engine_user.h"
-#include "gt/intel_execlists_submission.h" /* virtual_engine */
 #include "gt/intel_gpu_commands.h"
 #include "gt/intel_ring.h"
 
@@ -363,9 +362,6 @@ set_proto_ctx_engines_balance(struct i915_user_extension __user *base,
 	if (!HAS_EXECLISTS(i915))
 		return -ENODEV;
 
-	if (intel_uc_uses_guc_submission(&i915->gt.uc))
-		return -ENODEV; /* not implement yet */
-
 	if (get_user(idx, &ext->engine_index))
 		return -EFAULT;
 
@@ -495,6 +491,11 @@ set_proto_ctx_engines_bond(struct i915_user_extension __user *base, void *data)
 		return -EINVAL;
 	}
 
+	if (intel_engine_uses_guc(master)) {
+		DRM_DEBUG("bonding extension not supported with GuC submission");
+		return -ENODEV;
+	}
+
 	if (get_user(num_bonds, &ext->num_bonds))
 		return -EFAULT;
 
@@ -799,7 +800,8 @@ static int intel_context_set_gem(struct intel_context *ce,
 	}
 
 	if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
-	    intel_engine_has_timeslices(ce->engine))
+	    intel_engine_has_timeslices(ce->engine) &&
+	    intel_engine_has_semaphores(ce->engine))
 		__set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
 
 	if (IS_ACTIVE(CONFIG_DRM_I915_REQUEST_TIMEOUT) &&
@@ -949,8 +951,8 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx,
 			break;
 
 		case I915_GEM_ENGINE_TYPE_BALANCED:
-			ce = intel_execlists_create_virtual(pe[n].siblings,
-							    pe[n].num_siblings);
+			ce = intel_engine_create_virtual(pe[n].siblings,
+							 pe[n].num_siblings);
 			break;
 
 		case I915_GEM_ENGINE_TYPE_INVALID:
@@ -1082,7 +1084,7 @@ static void kill_engines(struct i915_gem_engines *engines, bool ban)
 	for_each_gem_engine(ce, engines, it) {
 		struct intel_engine_cs *engine;
 
-		if (ban && intel_context_set_banned(ce))
+		if (ban && intel_context_ban(ce, NULL))
 			continue;
 
 		/*
@@ -1778,7 +1780,8 @@ static void __apply_priority(struct intel_context *ce, void *arg)
 	if (!intel_engine_has_timeslices(ce->engine))
 		return;
 
-	if (ctx->sched.priority >= I915_PRIORITY_NORMAL)
+	if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
+	    intel_engine_has_semaphores(ce->engine))
 		intel_context_set_use_semaphores(ce);
 	else
 		intel_context_clear_use_semaphores(ce);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.h b/drivers/gpu/drm/i915/gem/i915_gem_context.h
index 20411db84914..2639c719a7a6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.h
@@ -10,6 +10,7 @@
 #include "i915_gem_context_types.h"
 
 #include "gt/intel_context.h"
+#include "gt/intel_engine.h"
 
 #include "i915_drv.h"
 #include "i915_gem.h"
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index a90f796e85c0..6fffd4d377c2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -645,7 +645,8 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
 		goto insert;
 
 	/* Attempt to reap some mmap space from dead objects */
-	err = intel_gt_retire_requests_timeout(&i915->gt, MAX_SCHEDULE_TIMEOUT);
+	err = intel_gt_retire_requests_timeout(&i915->gt, MAX_SCHEDULE_TIMEOUT,
+					       NULL);
 	if (err)
 		goto err;
 
diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
index 87b06572fd2e..f7aae502ec3d 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
@@ -506,7 +506,8 @@ gen8_emit_fini_breadcrumb_tail(struct i915_request *rq, u32 *cs)
 	*cs++ = MI_USER_INTERRUPT;
 
 	*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
-	if (intel_engine_has_semaphores(rq->engine))
+	if (intel_engine_has_semaphores(rq->engine) &&
+	    !intel_uc_uses_guc_submission(&rq->engine->gt->uc))
 		cs = emit_preempt_busywait(rq, cs);
 
 	rq->tail = intel_ring_offset(rq, cs);
@@ -598,7 +599,8 @@ gen12_emit_fini_breadcrumb_tail(struct i915_request *rq, u32 *cs)
 	*cs++ = MI_USER_INTERRUPT;
 
 	*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
-	if (intel_engine_has_semaphores(rq->engine))
+	if (intel_engine_has_semaphores(rq->engine) &&
+	    !intel_uc_uses_guc_submission(&rq->engine->gt->uc))
 		cs = gen12_emit_preempt_busywait(rq, cs);
 
 	rq->tail = intel_ring_offset(rq, cs);
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index 38cc42783dfb..2007dc6f6b99 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -15,28 +15,14 @@
 #include "intel_gt_pm.h"
 #include "intel_gt_requests.h"
 
-static bool irq_enable(struct intel_engine_cs *engine)
+static bool irq_enable(struct intel_breadcrumbs *b)
 {
-	if (!engine->irq_enable)
-		return false;
-
-	/* Caller disables interrupts */
-	spin_lock(&engine->gt->irq_lock);
-	engine->irq_enable(engine);
-	spin_unlock(&engine->gt->irq_lock);
-
-	return true;
+	return intel_engine_irq_enable(b->irq_engine);
 }
 
-static void irq_disable(struct intel_engine_cs *engine)
+static void irq_disable(struct intel_breadcrumbs *b)
 {
-	if (!engine->irq_disable)
-		return;
-
-	/* Caller disables interrupts */
-	spin_lock(&engine->gt->irq_lock);
-	engine->irq_disable(engine);
-	spin_unlock(&engine->gt->irq_lock);
+	intel_engine_irq_disable(b->irq_engine);
 }
 
 static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)
@@ -57,7 +43,7 @@ static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)
 	WRITE_ONCE(b->irq_armed, true);
 
 	/* Requests may have completed before we could enable the interrupt. */
-	if (!b->irq_enabled++ && irq_enable(b->irq_engine))
+	if (!b->irq_enabled++ && b->irq_enable(b))
 		irq_work_queue(&b->irq_work);
 }
 
@@ -76,7 +62,7 @@ static void __intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b)
 {
 	GEM_BUG_ON(!b->irq_enabled);
 	if (!--b->irq_enabled)
-		irq_disable(b->irq_engine);
+		b->irq_disable(b);
 
 	WRITE_ONCE(b->irq_armed, false);
 	intel_gt_pm_put_async(b->irq_engine->gt);
@@ -281,7 +267,7 @@ intel_breadcrumbs_create(struct intel_engine_cs *irq_engine)
 	if (!b)
 		return NULL;
 
-	b->irq_engine = irq_engine;
+	kref_init(&b->ref);
 
 	spin_lock_init(&b->signalers_lock);
 	INIT_LIST_HEAD(&b->signalers);
@@ -290,6 +276,10 @@ intel_breadcrumbs_create(struct intel_engine_cs *irq_engine)
 	spin_lock_init(&b->irq_lock);
 	init_irq_work(&b->irq_work, signal_irq_work);
 
+	b->irq_engine = irq_engine;
+	b->irq_enable = irq_enable;
+	b->irq_disable = irq_disable;
+
 	return b;
 }
 
@@ -303,9 +293,9 @@ void intel_breadcrumbs_reset(struct intel_breadcrumbs *b)
 	spin_lock_irqsave(&b->irq_lock, flags);
 
 	if (b->irq_enabled)
-		irq_enable(b->irq_engine);
+		b->irq_enable(b);
 	else
-		irq_disable(b->irq_engine);
+		b->irq_disable(b);
 
 	spin_unlock_irqrestore(&b->irq_lock, flags);
 }
@@ -325,11 +315,14 @@ void __intel_breadcrumbs_park(struct intel_breadcrumbs *b)
 	}
 }
 
-void intel_breadcrumbs_free(struct intel_breadcrumbs *b)
+void intel_breadcrumbs_free(struct kref *kref)
 {
+	struct intel_breadcrumbs *b = container_of(kref, typeof(*b), ref);
+
 	irq_work_sync(&b->irq_work);
 	GEM_BUG_ON(!list_empty(&b->signalers));
 	GEM_BUG_ON(b->irq_armed);
+
 	kfree(b);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
index 3ce5ce270b04..72105b74663d 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
@@ -17,7 +17,7 @@ struct intel_breadcrumbs;
 
 struct intel_breadcrumbs *
 intel_breadcrumbs_create(struct intel_engine_cs *irq_engine);
-void intel_breadcrumbs_free(struct intel_breadcrumbs *b);
+void intel_breadcrumbs_free(struct kref *kref);
 
 void intel_breadcrumbs_reset(struct intel_breadcrumbs *b);
 void __intel_breadcrumbs_park(struct intel_breadcrumbs *b);
@@ -48,4 +48,16 @@ void i915_request_cancel_breadcrumb(struct i915_request *request);
 void intel_context_remove_breadcrumbs(struct intel_context *ce,
 				      struct intel_breadcrumbs *b);
 
+static inline struct intel_breadcrumbs *
+intel_breadcrumbs_get(struct intel_breadcrumbs *b)
+{
+	kref_get(&b->ref);
+	return b;
+}
+
+static inline void intel_breadcrumbs_put(struct intel_breadcrumbs *b)
+{
+	kref_put(&b->ref, intel_breadcrumbs_free);
+}
+
 #endif /* __INTEL_BREADCRUMBS__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
index 3a084ce8ff5e..a4e146684be8 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
@@ -7,10 +7,13 @@
 #define __INTEL_BREADCRUMBS_TYPES__
 
 #include <linux/irq_work.h>
+#include <linux/kref.h>
 #include <linux/list.h>
 #include <linux/spinlock.h>
 #include <linux/types.h>
 
+typedef u8 intel_engine_mask_t;
+
 /*
  * Rather than have every client wait upon all user interrupts,
  * with the herd waking after every interrupt and each doing the
@@ -29,6 +32,7 @@
  * the overhead of waking that client is much preferred.
  */
 struct intel_breadcrumbs {
+	struct kref ref;
 	atomic_t active;
 
 	spinlock_t signalers_lock; /* protects the list of signalers */
@@ -42,7 +46,10 @@ struct intel_breadcrumbs {
 	bool irq_armed;
 
 	/* Not all breadcrumbs are attached to physical HW */
+	intel_engine_mask_t	engine_mask;
 	struct intel_engine_cs *irq_engine;
+	bool	(*irq_enable)(struct intel_breadcrumbs *b);
+	void	(*irq_disable)(struct intel_breadcrumbs *b);
 };
 
 #endif /* __INTEL_BREADCRUMBS_TYPES__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index bd63813c8a80..b1e3d00fb1f2 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -8,6 +8,7 @@
 
 #include "i915_drv.h"
 #include "i915_globals.h"
+#include "i915_trace.h"
 
 #include "intel_context.h"
 #include "intel_engine.h"
@@ -28,6 +29,7 @@ static void rcu_context_free(struct rcu_head *rcu)
 {
 	struct intel_context *ce = container_of(rcu, typeof(*ce), rcu);
 
+	trace_intel_context_free(ce);
 	kmem_cache_free(global.slab_ce, ce);
 }
 
@@ -46,6 +48,7 @@ intel_context_create(struct intel_engine_cs *engine)
 		return ERR_PTR(-ENOMEM);
 
 	intel_context_init(ce, engine);
+	trace_intel_context_create(ce);
 	return ce;
 }
 
@@ -80,7 +83,7 @@ static int intel_context_active_acquire(struct intel_context *ce)
 
 	__i915_active_acquire(&ce->active);
 
-	if (intel_context_is_barrier(ce))
+	if (intel_context_is_barrier(ce) || intel_engine_uses_guc(ce->engine))
 		return 0;
 
 	/* Preallocate tracking nodes */
@@ -268,6 +271,8 @@ int __intel_context_do_pin_ww(struct intel_context *ce,
 
 	GEM_BUG_ON(!intel_context_is_pinned(ce)); /* no overflow! */
 
+	trace_intel_context_do_pin(ce);
+
 err_unlock:
 	mutex_unlock(&ce->pin_mutex);
 err_post_unpin:
@@ -306,9 +311,9 @@ int __intel_context_do_pin(struct intel_context *ce)
 	return err;
 }
 
-void intel_context_unpin(struct intel_context *ce)
+void __intel_context_do_unpin(struct intel_context *ce, int sub)
 {
-	if (!atomic_dec_and_test(&ce->pin_count))
+	if (!atomic_sub_and_test(sub, &ce->pin_count))
 		return;
 
 	CE_TRACE(ce, "unpin\n");
@@ -323,6 +328,7 @@ void intel_context_unpin(struct intel_context *ce)
 	 */
 	intel_context_get(ce);
 	intel_context_active_release(ce);
+	trace_intel_context_do_unpin(ce);
 	intel_context_put(ce);
 }
 
@@ -360,6 +366,12 @@ static int __intel_context_active(struct i915_active *active)
 	return 0;
 }
 
+static int sw_fence_dummy_notify(struct i915_sw_fence *sf,
+				 enum i915_sw_fence_notify state)
+{
+	return NOTIFY_DONE;
+}
+
 void
 intel_context_init(struct intel_context *ce, struct intel_engine_cs *engine)
 {
@@ -384,6 +396,18 @@ intel_context_init(struct intel_context *ce, struct intel_engine_cs *engine)
 
 	mutex_init(&ce->pin_mutex);
 
+	spin_lock_init(&ce->guc_state.lock);
+	INIT_LIST_HEAD(&ce->guc_state.fences);
+
+	spin_lock_init(&ce->guc_active.lock);
+	INIT_LIST_HEAD(&ce->guc_active.requests);
+
+	ce->guc_id = GUC_INVALID_LRC_ID;
+	INIT_LIST_HEAD(&ce->guc_id_link);
+
+	i915_sw_fence_init(&ce->guc_blocked, sw_fence_dummy_notify);
+	i915_sw_fence_commit(&ce->guc_blocked);
+
 	i915_active_init(&ce->active,
 			 __intel_context_active, __intel_context_retire, 0);
 }
@@ -500,6 +524,26 @@ struct i915_request *intel_context_create_request(struct intel_context *ce)
 	return rq;
 }
 
+struct i915_request *intel_context_find_active_request(struct intel_context *ce)
+{
+	struct i915_request *rq, *active = NULL;
+	unsigned long flags;
+
+	GEM_BUG_ON(!intel_engine_uses_guc(ce->engine));
+
+	spin_lock_irqsave(&ce->guc_active.lock, flags);
+	list_for_each_entry_reverse(rq, &ce->guc_active.requests,
+				    sched.link) {
+		if (i915_request_completed(rq))
+			break;
+
+		active = rq;
+	}
+	spin_unlock_irqrestore(&ce->guc_active.lock, flags);
+
+	return active;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_context.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index b10cbe8fee99..876bdb08303c 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -16,6 +16,7 @@
 #include "intel_engine_types.h"
 #include "intel_ring_types.h"
 #include "intel_timeline_types.h"
+#include "i915_trace.h"
 
 #define CE_TRACE(ce, fmt, ...) do {					\
 	const struct intel_context *ce__ = (ce);			\
@@ -69,6 +70,13 @@ intel_context_is_pinned(struct intel_context *ce)
 	return atomic_read(&ce->pin_count);
 }
 
+static inline void intel_context_cancel_request(struct intel_context *ce,
+						struct i915_request *rq)
+{
+	GEM_BUG_ON(!ce->ops->cancel_request);
+	return ce->ops->cancel_request(ce, rq);
+}
+
 /**
  * intel_context_unlock_pinned - Releases the earlier locking of 'pinned' status
  * @ce - the context
@@ -113,7 +121,32 @@ static inline void __intel_context_pin(struct intel_context *ce)
 	atomic_inc(&ce->pin_count);
 }
 
-void intel_context_unpin(struct intel_context *ce);
+void __intel_context_do_unpin(struct intel_context *ce, int sub);
+
+static inline void intel_context_sched_disable_unpin(struct intel_context *ce)
+{
+	__intel_context_do_unpin(ce, 2);
+}
+
+static inline void intel_context_unpin(struct intel_context *ce)
+{
+	if (!ce->ops->sched_disable) {
+		__intel_context_do_unpin(ce, 1);
+	} else {
+		/*
+		 * Move ownership of this pin to the scheduling disable which is
+		 * an async operation. When that operation completes the above
+		 * intel_context_sched_disable_unpin is called potentially
+		 * unpinning the context.
+		 */
+		while (!atomic_add_unless(&ce->pin_count, -1, 1)) {
+			if (atomic_cmpxchg(&ce->pin_count, 1, 2) == 1) {
+				ce->ops->sched_disable(ce);
+				break;
+			}
+		}
+	}
+}
 
 void intel_context_enter_engine(struct intel_context *ce);
 void intel_context_exit_engine(struct intel_context *ce);
@@ -175,6 +208,9 @@ int intel_context_prepare_remote_request(struct intel_context *ce,
 
 struct i915_request *intel_context_create_request(struct intel_context *ce);
 
+struct i915_request *
+intel_context_find_active_request(struct intel_context *ce);
+
 static inline bool intel_context_is_barrier(const struct intel_context *ce)
 {
 	return test_bit(CONTEXT_BARRIER_BIT, &ce->flags);
@@ -215,6 +251,18 @@ static inline bool intel_context_set_banned(struct intel_context *ce)
 	return test_and_set_bit(CONTEXT_BANNED, &ce->flags);
 }
 
+static inline bool intel_context_ban(struct intel_context *ce,
+				     struct i915_request *rq)
+{
+	bool ret = intel_context_set_banned(ce);
+
+	trace_intel_context_ban(ce);
+	if (ce->ops->ban)
+		ce->ops->ban(ce, rq);
+
+	return ret;
+}
+
 static inline bool
 intel_context_force_single_submission(const struct intel_context *ce)
 {
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 90026c177105..005a64f2afa7 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -13,6 +13,7 @@
 #include <linux/types.h>
 
 #include "i915_active_types.h"
+#include "i915_sw_fence.h"
 #include "i915_utils.h"
 #include "intel_engine_types.h"
 #include "intel_sseu.h"
@@ -35,16 +36,29 @@ struct intel_context_ops {
 
 	int (*alloc)(struct intel_context *ce);
 
+	void (*ban)(struct intel_context *ce, struct i915_request *rq);
+
 	int (*pre_pin)(struct intel_context *ce, struct i915_gem_ww_ctx *ww, void **vaddr);
 	int (*pin)(struct intel_context *ce, void *vaddr);
 	void (*unpin)(struct intel_context *ce);
 	void (*post_unpin)(struct intel_context *ce);
 
+	void (*cancel_request)(struct intel_context *ce,
+			       struct i915_request *rq);
+
 	void (*enter)(struct intel_context *ce);
 	void (*exit)(struct intel_context *ce);
 
+	void (*sched_disable)(struct intel_context *ce);
+
 	void (*reset)(struct intel_context *ce);
 	void (*destroy)(struct kref *kref);
+
+	/* virtual engine/context interface */
+	struct intel_context *(*create_virtual)(struct intel_engine_cs **engine,
+						unsigned int count);
+	struct intel_engine_cs *(*get_sibling)(struct intel_engine_cs *engine,
+					       unsigned int sibling);
 };
 
 struct intel_context {
@@ -96,6 +110,7 @@ struct intel_context {
 #define CONTEXT_BANNED			6
 #define CONTEXT_FORCE_SINGLE_SUBMISSION	7
 #define CONTEXT_NOPREEMPT		8
+#define CONTEXT_LRCA_DIRTY		9
 
 	struct {
 		u64 timeout_us;
@@ -137,6 +152,45 @@ struct intel_context {
 	struct intel_sseu sseu;
 
 	u8 wa_bb_page; /* if set, page num reserved for context workarounds */
+
+	struct {
+		/** lock: protects everything in guc_state */
+		spinlock_t lock;
+		/**
+		 * sched_state: scheduling state of this context using GuC
+		 * submission
+		 */
+		u8 sched_state;
+		/**
+		 * fences: maintains a list of requests that have a submit
+		 * fence related to GuC submission
+		 */
+		struct list_head fences;
+	} guc_state;
+
+	struct {
+		/** lock: protects everything in guc_active */
+		spinlock_t lock;
+		/** requests: active requests on this context */
+		struct list_head requests;
+	} guc_active;
+
+	/* GuC scheduling state flags that do not require a lock. */
+	atomic_t guc_sched_state_no_lock;
+
+	/* GuC LRC descriptor ID */
+	u16 guc_id;
+
+	/* GuC LRC descriptor reference count */
+	atomic_t guc_id_ref;
+
+	/*
+	 * GuC ID link - in list when unpinned but guc_id still valid in GuC
+	 */
+	struct list_head guc_id_link;
+
+	/* GuC context blocked fence */
+	struct i915_sw_fence guc_blocked;
 };
 
 #endif /* __INTEL_CONTEXT_TYPES__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index f911c1224ab2..2310ccda8058 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -212,6 +212,9 @@ void intel_engine_get_instdone(const struct intel_engine_cs *engine,
 
 void intel_engine_init_execlists(struct intel_engine_cs *engine);
 
+bool intel_engine_irq_enable(struct intel_engine_cs *engine);
+void intel_engine_irq_disable(struct intel_engine_cs *engine);
+
 static inline void __intel_engine_reset(struct intel_engine_cs *engine,
 					bool stalled)
 {
@@ -237,12 +240,15 @@ __printf(3, 4)
 void intel_engine_dump(struct intel_engine_cs *engine,
 		       struct drm_printer *m,
 		       const char *header, ...);
+void intel_engine_dump_active_requests(struct list_head *requests,
+				       struct i915_request *hung_rq,
+				       struct drm_printer *m);
 
 ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine,
 				   ktime_t *now);
 
 struct i915_request *
-intel_engine_find_active_request(struct intel_engine_cs *engine);
+intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine);
 
 u32 intel_engine_context_size(struct intel_gt *gt, u8 class);
 struct intel_context *
@@ -273,13 +279,57 @@ intel_engine_has_preempt_reset(const struct intel_engine_cs *engine)
 	return intel_engine_has_preemption(engine);
 }
 
+struct intel_context *
+intel_engine_create_virtual(struct intel_engine_cs **siblings,
+			    unsigned int count);
+
+static inline bool
+intel_virtual_engine_has_heartbeat(const struct intel_engine_cs *engine)
+{
+	if (intel_engine_uses_guc(engine))
+		return intel_guc_virtual_engine_has_heartbeat(engine);
+	else
+		GEM_BUG_ON("Should only be called in GuC submission mode");
+
+	return false;
+}
+
 static inline bool
 intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
 {
 	if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
 		return false;
 
-	return READ_ONCE(engine->props.heartbeat_interval_ms);
+	if (intel_engine_is_virtual(engine))
+		return intel_virtual_engine_has_heartbeat(engine);
+	else
+		return READ_ONCE(engine->props.heartbeat_interval_ms);
+}
+
+static inline struct intel_engine_cs *
+intel_engine_get_sibling(struct intel_engine_cs *engine, unsigned int sibling)
+{
+	GEM_BUG_ON(!intel_engine_is_virtual(engine));
+	return engine->cops->get_sibling(engine, sibling);
+}
+
+static inline void
+intel_engine_set_hung_context(struct intel_engine_cs *engine,
+			      struct intel_context *ce)
+{
+	engine->hung_ce = ce;
+}
+
+static inline void
+intel_engine_clear_hung_context(struct intel_engine_cs *engine)
+{
+	intel_engine_set_hung_context(engine, NULL);
+}
+
+static inline struct intel_context *
+intel_engine_get_hung_context(struct intel_engine_cs *engine)
+{
+	return engine->hung_ce;
 }
 
 #endif /* _INTEL_RINGBUFFER_H_ */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index d561573ed98c..51a0d860d551 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -739,7 +739,7 @@ static int engine_setup_common(struct intel_engine_cs *engine)
 err_cmd_parser:
 	i915_sched_engine_put(engine->sched_engine);
 err_sched_engine:
-	intel_breadcrumbs_free(engine->breadcrumbs);
+	intel_breadcrumbs_put(engine->breadcrumbs);
 err_status:
 	cleanup_status_page(engine);
 	return err;
@@ -948,7 +948,7 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
 	GEM_BUG_ON(!list_empty(&engine->sched_engine->requests));
 
 	i915_sched_engine_put(engine->sched_engine);
-	intel_breadcrumbs_free(engine->breadcrumbs);
+	intel_breadcrumbs_put(engine->breadcrumbs);
 
 	intel_engine_fini_retire(engine);
 	intel_engine_cleanup_cmd_parser(engine);
@@ -1265,6 +1265,30 @@ bool intel_engines_are_idle(struct intel_gt *gt)
 	return true;
 }
 
+bool intel_engine_irq_enable(struct intel_engine_cs *engine)
+{
+	if (!engine->irq_enable)
+		return false;
+
+	/* Caller disables interrupts */
+	spin_lock(&engine->gt->irq_lock);
+	engine->irq_enable(engine);
+	spin_unlock(&engine->gt->irq_lock);
+
+	return true;
+}
+
+void intel_engine_irq_disable(struct intel_engine_cs *engine)
+{
+	if (!engine->irq_disable)
+		return;
+
+	/* Caller disables interrupts */
+	spin_lock(&engine->gt->irq_lock);
+	engine->irq_disable(engine);
+	spin_unlock(&engine->gt->irq_lock);
+}
+
 void intel_engines_reset_default_submission(struct intel_gt *gt)
 {
 	struct intel_engine_cs *engine;
@@ -1601,6 +1625,97 @@ static void print_properties(struct intel_engine_cs *engine,
 			   read_ul(&engine->defaults, p->offset));
 }
 
+static void engine_dump_request(struct i915_request *rq, struct drm_printer *m, const char *msg)
+{
+	struct intel_timeline *tl = get_timeline(rq);
+
+	i915_request_show(m, rq, msg, 0);
+
+	drm_printf(m, "\t\tring->start:  0x%08x\n",
+		   i915_ggtt_offset(rq->ring->vma));
+	drm_printf(m, "\t\tring->head:   0x%08x\n",
+		   rq->ring->head);
+	drm_printf(m, "\t\tring->tail:   0x%08x\n",
+		   rq->ring->tail);
+	drm_printf(m, "\t\tring->emit:   0x%08x\n",
+		   rq->ring->emit);
+	drm_printf(m, "\t\tring->space:  0x%08x\n",
+		   rq->ring->space);
+
+	if (tl) {
+		drm_printf(m, "\t\tring->hwsp:   0x%08x\n",
+			   tl->hwsp_offset);
+		intel_timeline_put(tl);
+	}
+
+	print_request_ring(m, rq);
+
+	if (rq->context->lrc_reg_state) {
+		drm_printf(m, "Logical Ring Context:\n");
+		hexdump(m, rq->context->lrc_reg_state, PAGE_SIZE);
+	}
+}
+
+void intel_engine_dump_active_requests(struct list_head *requests,
+				       struct i915_request *hung_rq,
+				       struct drm_printer *m)
+{
+	struct i915_request *rq;
+	const char *msg;
+	enum i915_request_state state;
+
+	list_for_each_entry(rq, requests, sched.link) {
+		if (rq == hung_rq)
+			continue;
+
+		state = i915_test_request_state(rq);
+		if (state < I915_REQUEST_QUEUED)
+			continue;
+
+		if (state == I915_REQUEST_ACTIVE)
+			msg = "\t\tactive on engine";
+		else
+			msg = "\t\tactive in queue";
+
+		engine_dump_request(rq, m, msg);
+	}
+}
+
+static void engine_dump_active_requests(struct intel_engine_cs *engine, struct drm_printer *m)
+{
+	struct i915_request *hung_rq = NULL;
+	struct intel_context *ce;
+	bool guc;
+
+	/*
+	 * No need for an engine->irq_seqno_barrier() before the seqno reads.
+	 * The GPU is still running so requests are still executing and any
+	 * hardware reads will be out of date by the time they are reported.
+	 * But the intention here is just to report an instantaneous snapshot
+	 * so that's fine.
+	 */
+	lockdep_assert_held(&engine->sched_engine->lock);
+
+	drm_printf(m, "\tRequests:\n");
+
+	guc = intel_uc_uses_guc_submission(&engine->gt->uc);
+	if (guc) {
+		ce = intel_engine_get_hung_context(engine);
+		if (ce)
+			hung_rq = intel_context_find_active_request(ce);
+	} else {
+		hung_rq = intel_engine_execlist_find_hung_request(engine);
+	}
+
+	if (hung_rq)
+		engine_dump_request(hung_rq, m, "\t\thung");
+
+	if (guc)
+		intel_guc_dump_active_requests(engine, hung_rq, m);
+	else
+		intel_engine_dump_active_requests(&engine->sched_engine->requests,
+						  hung_rq, m);
+}
+
 void intel_engine_dump(struct intel_engine_cs *engine,
 		       struct drm_printer *m,
 		       const char *header, ...)
@@ -1645,39 +1760,9 @@ void intel_engine_dump(struct intel_engine_cs *engine,
 		   i915_reset_count(error));
 	print_properties(engine, m);
 
-	drm_printf(m, "\tRequests:\n");
-
 	spin_lock_irqsave(&engine->sched_engine->lock, flags);
-	rq = intel_engine_find_active_request(engine);
-	if (rq) {
-		struct intel_timeline *tl = get_timeline(rq);
+	engine_dump_active_requests(engine, m);
 
-		i915_request_show(m, rq, "\t\tactive ", 0);
-
-		drm_printf(m, "\t\tring->start:  0x%08x\n",
-			   i915_ggtt_offset(rq->ring->vma));
-		drm_printf(m, "\t\tring->head:   0x%08x\n",
-			   rq->ring->head);
-		drm_printf(m, "\t\tring->tail:   0x%08x\n",
-			   rq->ring->tail);
-		drm_printf(m, "\t\tring->emit:   0x%08x\n",
-			   rq->ring->emit);
-		drm_printf(m, "\t\tring->space:  0x%08x\n",
-			   rq->ring->space);
-
-		if (tl) {
-			drm_printf(m, "\t\tring->hwsp:   0x%08x\n",
-				   tl->hwsp_offset);
-			intel_timeline_put(tl);
-		}
-
-		print_request_ring(m, rq);
-
-		if (rq->context->lrc_reg_state) {
-			drm_printf(m, "Logical Ring Context:\n");
-			hexdump(m, rq->context->lrc_reg_state, PAGE_SIZE);
-		}
-	}
 	drm_printf(m, "\tOn hold?: %lu\n",
 		   list_count(&engine->sched_engine->hold));
 	spin_unlock_irqrestore(&engine->sched_engine->lock, flags);
@@ -1737,18 +1822,32 @@ ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine, ktime_t *now)
 	return total;
 }
 
-static bool match_ring(struct i915_request *rq)
+struct intel_context *
+intel_engine_create_virtual(struct intel_engine_cs **siblings,
+			    unsigned int count)
 {
-	u32 ring = ENGINE_READ(rq->engine, RING_START);
+	if (count == 0)
+		return ERR_PTR(-EINVAL);
+
+	if (count == 1)
+		return intel_context_create(siblings[0]);
 
-	return ring == i915_ggtt_offset(rq->ring->vma);
+	GEM_BUG_ON(!siblings[0]->cops->create_virtual);
+	return siblings[0]->cops->create_virtual(siblings, count);
 }
 
 struct i915_request *
-intel_engine_find_active_request(struct intel_engine_cs *engine)
+intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine)
 {
 	struct i915_request *request, *active = NULL;
 
+	/*
+	 * This search does not work in GuC submission mode. However, the GuC
+	 * will report the hanging context directly to the driver itself. So
+	 * the driver should never get here when in GuC mode.
+	 */
+	GEM_BUG_ON(intel_uc_uses_guc_submission(&engine->gt->uc));
+
 	/*
 	 * We are called by the error capture, reset and to dump engine
 	 * state at random points in time. In particular, note that neither is
@@ -1780,14 +1879,7 @@ intel_engine_find_active_request(struct intel_engine_cs *engine)
 
 	list_for_each_entry(request, &engine->sched_engine->requests,
 			    sched.link) {
-		if (__i915_request_is_complete(request))
-			continue;
-
-		if (!__i915_request_has_started(request))
-			continue;
-
-		/* More than one preemptible request may match! */
-		if (!match_ring(request))
+		if (i915_test_request_state(request) != I915_REQUEST_ACTIVE)
 			continue;
 
 		active = request;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index b6a305e6a974..f0768824de6f 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -70,12 +70,38 @@ static void show_heartbeat(const struct i915_request *rq,
 {
 	struct drm_printer p = drm_debug_printer("heartbeat");
 
-	intel_engine_dump(engine, &p,
-			  "%s heartbeat {seqno:%llx:%lld, prio:%d} not ticking\n",
-			  engine->name,
-			  rq->fence.context,
-			  rq->fence.seqno,
-			  rq->sched.attr.priority);
+	if (!rq) {
+		intel_engine_dump(engine, &p,
+				  "%s heartbeat not ticking\n",
+				  engine->name);
+	} else {
+		intel_engine_dump(engine, &p,
+				  "%s heartbeat {seqno:%llx:%lld, prio:%d} not ticking\n",
+				  engine->name,
+				  rq->fence.context,
+				  rq->fence.seqno,
+				  rq->sched.attr.priority);
+	}
+}
+
+static void
+reset_engine(struct intel_engine_cs *engine, struct i915_request *rq)
+{
+	if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
+		show_heartbeat(rq, engine);
+
+	if (intel_engine_uses_guc(engine))
+		/*
+		 * GuC itself is toast or GuC's hang detection
+		 * is disabled. Either way, need to find the
+		 * hang culprit manually.
+		 */
+		intel_guc_find_hung_context(engine);
+
+	intel_gt_handle_error(engine->gt, engine->mask,
+			      I915_ERROR_CAPTURE,
+			      "stopped heartbeat on %s",
+			      engine->name);
 }
 
 static void heartbeat(struct work_struct *wrk)
@@ -102,6 +128,11 @@ static void heartbeat(struct work_struct *wrk)
 	if (intel_gt_is_wedged(engine->gt))
 		goto out;
 
+	if (i915_sched_engine_disabled(engine->sched_engine)) {
+		reset_engine(engine, engine->heartbeat.systole);
+		goto out;
+	}
+
 	if (engine->heartbeat.systole) {
 		long delay = READ_ONCE(engine->props.heartbeat_interval_ms);
 
@@ -139,13 +170,7 @@ static void heartbeat(struct work_struct *wrk)
 			engine->sched_engine->schedule(rq, &attr);
 			local_bh_enable();
 		} else {
-			if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
-				show_heartbeat(rq, engine);
-
-			intel_gt_handle_error(engine->gt, engine->mask,
-					      I915_ERROR_CAPTURE,
-					      "stopped heartbeat on %s",
-					      engine->name);
+			reset_engine(engine, rq);
 		}
 
 		rq->emitted_jiffies = jiffies;
@@ -194,6 +219,26 @@ void intel_engine_park_heartbeat(struct intel_engine_cs *engine)
 		i915_request_put(fetch_and_zero(&engine->heartbeat.systole));
 }
 
+void intel_gt_unpark_heartbeats(struct intel_gt *gt)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	for_each_engine(engine, gt, id)
+		if (intel_engine_pm_is_awake(engine))
+			intel_engine_unpark_heartbeat(engine);
+}
+
+void intel_gt_park_heartbeats(struct intel_gt *gt)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	for_each_engine(engine, gt, id)
+		intel_engine_park_heartbeat(engine);
+}
+
 void intel_engine_init_heartbeat(struct intel_engine_cs *engine)
 {
 	INIT_DELAYED_WORK(&engine->heartbeat.work, heartbeat);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
index a488ea3e84a3..5da6d809a87a 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
@@ -7,6 +7,7 @@
 #define INTEL_ENGINE_HEARTBEAT_H
 
 struct intel_engine_cs;
+struct intel_gt;
 
 void intel_engine_init_heartbeat(struct intel_engine_cs *engine);
 
@@ -16,6 +17,9 @@ int intel_engine_set_heartbeat(struct intel_engine_cs *engine,
 void intel_engine_park_heartbeat(struct intel_engine_cs *engine);
 void intel_engine_unpark_heartbeat(struct intel_engine_cs *engine);
 
+void intel_gt_park_heartbeats(struct intel_gt *gt);
+void intel_gt_unpark_heartbeats(struct intel_gt *gt);
+
 int intel_engine_pulse(struct intel_engine_cs *engine);
 int intel_engine_flush_barriers(struct intel_engine_cs *engine);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 1cb9c3b70b29..e1935c69f7d2 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -64,7 +64,6 @@ struct intel_gt;
 struct intel_ring;
 struct intel_uncore;
 
-typedef u8 intel_engine_mask_t;
 #define ALL_ENGINES ((intel_engine_mask_t)~0ul)
 
 struct intel_hw_status_page {
@@ -304,6 +303,8 @@ struct intel_engine_cs {
 	/* keep a request in reserve for a [pm] barrier under oom */
 	struct i915_request *request_pool;
 
+	struct intel_context *hung_ce;
+
 	struct llist_head barrier_tasks;
 
 	struct intel_context *kernel_context; /* pinned */
@@ -388,6 +389,8 @@ struct intel_engine_cs {
 	void		(*park)(struct intel_engine_cs *engine);
 	void		(*unpark)(struct intel_engine_cs *engine);
 
+	void		(*bump_serial)(struct intel_engine_cs *engine);
+
 	void		(*set_default_submission)(struct intel_engine_cs *engine);
 
 	const struct intel_context_ops *cops;
@@ -418,6 +421,12 @@ struct intel_engine_cs {
 
 	void		(*release)(struct intel_engine_cs *engine);
 
+	/*
+	 * Add / remove request from engine active tracking
+	 */
+	void		(*add_active_request)(struct i915_request *rq);
+	void		(*remove_active_request)(struct i915_request *rq);
+
 	struct intel_engine_execlists execlists;
 
 	/*
@@ -439,6 +448,7 @@ struct intel_engine_cs {
 #define I915_ENGINE_IS_VIRTUAL       BIT(5)
 #define I915_ENGINE_HAS_RELATIVE_MMIO BIT(6)
 #define I915_ENGINE_REQUIRES_CMD_PARSER BIT(7)
+#define I915_ENGINE_WANT_FORCED_PREEMPTION BIT(8)
 	unsigned int flags;
 
 	/*
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 56e25090da67..8f6dc0fb49a6 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -114,6 +114,7 @@
 #include "gen8_engine_cs.h"
 #include "intel_breadcrumbs.h"
 #include "intel_context.h"
+#include "intel_engine_heartbeat.h"
 #include "intel_engine_pm.h"
 #include "intel_engine_stats.h"
 #include "intel_execlists_submission.h"
@@ -193,6 +194,9 @@ static struct virtual_engine *to_virtual_engine(struct intel_engine_cs *engine)
 	return container_of(engine, struct virtual_engine, base);
 }
 
+static struct intel_context *
+execlists_create_virtual(struct intel_engine_cs **siblings, unsigned int count);
+
 static struct i915_request *
 __active_request(const struct intel_timeline * const tl,
 		 struct i915_request *rq,
@@ -2533,11 +2537,26 @@ static int execlists_context_alloc(struct intel_context *ce)
 	return lrc_alloc(ce, ce->engine);
 }
 
+static void execlists_context_cancel_request(struct intel_context *ce,
+					     struct i915_request *rq)
+{
+	struct intel_engine_cs *engine = NULL;
+
+	i915_request_active_engine(rq, &engine);
+
+	if (engine && intel_engine_pulse(engine))
+		intel_gt_handle_error(engine->gt, engine->mask, 0,
+				      "request cancellation by %s",
+				      current->comm);
+}
+
 static const struct intel_context_ops execlists_context_ops = {
 	.flags = COPS_HAS_INFLIGHT,
 
 	.alloc = execlists_context_alloc,
 
+	.cancel_request = execlists_context_cancel_request,
+
 	.pre_pin = execlists_context_pre_pin,
 	.pin = execlists_context_pin,
 	.unpin = lrc_unpin,
@@ -2548,6 +2567,8 @@ static const struct intel_context_ops execlists_context_ops = {
 
 	.reset = lrc_reset,
 	.destroy = lrc_destroy,
+
+	.create_virtual = execlists_create_virtual,
 };
 
 static int emit_pdps(struct i915_request *rq)
@@ -3101,6 +3122,42 @@ static void execlists_park(struct intel_engine_cs *engine)
 	cancel_timer(&engine->execlists.preempt);
 }
 
+static void add_to_engine(struct i915_request *rq)
+{
+	lockdep_assert_held(&rq->engine->sched_engine->lock);
+	list_move_tail(&rq->sched.link, &rq->engine->sched_engine->requests);
+}
+
+static void remove_from_engine(struct i915_request *rq)
+{
+	struct intel_engine_cs *engine, *locked;
+
+	/*
+	 * Virtual engines complicate acquiring the engine timeline lock,
+	 * as their rq->engine pointer is not stable until under that
+	 * engine lock. The simple ploy we use is to take the lock then
+	 * check that the rq still belongs to the newly locked engine.
+	 */
+	locked = READ_ONCE(rq->engine);
+	spin_lock_irq(&locked->sched_engine->lock);
+	while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) {
+		spin_unlock(&locked->sched_engine->lock);
+		spin_lock(&engine->sched_engine->lock);
+		locked = engine;
+	}
+	list_del_init(&rq->sched.link);
+
+	clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+	clear_bit(I915_FENCE_FLAG_HOLD, &rq->fence.flags);
+
+	/* Prevent further __await_execution() registering a cb, then flush */
+	set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags);
+
+	spin_unlock_irq(&locked->sched_engine->lock);
+
+	i915_request_notify_execute_cb_imm(rq);
+}
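
The "lock the engine we believe owns the request, then re-check" loop above is the usual answer to an owner pointer that can be re-targeted while we sleep on the old owner's lock. A self-contained userspace analogue with pthreads (owner, item and lock_owner() are invented names for illustration, not i915 code):

/* Userspace analogue of the remove_from_engine() locking dance above.
 * Not i915 code.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct owner {
	pthread_mutex_t lock;
};

struct item {
	/* Only ever re-targeted while the current owner's lock is held. */
	_Atomic(struct owner *) owner;
};

static struct owner *lock_owner(struct item *it)
{
	struct owner *locked = atomic_load(&it->owner);

	pthread_mutex_lock(&locked->lock);
	for (;;) {
		struct owner *now = atomic_load(&it->owner);

		if (now == locked)
			return locked;	/* still the owner, lock is valid */

		/* The item moved while we waited: chase the new owner. */
		pthread_mutex_unlock(&locked->lock);
		locked = now;
		pthread_mutex_lock(&locked->lock);
	}
}

int main(void)
{
	struct owner o = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct item it = { .owner = &o };
	struct owner *held = lock_owner(&it);

	/* ... unlink 'it' from held's list here ... */
	pthread_mutex_unlock(&held->lock);
	printf("done\n");
	return 0;
}
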
+
 static bool can_preempt(struct intel_engine_cs *engine)
 {
 	if (GRAPHICS_VER(engine->i915) > 8)
@@ -3186,6 +3243,11 @@ static void execlists_release(struct intel_engine_cs *engine)
 	lrc_fini_wa_ctx(engine);
 }
 
+static void execlist_bump_serial(struct intel_engine_cs *engine)
+{
+	engine->serial++;
+}
+
 static void
 logical_ring_default_vfuncs(struct intel_engine_cs *engine)
 {
@@ -3195,6 +3257,9 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine)
 
 	engine->cops = &execlists_context_ops;
 	engine->request_alloc = execlists_request_alloc;
+	engine->bump_serial = execlist_bump_serial;
+	engine->add_active_request = add_to_engine;
+	engine->remove_active_request = remove_from_engine;
 
 	engine->reset.prepare = execlists_reset_prepare;
 	engine->reset.rewind = execlists_reset_rewind;
@@ -3396,7 +3461,7 @@ static void rcu_virtual_context_destroy(struct work_struct *wrk)
 	intel_context_fini(&ve->context);
 
 	if (ve->base.breadcrumbs)
-		intel_breadcrumbs_free(ve->base.breadcrumbs);
+		intel_breadcrumbs_put(ve->base.breadcrumbs);
 	if (ve->base.sched_engine)
 		i915_sched_engine_put(ve->base.sched_engine);
 	intel_engine_free_request_pool(&ve->base);
@@ -3493,11 +3558,24 @@ static void virtual_context_exit(struct intel_context *ce)
 		intel_engine_pm_put(ve->siblings[n]);
 }
 
+static struct intel_engine_cs *
+virtual_get_sibling(struct intel_engine_cs *engine, unsigned int sibling)
+{
+	struct virtual_engine *ve = to_virtual_engine(engine);
+
+	if (sibling >= ve->num_siblings)
+		return NULL;
+
+	return ve->siblings[sibling];
+}
+
 static const struct intel_context_ops virtual_context_ops = {
 	.flags = COPS_HAS_INFLIGHT,
 
 	.alloc = virtual_context_alloc,
 
+	.cancel_request = execlists_context_cancel_request,
+
 	.pre_pin = virtual_context_pre_pin,
 	.pin = virtual_context_pin,
 	.unpin = lrc_unpin,
@@ -3507,6 +3585,8 @@ static const struct intel_context_ops virtual_context_ops = {
 	.exit = virtual_context_exit,
 
 	.destroy = virtual_context_destroy,
+
+	.get_sibling = virtual_get_sibling,
 };
 
 static intel_engine_mask_t virtual_submission_mask(struct virtual_engine *ve)
@@ -3655,20 +3735,13 @@ static void virtual_submit_request(struct i915_request *rq)
 	spin_unlock_irqrestore(&ve->base.sched_engine->lock, flags);
 }
 
-struct intel_context *
-intel_execlists_create_virtual(struct intel_engine_cs **siblings,
-			       unsigned int count)
+static struct intel_context *
+execlists_create_virtual(struct intel_engine_cs **siblings, unsigned int count)
 {
 	struct virtual_engine *ve;
 	unsigned int n;
 	int err;
 
-	if (count == 0)
-		return ERR_PTR(-EINVAL);
-
-	if (count == 1)
-		return intel_context_create(siblings[0]);
-
 	ve = kzalloc(struct_size(ve, siblings, count), GFP_KERNEL);
 	if (!ve)
 		return ERR_PTR(-ENOMEM);
@@ -3780,6 +3853,8 @@ intel_execlists_create_virtual(struct intel_engine_cs **siblings,
 			 "v%dx%d", ve->base.class, count);
 		ve->base.context_size = sibling->context_size;
 
+		ve->base.add_active_request = sibling->add_active_request;
+		ve->base.remove_active_request = sibling->remove_active_request;
 		ve->base.emit_bb_start = sibling->emit_bb_start;
 		ve->base.emit_flush = sibling->emit_flush;
 		ve->base.emit_init_breadcrumb = sibling->emit_init_breadcrumb;
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.h b/drivers/gpu/drm/i915/gt/intel_execlists_submission.h
index ad4f3e1a0fde..a1aa92c983a5 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.h
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.h
@@ -32,10 +32,6 @@ void intel_execlists_show_requests(struct intel_engine_cs *engine,
 							int indent),
 				   unsigned int max);
 
-struct intel_context *
-intel_execlists_create_virtual(struct intel_engine_cs **siblings,
-			       unsigned int count);
-
 bool
 intel_engine_in_execlists_submission_mode(const struct intel_engine_cs *engine);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index e714e21c0a4d..ceeb517ba259 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -585,6 +585,25 @@ static void __intel_gt_disable(struct intel_gt *gt)
 	GEM_BUG_ON(intel_gt_pm_is_awake(gt));
 }
 
+int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout)
+{
+	long remaining_timeout;
+
+	/* If the device is asleep, we have no requests outstanding */
+	if (!intel_gt_pm_is_awake(gt))
+		return 0;
+
+	while ((timeout = intel_gt_retire_requests_timeout(gt, timeout,
+							   &remaining_timeout)) > 0) {
+		cond_resched();
+		if (signal_pending(current))
+			return -EINTR;
+	}
+
+	return timeout ? timeout : intel_uc_wait_for_idle(&gt->uc,
+							  remaining_timeout);
+}
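
The relocated intel_gt_wait_for_idle() now spends its budget in two stages: retire host requests until either they are all gone or the budget runs out, then hand whatever is left to the uC idle wait. A rough userspace sketch of that shape (retire_pass() and uc_wait() are invented stand-ins, not the real helpers):

/* Rough sketch of splitting one timeout budget across two waits.
 * Not i915 code; the helpers below are invented stand-ins.
 */
#include <stdio.h>

/* Pretend retire pass: burns some of the budget and reports what is left.
 * Returns >0 while work (and budget) remain, 0 once there is nothing left
 * to do with the time available.
 */
static long retire_pass(long timeout, long *remaining)
{
	static int work = 3;
	long cost = 10;

	*remaining = timeout > cost ? timeout - cost : 0;
	return --work > 0 ? *remaining : 0;
}

static long uc_wait(long timeout)
{
	printf("uC idle wait with %ld ticks of budget left\n", timeout);
	return 0;
}

static long wait_for_idle(long timeout)
{
	long remaining = 0;

	/* The real loop also reschedules and bails out on pending signals. */
	while ((timeout = retire_pass(timeout, &remaining)) > 0)
		;

	return timeout ? timeout : uc_wait(remaining);
}

int main(void)
{
	return (int)wait_for_idle(100);
}
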
+
 int intel_gt_init(struct intel_gt *gt)
 {
 	int err;
@@ -635,6 +654,8 @@ int intel_gt_init(struct intel_gt *gt)
 	if (err)
 		goto err_gt;
 
+	intel_uc_init_late(&gt->uc);
+
 	err = i915_inject_probe_error(gt->i915, -EIO);
 	if (err)
 		goto err_gt;
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
index e7aabe0cc5bf..74e771871a9b 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt.h
@@ -48,6 +48,8 @@ void intel_gt_driver_release(struct intel_gt *gt);
 
 void intel_gt_driver_late_release(struct intel_gt *gt);
 
+int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout);
+
 void intel_gt_check_and_clear_faults(struct intel_gt *gt);
 void intel_gt_clear_error_registers(struct intel_gt *gt,
 				    intel_engine_mask_t engine_mask);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index aef3084e8b16..463a6ae605a0 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -174,8 +174,6 @@ static void gt_sanitize(struct intel_gt *gt, bool force)
 	if (intel_gt_is_wedged(gt))
 		intel_gt_unset_wedged(gt);
 
-	intel_uc_sanitize(&gt->uc);
-
 	for_each_engine(engine, gt, id)
 		if (engine->reset.prepare)
 			engine->reset.prepare(engine);
@@ -191,6 +189,8 @@ static void gt_sanitize(struct intel_gt *gt, bool force)
 			__intel_engine_reset(engine, false);
 	}
 
+	intel_uc_reset(&gt->uc, false);
+
 	for_each_engine(engine, gt, id)
 		if (engine->reset.finish)
 			engine->reset.finish(engine);
@@ -243,6 +243,8 @@ int intel_gt_resume(struct intel_gt *gt)
 		goto err_wedged;
 	}
 
+	intel_uc_reset_finish(&gt->uc);
+
 	intel_rps_enable(&gt->rps);
 	intel_llc_enable(&gt->llc);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
index 647eca9d867a..9ae5ee256898 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -13,6 +13,8 @@
 #include "intel_gt_pm.h"
 #include "intel_gt_requests.h"
 #include "intel_timeline.h"
+#include "intel_context.h"
+#include "uc/intel_uc.h"
 
 static bool retire_requests(struct intel_timeline *tl)
 {
@@ -130,7 +132,8 @@ void intel_engine_fini_retire(struct intel_engine_cs *engine)
 	GEM_BUG_ON(engine->retire);
 }
 
-long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
+long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout,
+				      long *remaining_timeout)
 {
 	struct intel_gt_timelines *timelines = &gt->timelines;
 	struct intel_timeline *tl, *tn;
@@ -195,22 +198,10 @@ out_active:	spin_lock(&timelines->lock);
 	if (flush_submission(gt, timeout)) /* Wait, there's more! */
 		active_count++;
 
-	return active_count ? timeout : 0;
-}
-
-int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout)
-{
-	/* If the device is asleep, we have no requests outstanding */
-	if (!intel_gt_pm_is_awake(gt))
-		return 0;
-
-	while ((timeout = intel_gt_retire_requests_timeout(gt, timeout)) > 0) {
-		cond_resched();
-		if (signal_pending(current))
-			return -EINTR;
-	}
+	if (remaining_timeout)
+		*remaining_timeout = timeout;
 
-	return timeout;
+	return active_count ? timeout : 0;
 }
 
 static void retire_work_handler(struct work_struct *work)
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.h b/drivers/gpu/drm/i915/gt/intel_gt_requests.h
index fcc30a6e4fe9..51dbe0e3294e 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.h
@@ -6,14 +6,17 @@
 #ifndef INTEL_GT_REQUESTS_H
 #define INTEL_GT_REQUESTS_H
 
+#include <stddef.h>
+
 struct intel_engine_cs;
 struct intel_gt;
 struct intel_timeline;
 
-long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout);
+long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout,
+				      long *remaining_timeout);
 static inline void intel_gt_retire_requests(struct intel_gt *gt)
 {
-	intel_gt_retire_requests_timeout(gt, 0);
+	intel_gt_retire_requests_timeout(gt, 0, NULL);
 }
 
 void intel_engine_init_retire(struct intel_engine_cs *engine);
@@ -21,8 +24,6 @@ void intel_engine_add_retire(struct intel_engine_cs *engine,
 			     struct intel_timeline *tl);
 void intel_engine_fini_retire(struct intel_engine_cs *engine);
 
-int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout);
-
 void intel_gt_init_requests(struct intel_gt *gt);
 void intel_gt_park_requests(struct intel_gt *gt);
 void intel_gt_unpark_requests(struct intel_gt *gt);
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc_reg.h b/drivers/gpu/drm/i915/gt/intel_lrc_reg.h
index 41e5350a7a05..49d4857ad9b7 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc_reg.h
+++ b/drivers/gpu/drm/i915/gt/intel_lrc_reg.h
@@ -87,7 +87,6 @@
 #define GEN11_CSB_WRITE_PTR_MASK	(GEN11_CSB_PTR_MASK << 0)
 
 #define MAX_CONTEXT_HW_ID	(1 << 21) /* exclusive */
-#define MAX_GUC_CONTEXT_HW_ID	(1 << 20) /* exclusive */
 #define GEN11_MAX_CONTEXT_HW_ID	(1 << 11) /* exclusive */
 /* in Gen12 ID 0x7FF is reserved to indicate idle */
 #define GEN12_MAX_CONTEXT_HW_ID	(GEN11_MAX_CONTEXT_HW_ID - 1)
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index 72251638d4ea..3ed694cab5af 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -22,7 +22,6 @@
 #include "intel_reset.h"
 
 #include "uc/intel_guc.h"
-#include "uc/intel_guc_submission.h"
 
 #define RESET_MAX_RETRIES 3
 
@@ -39,21 +38,6 @@ static void rmw_clear_fw(struct intel_uncore *uncore, i915_reg_t reg, u32 clr)
 	intel_uncore_rmw_fw(uncore, reg, clr, 0);
 }
 
-static void skip_context(struct i915_request *rq)
-{
-	struct intel_context *hung_ctx = rq->context;
-
-	list_for_each_entry_from_rcu(rq, &hung_ctx->timeline->requests, link) {
-		if (!i915_request_is_active(rq))
-			return;
-
-		if (rq->context == hung_ctx) {
-			i915_request_set_error_once(rq, -EIO);
-			__i915_request_skip(rq);
-		}
-	}
-}
-
 static void client_mark_guilty(struct i915_gem_context *ctx, bool banned)
 {
 	struct drm_i915_file_private *file_priv = ctx->file_priv;
@@ -88,10 +72,8 @@ static bool mark_guilty(struct i915_request *rq)
 	bool banned;
 	int i;
 
-	if (intel_context_is_closed(rq->context)) {
-		intel_context_set_banned(rq->context);
+	if (intel_context_is_closed(rq->context))
 		return true;
-	}
 
 	rcu_read_lock();
 	ctx = rcu_dereference(rq->context->gem_context);
@@ -123,11 +105,9 @@ static bool mark_guilty(struct i915_request *rq)
 	banned = !i915_gem_context_is_recoverable(ctx);
 	if (time_before(jiffies, prev_hang + CONTEXT_FAST_HANG_JIFFIES))
 		banned = true;
-	if (banned) {
+	if (banned)
 		drm_dbg(&ctx->i915->drm, "context %s: guilty %d, banned\n",
 			ctx->name, atomic_read(&ctx->guilty_count));
-		intel_context_set_banned(rq->context);
-	}
 
 	client_mark_guilty(ctx, banned);
 
@@ -149,6 +129,8 @@ static void mark_innocent(struct i915_request *rq)
 
 void __i915_request_reset(struct i915_request *rq, bool guilty)
 {
+	bool banned = false;
+
 	RQ_TRACE(rq, "guilty? %s\n", yesno(guilty));
 	GEM_BUG_ON(__i915_request_is_complete(rq));
 
@@ -156,13 +138,15 @@ void __i915_request_reset(struct i915_request *rq, bool guilty)
 	if (guilty) {
 		i915_request_set_error_once(rq, -EIO);
 		__i915_request_skip(rq);
-		if (mark_guilty(rq))
-			skip_context(rq);
+		banned = mark_guilty(rq);
 	} else {
 		i915_request_set_error_once(rq, -EAGAIN);
 		mark_innocent(rq);
 	}
 	rcu_read_unlock();
+
+	if (banned)
+		intel_context_ban(rq->context, rq);
 }
 
 static bool i915_in_reset(struct pci_dev *pdev)
@@ -826,6 +810,8 @@ static int gt_reset(struct intel_gt *gt, intel_engine_mask_t stalled_mask)
 		__intel_engine_reset(engine, stalled_mask & engine->mask);
 	local_bh_enable();
 
+	intel_uc_reset(&gt->uc, true);
+
 	intel_ggtt_restore_fences(gt->ggtt);
 
 	return err;
@@ -850,6 +836,8 @@ static void reset_finish(struct intel_gt *gt, intel_engine_mask_t awake)
 		if (awake & engine->mask)
 			intel_engine_pm_put(engine);
 	}
+
+	intel_uc_reset_finish(&gt->uc);
 }
 
 static void nop_submit_request(struct i915_request *request)
@@ -903,6 +891,7 @@ static void __intel_gt_set_wedged(struct intel_gt *gt)
 	for_each_engine(engine, gt, id)
 		if (engine->reset.cancel)
 			engine->reset.cancel(engine);
+	intel_uc_cancel_requests(&gt->uc);
 	local_bh_enable();
 
 	reset_finish(gt, awake);
@@ -1191,6 +1180,9 @@ int __intel_engine_reset_bh(struct intel_engine_cs *engine, const char *msg)
 	ENGINE_TRACE(engine, "flags=%lx\n", gt->reset.flags);
 	GEM_BUG_ON(!test_bit(I915_RESET_ENGINE + engine->id, &gt->reset.flags));
 
+	if (intel_engine_uses_guc(engine))
+		return -ENODEV;
+
 	if (!intel_engine_pm_get_if_awake(engine))
 		return 0;
 
@@ -1201,13 +1193,10 @@ int __intel_engine_reset_bh(struct intel_engine_cs *engine, const char *msg)
 			   "Resetting %s for %s\n", engine->name, msg);
 	atomic_inc(&engine->i915->gpu_error.reset_engine_count[engine->uabi_class]);
 
-	if (intel_engine_uses_guc(engine))
-		ret = intel_guc_reset_engine(&engine->gt->uc.guc, engine);
-	else
-		ret = intel_gt_reset_engine(engine);
+	ret = intel_gt_reset_engine(engine);
 	if (ret) {
 		/* If we fail here, we expect to fallback to a global reset */
-		ENGINE_TRACE(engine, "Failed to reset, err: %d\n", ret);
+		ENGINE_TRACE(engine, "Failed to reset %s, err: %d\n", engine->name, ret);
 		goto out;
 	}
 
@@ -1341,7 +1330,8 @@ void intel_gt_handle_error(struct intel_gt *gt,
 	 * Try engine reset when available. We fall back to full reset if
 	 * single reset fails.
 	 */
-	if (intel_has_reset_engine(gt) && !intel_gt_is_wedged(gt)) {
+	if (!intel_uc_uses_guc_submission(&gt->uc) &&
+	    intel_has_reset_engine(gt) && !intel_gt_is_wedged(gt)) {
 		local_bh_disable();
 		for_each_engine_masked(engine, gt, engine_mask, tmp) {
 			BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE);
diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
index 5c4d204d07cc..03939be4297e 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
@@ -586,9 +586,29 @@ static void ring_context_reset(struct intel_context *ce)
 	clear_bit(CONTEXT_VALID_BIT, &ce->flags);
 }
 
+static void ring_context_ban(struct intel_context *ce,
+			     struct i915_request *rq)
+{
+	struct intel_engine_cs *engine;
+
+	if (!rq || !i915_request_is_active(rq))
+		return;
+
+	engine = rq->engine;
+	lockdep_assert_held(&engine->sched_engine->lock);
+	list_for_each_entry_continue(rq, &engine->sched_engine->requests,
+				     sched.link)
+		if (rq->context == ce) {
+			i915_request_set_error_once(rq, -EIO);
+			__i915_request_skip(rq);
+		}
+}
+
 static const struct intel_context_ops ring_context_ops = {
 	.alloc = ring_context_alloc,
 
+	.ban = ring_context_ban,
+
 	.pre_pin = ring_context_pre_pin,
 	.pin = ring_context_pin,
 	.unpin = ring_context_unpin,
@@ -1047,6 +1067,30 @@ static void setup_irq(struct intel_engine_cs *engine)
 	}
 }
 
+static void ring_bump_serial(struct intel_engine_cs *engine)
+{
+	engine->serial++;
+}
+
+static void add_to_engine(struct i915_request *rq)
+{
+	lockdep_assert_held(&rq->engine->sched_engine->lock);
+	list_move_tail(&rq->sched.link, &rq->engine->sched_engine->requests);
+}
+
+static void remove_from_engine(struct i915_request *rq)
+{
+	spin_lock_irq(&rq->engine->sched_engine->lock);
+	list_del_init(&rq->sched.link);
+
+	/* Prevent further __await_execution() registering a cb, then flush */
+	set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags);
+
+	spin_unlock_irq(&rq->engine->sched_engine->lock);
+
+	i915_request_notify_execute_cb_imm(rq);
+}
+
 static void setup_common(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *i915 = engine->i915;
@@ -1064,8 +1108,12 @@ static void setup_common(struct intel_engine_cs *engine)
 	engine->reset.cancel = reset_cancel;
 	engine->reset.finish = reset_finish;
 
+	engine->add_active_request = add_to_engine;
+	engine->remove_active_request = remove_from_engine;
+
 	engine->cops = &ring_context_ops;
 	engine->request_alloc = ring_request_alloc;
+	engine->bump_serial = ring_bump_serial;
 
 	/*
 	 * Using a global execution timeline; the previous final breadcrumb is
diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index 06e9a8ed4e03..0c8e7f2b06f0 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -1877,6 +1877,10 @@ void intel_rps_init(struct intel_rps *rps)
 
 	if (GRAPHICS_VER(i915) >= 8 && GRAPHICS_VER(i915) < 11)
 		rps->pm_intrmsk_mbz |= GEN8_PMINTR_DISABLE_REDIRECT_TO_GUC;
+
+	/* GuC needs ARAT expired interrupt unmasked */
+	if (intel_uc_uses_guc_submission(&rps_to_gt(rps)->uc))
+		rps->pm_intrmsk_mbz |= ARAT_EXPIRED_INTRMSK;
 }
 
 void intel_rps_sanitize(struct intel_rps *rps)
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index d9a5a445ceec..218a842d8769 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -150,13 +150,14 @@ static void _wa_add(struct i915_wa_list *wal, const struct i915_wa *wa)
 }
 
 static void wa_add(struct i915_wa_list *wal, i915_reg_t reg,
-		   u32 clear, u32 set, u32 read_mask)
+		   u32 clear, u32 set, u32 read_mask, bool masked_reg)
 {
 	struct i915_wa wa = {
 		.reg  = reg,
 		.clr  = clear,
 		.set  = set,
 		.read = read_mask,
+		.masked_reg = masked_reg,
 	};
 
 	_wa_add(wal, &wa);
@@ -165,7 +166,7 @@ static void wa_add(struct i915_wa_list *wal, i915_reg_t reg,
 static void
 wa_write_clr_set(struct i915_wa_list *wal, i915_reg_t reg, u32 clear, u32 set)
 {
-	wa_add(wal, reg, clear, set, clear);
+	wa_add(wal, reg, clear, set, clear, false);
 }
 
 static void
@@ -200,20 +201,20 @@ wa_write_clr(struct i915_wa_list *wal, i915_reg_t reg, u32 clr)
 static void
 wa_masked_en(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
 {
-	wa_add(wal, reg, 0, _MASKED_BIT_ENABLE(val), val);
+	wa_add(wal, reg, 0, _MASKED_BIT_ENABLE(val), val, true);
 }
 
 static void
 wa_masked_dis(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
 {
-	wa_add(wal, reg, 0, _MASKED_BIT_DISABLE(val), val);
+	wa_add(wal, reg, 0, _MASKED_BIT_DISABLE(val), val, true);
 }
 
 static void
 wa_masked_field_set(struct i915_wa_list *wal, i915_reg_t reg,
 		    u32 mask, u32 val)
 {
-	wa_add(wal, reg, 0, _MASKED_FIELD(mask, val), mask);
+	wa_add(wal, reg, 0, _MASKED_FIELD(mask, val), mask, true);
 }
 
 static void gen6_ctx_workarounds_init(struct intel_engine_cs *engine,
@@ -583,10 +584,10 @@ static void icl_ctx_workarounds_init(struct intel_engine_cs *engine,
 			     GEN11_BLEND_EMB_FIX_DISABLE_IN_RCC);
 
 	/* WaEnableFloatBlendOptimization:icl */
-	wa_write_clr_set(wal,
-			 GEN10_CACHE_MODE_SS,
-			 0, /* write-only, so skip validation */
-			 _MASKED_BIT_ENABLE(FLOAT_BLEND_OPTIMIZATION_ENABLE));
+	wa_add(wal, GEN10_CACHE_MODE_SS, 0,
+	       _MASKED_BIT_ENABLE(FLOAT_BLEND_OPTIMIZATION_ENABLE),
+	       0 /* write-only, so skip validation */,
+	       true);
 
 	/* WaDisableGPGPUMidThreadPreemption:icl */
 	wa_masked_field_set(wal, GEN8_CS_CHICKEN1,
@@ -631,7 +632,7 @@ static void gen12_ctx_gt_tuning_init(struct intel_engine_cs *engine,
 	       FF_MODE2,
 	       FF_MODE2_TDS_TIMER_MASK,
 	       FF_MODE2_TDS_TIMER_128,
-	       0);
+	       0, false);
 }
 
 static void gen12_ctx_workarounds_init(struct intel_engine_cs *engine,
@@ -669,7 +670,7 @@ static void gen12_ctx_workarounds_init(struct intel_engine_cs *engine,
 	       FF_MODE2,
 	       FF_MODE2_GS_TIMER_MASK,
 	       FF_MODE2_GS_TIMER_224,
-	       0);
+	       0, false);
 }
 
 static void dg1_ctx_workarounds_init(struct intel_engine_cs *engine,
@@ -840,7 +841,7 @@ hsw_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
 	wa_add(wal,
 	       HSW_ROW_CHICKEN3, 0,
 	       _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE),
-		0 /* XXX does this reg exist? */);
+	       0 /* XXX does this reg exist? */, true);
 
 	/* WaVSRefCountFullforceMissDisable:hsw */
 	wa_write_clr(wal, GEN7_FF_THREAD_MODE, GEN7_FF_VS_REF_CNT_FFME);
@@ -1929,10 +1930,10 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
 		 * disable bit, which we don't touch here, but it's good
 		 * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
 		 */
-		wa_add(wal, GEN7_GT_MODE, 0,
-		       _MASKED_FIELD(GEN6_WIZ_HASHING_MASK,
-				     GEN6_WIZ_HASHING_16x4),
-		       GEN6_WIZ_HASHING_16x4);
+		wa_masked_field_set(wal,
+				    GEN7_GT_MODE,
+				    GEN6_WIZ_HASHING_MASK,
+				    GEN6_WIZ_HASHING_16x4);
 	}
 
 	if (IS_GRAPHICS_VER(i915, 6, 7))
@@ -1982,10 +1983,10 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
 		 * disable bit, which we don't touch here, but it's good
 		 * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
 		 */
-		wa_add(wal,
-		       GEN6_GT_MODE, 0,
-		       _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
-		       GEN6_WIZ_HASHING_16x4);
+		wa_masked_field_set(wal,
+				    GEN6_GT_MODE,
+				    GEN6_WIZ_HASHING_MASK,
+				    GEN6_WIZ_HASHING_16x4);
 
 		/* WaDisable_RenderCache_OperationalFlush:snb */
 		wa_masked_dis(wal, CACHE_MODE_0, RC_OP_FLUSH_ENABLE);
@@ -2006,7 +2007,7 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
 		wa_add(wal, MI_MODE,
 		       0, _MASKED_BIT_ENABLE(VS_TIMER_DISPATCH),
 		       /* XXX bit doesn't stick on Broadwater */
-		       IS_I965G(i915) ? 0 : VS_TIMER_DISPATCH);
+		       IS_I965G(i915) ? 0 : VS_TIMER_DISPATCH, true);
 
 	if (GRAPHICS_VER(i915) == 4)
 		/*
@@ -2021,7 +2022,8 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
 		 */
 		wa_add(wal, ECOSKPD,
 		       0, _MASKED_BIT_ENABLE(ECO_CONSTANT_BUFFER_SR_DISABLE),
-		       0 /* XXX bit doesn't stick on Broadwater */);
+		       0 /* XXX bit doesn't stick on Broadwater */,
+		       true);
 }
 
 static void
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds_types.h b/drivers/gpu/drm/i915/gt/intel_workarounds_types.h
index c214111ea367..1e873681795d 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds_types.h
@@ -15,6 +15,7 @@ struct i915_wa {
 	u32		clr;
 	u32		set;
 	u32		read;
+	bool		masked_reg;
 };
 
 struct i915_wa_list {
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
index 68970398e4ef..c12ff3a75ce6 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -235,6 +235,35 @@ static void mock_submit_request(struct i915_request *request)
 	spin_unlock_irqrestore(&engine->hw_lock, flags);
 }
 
+static void mock_add_to_engine(struct i915_request *rq)
+{
+	lockdep_assert_held(&rq->engine->sched_engine->lock);
+	list_move_tail(&rq->sched.link, &rq->engine->sched_engine->requests);
+}
+
+static void mock_remove_from_engine(struct i915_request *rq)
+{
+	struct intel_engine_cs *engine, *locked;
+
+	/*
+	 * Virtual engines complicate acquiring the engine timeline lock,
+	 * as their rq->engine pointer is not stable until under that
+	 * engine lock. The simple ploy we use is to take the lock then
+	 * check that the rq still belongs to the newly locked engine.
+	 */
+
+	locked = READ_ONCE(rq->engine);
+	spin_lock_irq(&locked->sched_engine->lock);
+	while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) {
+		spin_unlock(&locked->sched_engine->lock);
+		spin_lock(&engine->sched_engine->lock);
+		locked = engine;
+	}
+	list_del_init(&rq->sched.link);
+	spin_unlock_irq(&locked->sched_engine->lock);
+}
+
 static void mock_reset_prepare(struct intel_engine_cs *engine)
 {
 }
@@ -284,7 +313,7 @@ static void mock_engine_release(struct intel_engine_cs *engine)
 	GEM_BUG_ON(timer_pending(&mock->hw_delay));
 
 	i915_sched_engine_put(engine->sched_engine);
-	intel_breadcrumbs_free(engine->breadcrumbs);
+	intel_breadcrumbs_put(engine->breadcrumbs);
 
 	intel_context_unpin(engine->kernel_context);
 	intel_context_put(engine->kernel_context);
@@ -292,6 +321,11 @@ static void mock_engine_release(struct intel_engine_cs *engine)
 	intel_engine_fini_retire(engine);
 }
 
+static void mock_bump_serial(struct intel_engine_cs *engine)
+{
+	engine->serial++;
+}
+
 struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
 				    const char *name,
 				    int id)
@@ -318,9 +352,12 @@ struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
 
 	engine->base.cops = &mock_context_ops;
 	engine->base.request_alloc = mock_request_alloc;
+	engine->base.bump_serial = mock_bump_serial;
 	engine->base.emit_flush = mock_emit_flush;
 	engine->base.emit_fini_breadcrumb = mock_emit_breadcrumb;
 	engine->base.submit_request = mock_submit_request;
+	engine->base.add_active_request = mock_add_to_engine;
+	engine->base.remove_active_request = mock_remove_from_engine;
 
 	engine->base.reset.prepare = mock_reset_prepare;
 	engine->base.reset.rewind = mock_reset_rewind;
@@ -370,7 +407,7 @@ int mock_engine_init(struct intel_engine_cs *engine)
 	return 0;
 
 err_breadcrumbs:
-	intel_breadcrumbs_free(engine->breadcrumbs);
+	intel_breadcrumbs_put(engine->breadcrumbs);
 err_schedule:
 	i915_sched_engine_put(engine->sched_engine);
 	return -ENOMEM;
diff --git a/drivers/gpu/drm/i915/gt/selftest_context.c b/drivers/gpu/drm/i915/gt/selftest_context.c
index 26685b927169..fa7b99a671dd 100644
--- a/drivers/gpu/drm/i915/gt/selftest_context.c
+++ b/drivers/gpu/drm/i915/gt/selftest_context.c
@@ -209,7 +209,13 @@ static int __live_active_context(struct intel_engine_cs *engine)
 	 * This test makes sure that the context is kept alive until a
 	 * subsequent idle-barrier (emitted when the engine wakeref hits 0
 	 * with no more outstanding requests).
+	 *
+	 * In GuC submission mode we don't use idle barriers and we instead
+	 * get a message from the GuC to signal that it is safe to unpin the
+	 * context from memory.
 	 */
+	if (intel_engine_uses_guc(engine))
+		return 0;
 
 	if (intel_engine_pm_is_awake(engine)) {
 		pr_err("%s is awake before starting %s!\n",
@@ -357,7 +363,11 @@ static int __live_remote_context(struct intel_engine_cs *engine)
 	 * on the context image remotely (intel_context_prepare_remote_request),
 	 * which inserts foreign fences into intel_context.active, does not
 	 * clobber the idle-barrier.
+	 *
+	 * In GuC submission mode we don't use idle barriers.
 	 */
+	if (intel_engine_uses_guc(engine))
+		return 0;
 
 	if (intel_engine_pm_is_awake(engine)) {
 		pr_err("%s is awake before starting %s!\n",
diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.c
index 4896e4ccad50..317eebf086c3 100644
--- a/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.c
@@ -405,3 +405,25 @@ void st_engine_heartbeat_enable(struct intel_engine_cs *engine)
 	engine->props.heartbeat_interval_ms =
 		engine->defaults.heartbeat_interval_ms;
 }
+
+void st_engine_heartbeat_disable_no_pm(struct intel_engine_cs *engine)
+{
+	engine->props.heartbeat_interval_ms = 0;
+
+	/*
+	 * Park the heartbeat without holding a long-term PM reference, as
+	 * that would make the engine appear not-idle. Note that if/when
+	 * unpark is called later because a PM reference is acquired, the
+	 * heartbeat still won't start ticking because the interval is 0.
+	 */
+	if (intel_engine_pm_get_if_awake(engine)) {
+		intel_engine_park_heartbeat(engine);
+		intel_engine_pm_put(engine);
+	}
+}
+
+void st_engine_heartbeat_enable_no_pm(struct intel_engine_cs *engine)
+{
+	engine->props.heartbeat_interval_ms =
+		engine->defaults.heartbeat_interval_ms;
+}
diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.h b/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.h
index cd27113d5400..81da2cd8e406 100644
--- a/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.h
+++ b/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.h
@@ -9,6 +9,8 @@
 struct intel_engine_cs;
 
 void st_engine_heartbeat_disable(struct intel_engine_cs *engine);
+void st_engine_heartbeat_disable_no_pm(struct intel_engine_cs *engine);
 void st_engine_heartbeat_enable(struct intel_engine_cs *engine);
+void st_engine_heartbeat_enable_no_pm(struct intel_engine_cs *engine);
 
 #endif /* SELFTEST_ENGINE_HEARTBEAT_H */
diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c
index 73ddc6e14730..59cf8afc6d6f 100644
--- a/drivers/gpu/drm/i915/gt/selftest_execlists.c
+++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c
@@ -3727,7 +3727,7 @@ static int nop_virtual_engine(struct intel_gt *gt,
 	GEM_BUG_ON(!nctx || nctx > ARRAY_SIZE(ve));
 
 	for (n = 0; n < nctx; n++) {
-		ve[n] = intel_execlists_create_virtual(siblings, nsibling);
+		ve[n] = intel_engine_create_virtual(siblings, nsibling);
 		if (IS_ERR(ve[n])) {
 			err = PTR_ERR(ve[n]);
 			nctx = n;
@@ -3923,7 +3923,7 @@ static int mask_virtual_engine(struct intel_gt *gt,
 	 * restrict it to our desired engine within the virtual engine.
 	 */
 
-	ve = intel_execlists_create_virtual(siblings, nsibling);
+	ve = intel_engine_create_virtual(siblings, nsibling);
 	if (IS_ERR(ve)) {
 		err = PTR_ERR(ve);
 		goto out_close;
@@ -4054,7 +4054,7 @@ static int slicein_virtual_engine(struct intel_gt *gt,
 		i915_request_add(rq);
 	}
 
-	ce = intel_execlists_create_virtual(siblings, nsibling);
+	ce = intel_engine_create_virtual(siblings, nsibling);
 	if (IS_ERR(ce)) {
 		err = PTR_ERR(ce);
 		goto out;
@@ -4106,7 +4106,7 @@ static int sliceout_virtual_engine(struct intel_gt *gt,
 
 	/* XXX We do not handle oversubscription and fairness with normal rq */
 	for (n = 0; n < nsibling; n++) {
-		ce = intel_execlists_create_virtual(siblings, nsibling);
+		ce = intel_engine_create_virtual(siblings, nsibling);
 		if (IS_ERR(ce)) {
 			err = PTR_ERR(ce);
 			goto out;
@@ -4208,7 +4208,7 @@ static int preserved_virtual_engine(struct intel_gt *gt,
 	if (err)
 		goto out_scratch;
 
-	ve = intel_execlists_create_virtual(siblings, nsibling);
+	ve = intel_engine_create_virtual(siblings, nsibling);
 	if (IS_ERR(ve)) {
 		err = PTR_ERR(ve);
 		goto out_scratch;
@@ -4348,7 +4348,7 @@ static int reset_virtual_engine(struct intel_gt *gt,
 	if (igt_spinner_init(&spin, gt))
 		return -ENOMEM;
 
-	ve = intel_execlists_create_virtual(siblings, nsibling);
+	ve = intel_engine_create_virtual(siblings, nsibling);
 	if (IS_ERR(ve)) {
 		err = PTR_ERR(ve);
 		goto out_spin;
diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index 7aea10aa1fb4..a93a9b0d258e 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -17,6 +17,8 @@
 #include "selftests/igt_flush_test.h"
 #include "selftests/igt_reset.h"
 #include "selftests/igt_atomic.h"
+#include "selftests/igt_spinner.h"
+#include "selftests/intel_scheduler_helpers.h"
 
 #include "selftests/mock_drm.h"
 
@@ -378,6 +380,7 @@ static int igt_reset_nop(void *arg)
 			ce = intel_context_create(engine);
 			if (IS_ERR(ce)) {
 				err = PTR_ERR(ce);
+				pr_err("[%s] Create context failed: %d!\n", engine->name, err);
 				break;
 			}
 
@@ -387,6 +390,7 @@ static int igt_reset_nop(void *arg)
 				rq = intel_context_create_request(ce);
 				if (IS_ERR(rq)) {
 					err = PTR_ERR(rq);
+					pr_err("[%s] Create request failed: %d!\n", engine->name, err);
 					break;
 				}
 
@@ -401,24 +405,31 @@ static int igt_reset_nop(void *arg)
 		igt_global_reset_unlock(gt);
 
 		if (intel_gt_is_wedged(gt)) {
+			pr_err("[%s] GT is wedged!\n", engine->name);
 			err = -EIO;
 			break;
 		}
 
 		if (i915_reset_count(global) != reset_count + ++count) {
-			pr_err("Full GPU reset not recorded!\n");
+			pr_err("[%s] Reset not recorded: %d vs %d + %d!\n",
+			       engine->name, i915_reset_count(global), reset_count, count);
 			err = -EINVAL;
 			break;
 		}
 
 		err = igt_flush_test(gt->i915);
-		if (err)
+		if (err) {
+			pr_err("[%s] Flush failed: %d!\n", engine->name, err);
 			break;
+		}
 	} while (time_before(jiffies, end_time));
 	pr_info("%s: %d resets\n", __func__, count);
 
-	if (igt_flush_test(gt->i915))
+	if (igt_flush_test(gt->i915)) {
+		pr_err("Post flush failed!\n");
 		err = -EIO;
+	}
+
 	return err;
 }
 
@@ -440,9 +451,19 @@ static int igt_reset_nop_engine(void *arg)
 		IGT_TIMEOUT(end_time);
 		int err;
 
+		if (intel_engine_uses_guc(engine)) {
+			/*
+			 * Engine level resets are triggered by GuC when a
+			 * hang is detected. They can't be triggered by the
+			 * KMD any more. Thus a nop batch cannot be used as
+			 * a reset test.
+			 */
+			continue;
+		}
+
 		ce = intel_context_create(engine);
-		if (IS_ERR(ce))
+		if (IS_ERR(ce)) {
+			pr_err("[%s] Create context failed: %ld!\n", engine->name, PTR_ERR(ce));
 			return PTR_ERR(ce);
+		}
 
 		reset_count = i915_reset_count(global);
 		reset_engine_count = i915_reset_engine_count(global, engine);
@@ -549,9 +570,15 @@ static int igt_reset_fail_engine(void *arg)
 		IGT_TIMEOUT(end_time);
 		int err;
 
+		/* Can't manually break the reset if i915 doesn't perform it */
+		if (intel_engine_uses_guc(engine))
+			continue;
+
 		ce = intel_context_create(engine);
-		if (IS_ERR(ce))
+		if (IS_ERR(ce)) {
+			pr_err("[%s] Create context failed: %ld!\n", engine->name, PTR_ERR(ce));
 			return PTR_ERR(ce);
+		}
 
 		st_engine_heartbeat_disable(engine);
 		set_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
@@ -686,8 +713,12 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active)
 	for_each_engine(engine, gt, id) {
 		unsigned int reset_count, reset_engine_count;
 		unsigned long count;
+		bool using_guc = intel_engine_uses_guc(engine);
 		IGT_TIMEOUT(end_time);
 
+		if (using_guc && !active)
+			continue;
+
 		if (active && !intel_engine_can_store_dword(engine))
 			continue;
 
@@ -705,13 +736,23 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active)
 		set_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
 		count = 0;
 		do {
-			if (active) {
-				struct i915_request *rq;
+			struct i915_request *rq = NULL;
+			struct intel_selftest_saved_policy saved;
+			int err2;
+
+			err = intel_selftest_modify_policy(engine, &saved,
+							   SELFTEST_SCHEDULER_MODIFY_FAST_RESET);
+			if (err) {
+				pr_err("[%s] Modify policy failed: %d!\n", engine->name, err);
+				break;
+			}
 
+			if (active) {
 				rq = hang_create_request(&h, engine);
 				if (IS_ERR(rq)) {
 					err = PTR_ERR(rq);
-					break;
+					pr_err("[%s] Create hang request failed: %d!\n", engine->name, err);
+					goto restore;
 				}
 
 				i915_request_get(rq);
@@ -727,34 +768,58 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active)
 
 					i915_request_put(rq);
 					err = -EIO;
-					break;
+					goto restore;
 				}
+			}
 
-				i915_request_put(rq);
+			if (!using_guc) {
+				err = intel_engine_reset(engine, NULL);
+				if (err) {
+					pr_err("intel_engine_reset(%s) failed, err:%d\n",
+					       engine->name, err);
+					goto skip;
+				}
 			}
 
-			err = intel_engine_reset(engine, NULL);
-			if (err) {
-				pr_err("intel_engine_reset(%s) failed, err:%d\n",
-				       engine->name, err);
-				break;
+			if (rq) {
+				/* Ensure the reset happens and kills the engine */
+				err = intel_selftest_wait_for_rq(rq);
+				if (err)
+					pr_err("[%s] Wait for request %lld:%lld [0x%04X] failed: %d!\n",
+					       engine->name, rq->fence.context, rq->fence.seqno, rq->context->guc_id, err);
 			}
 
+skip:
+			if (rq)
+				i915_request_put(rq);
+
 			if (i915_reset_count(global) != reset_count) {
 				pr_err("Full GPU reset recorded! (engine reset expected)\n");
 				err = -EINVAL;
-				break;
+				goto restore;
 			}
 
-			if (i915_reset_engine_count(global, engine) !=
-			    ++reset_engine_count) {
-				pr_err("%s engine reset not recorded!\n",
-				       engine->name);
-				err = -EINVAL;
-				break;
+			/* GuC based resets are not logged per engine */
+			if (!using_guc) {
+				if (i915_reset_engine_count(global, engine) !=
+				    ++reset_engine_count) {
+					pr_err("%s engine reset not recorded!\n",
+					       engine->name);
+					err = -EINVAL;
+					goto restore;
+				}
 			}
 
 			count++;
+
+restore:
+			err2 = intel_selftest_restore_policy(engine, &saved);
+			if (err2)
+				pr_err("[%s] Restore policy failed: %d!\n", engine->name, err2);
+			if (err == 0)
+				err = err2;
+			if (err)
+				break;
 		} while (time_before(jiffies, end_time));
 		clear_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
 		st_engine_heartbeat_enable(engine);
@@ -765,12 +830,16 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active)
 			break;
 
 		err = igt_flush_test(gt->i915);
-		if (err)
+		if (err) {
+			pr_err("[%s] Flush failed: %d!\n", engine->name, err);
 			break;
+		}
 	}
 
-	if (intel_gt_is_wedged(gt))
+	if (intel_gt_is_wedged(gt)) {
+		pr_err("GT is wedged!\n");
 		err = -EIO;
+	}
 
 	if (active)
 		hang_fini(&h);
@@ -807,7 +876,7 @@ static int active_request_put(struct i915_request *rq)
 	if (!rq)
 		return 0;
 
-	if (i915_request_wait(rq, 0, 5 * HZ) < 0) {
+	if (i915_request_wait(rq, 0, 10 * HZ) < 0) {
 		GEM_TRACE("%s timed out waiting for completion of fence %llx:%lld\n",
 			  rq->engine->name,
 			  rq->fence.context,
@@ -837,6 +906,7 @@ static int active_engine(void *data)
 		ce[count] = intel_context_create(engine);
 		if (IS_ERR(ce[count])) {
 			err = PTR_ERR(ce[count]);
+			pr_err("[%s] Create context #%ld failed: %d!\n", engine->name, count, err);
 			while (--count)
 				intel_context_put(ce[count]);
 			return err;
@@ -852,6 +922,7 @@ static int active_engine(void *data)
 		new = intel_context_create_request(ce[idx]);
 		if (IS_ERR(new)) {
 			err = PTR_ERR(new);
+			pr_err("[%s] Create request #%d failed: %d!\n", engine->name, idx, err);
 			break;
 		}
 
@@ -867,8 +938,10 @@ static int active_engine(void *data)
 		}
 
 		err = active_request_put(old);
-		if (err)
+		if (err) {
+			pr_err("[%s] Request put failed: %d!\n", engine->name, err);
 			break;
+		}
 
 		cond_resched();
 	}
@@ -876,6 +949,9 @@ static int active_engine(void *data)
 	for (count = 0; count < ARRAY_SIZE(rq); count++) {
 		int err__ = active_request_put(rq[count]);
 
+		if (err__)
+			pr_err("[%s] Request put #%ld failed: %d!\n", engine->name, count, err__);
+
 		/* Keep the first error */
 		if (!err)
 			err = err__;
@@ -916,10 +992,13 @@ static int __igt_reset_engines(struct intel_gt *gt,
 		struct active_engine threads[I915_NUM_ENGINES] = {};
 		unsigned long device = i915_reset_count(global);
 		unsigned long count = 0, reported;
+		bool using_guc = intel_engine_uses_guc(engine);
 		IGT_TIMEOUT(end_time);
 
-		if (flags & TEST_ACTIVE &&
-		    !intel_engine_can_store_dword(engine))
+		if (flags & TEST_ACTIVE) {
+			if (!intel_engine_can_store_dword(engine))
+				continue;
+		} else if (using_guc)
 			continue;
 
 		if (!wait_for_idle(engine)) {
@@ -949,6 +1028,7 @@ static int __igt_reset_engines(struct intel_gt *gt,
 					  "igt/%s", other->name);
 			if (IS_ERR(tsk)) {
 				err = PTR_ERR(tsk);
+				pr_err("[%s] Thread spawn failed: %d!\n", engine->name, err);
 				goto unwind;
 			}
 
@@ -958,16 +1038,26 @@ static int __igt_reset_engines(struct intel_gt *gt,
 
 		yield(); /* start all threads before we begin */
 
-		st_engine_heartbeat_disable(engine);
+		st_engine_heartbeat_disable_no_pm(engine);
 		set_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
 		do {
 			struct i915_request *rq = NULL;
+			struct intel_selftest_saved_policy saved;
+			int err2;
+
+			err = intel_selftest_modify_policy(engine, &saved,
+							  SELFTEST_SCHEDULER_MODIFY_FAST_RESET);
+			if (err) {
+				pr_err("[%s] Modify policy failed: %d!\n", engine->name, err);
+				break;
+			}
 
 			if (flags & TEST_ACTIVE) {
 				rq = hang_create_request(&h, engine);
 				if (IS_ERR(rq)) {
 					err = PTR_ERR(rq);
-					break;
+					pr_err("[%s] Create hang request failed: %d!\n", engine->name, err);
+					goto restore;
 				}
 
 				i915_request_get(rq);
@@ -983,15 +1073,27 @@ static int __igt_reset_engines(struct intel_gt *gt,
 
 					i915_request_put(rq);
 					err = -EIO;
-					break;
+					goto restore;
 				}
+			} else {
+				intel_engine_pm_get(engine);
 			}
 
-			err = intel_engine_reset(engine, NULL);
-			if (err) {
-				pr_err("i915_reset_engine(%s:%s): failed, err=%d\n",
-				       engine->name, test_name, err);
-				break;
+			if (!using_guc) {
+				err = intel_engine_reset(engine, NULL);
+				if (err) {
+					pr_err("i915_reset_engine(%s:%s): failed, err=%d\n",
+					       engine->name, test_name, err);
+					goto restore;
+				}
+			}
+
+			if (rq) {
+				/* Ensure the reset happens and kills the engine */
+				err = intel_selftest_wait_for_rq(rq);
+				if (err)
+					pr_err("[%s] Wait for request %lld:%lld [0x%04X] failed: %d!\n",
+					       engine->name, rq->fence.context, rq->fence.seqno, rq->context->guc_id, err);
 			}
 
 			count++;
@@ -999,16 +1101,16 @@ static int __igt_reset_engines(struct intel_gt *gt,
 			if (rq) {
 				if (rq->fence.error != -EIO) {
 					pr_err("i915_reset_engine(%s:%s):"
-					       " failed to reset request %llx:%lld\n",
+					       " failed to reset request %lld:%lld [0x%04X]\n",
 					       engine->name, test_name,
 					       rq->fence.context,
-					       rq->fence.seqno);
+					       rq->fence.seqno, rq->context->guc_id);
 					i915_request_put(rq);
 
 					GEM_TRACE_DUMP();
 					intel_gt_set_wedged(gt);
 					err = -EIO;
-					break;
+					goto restore;
 				}
 
 				if (i915_request_wait(rq, 0, HZ / 5) < 0) {
@@ -1027,12 +1129,15 @@ static int __igt_reset_engines(struct intel_gt *gt,
 					GEM_TRACE_DUMP();
 					intel_gt_set_wedged(gt);
 					err = -EIO;
-					break;
+					goto restore;
 				}
 
 				i915_request_put(rq);
 			}
 
+			if (!(flags & TEST_ACTIVE))
+				intel_engine_pm_put(engine);
+
 			if (!(flags & TEST_SELF) && !wait_for_idle(engine)) {
 				struct drm_printer p =
 					drm_info_printer(gt->i915->drm.dev);
@@ -1044,22 +1149,34 @@ static int __igt_reset_engines(struct intel_gt *gt,
 						  "%s\n", engine->name);
 
 				err = -EIO;
-				break;
+				goto restore;
 			}
+
+restore:
+			err2 = intel_selftest_restore_policy(engine, &saved);
+			if (err2)
+				pr_err("[%s] Restore policy failed: %d!\n", engine->name, err2);
+			if (err == 0)
+				err = err2;
+			if (err)
+				break;
 		} while (time_before(jiffies, end_time));
 		clear_bit(I915_RESET_ENGINE + id, &gt->reset.flags);
-		st_engine_heartbeat_enable(engine);
+		st_engine_heartbeat_enable_no_pm(engine);
 
 		pr_info("i915_reset_engine(%s:%s): %lu resets\n",
 			engine->name, test_name, count);
 
-		reported = i915_reset_engine_count(global, engine);
-		reported -= threads[engine->id].resets;
-		if (reported != count) {
-			pr_err("i915_reset_engine(%s:%s): reset %lu times, but reported %lu\n",
-			       engine->name, test_name, count, reported);
-			if (!err)
-				err = -EINVAL;
+		/* GuC based resets are not logged per engine */
+		if (!using_guc) {
+			reported = i915_reset_engine_count(global, engine);
+			reported -= threads[engine->id].resets;
+			if (reported != count) {
+				pr_err("i915_reset_engine(%s:%s): reset %lu times, but reported %lu\n",
+				       engine->name, test_name, count, reported);
+				if (!err)
+					err = -EINVAL;
+			}
 		}
 
 unwind:
@@ -1078,15 +1195,18 @@ static int __igt_reset_engines(struct intel_gt *gt,
 			}
 			put_task_struct(threads[tmp].task);
 
-			if (other->uabi_class != engine->uabi_class &&
-			    threads[tmp].resets !=
-			    i915_reset_engine_count(global, other)) {
-				pr_err("Innocent engine %s was reset (count=%ld)\n",
-				       other->name,
-				       i915_reset_engine_count(global, other) -
-				       threads[tmp].resets);
-				if (!err)
-					err = -EINVAL;
+			/* GuC based resets are not logged per engine */
+			if (!using_guc) {
+				if (other->uabi_class != engine->uabi_class &&
+				    threads[tmp].resets !=
+				    i915_reset_engine_count(global, other)) {
+					pr_err("Innocent engine %s was reset (count=%ld)\n",
+					       other->name,
+					       i915_reset_engine_count(global, other) -
+					       threads[tmp].resets);
+					if (!err)
+						err = -EINVAL;
+				}
 			}
 		}
 
@@ -1101,8 +1221,10 @@ static int __igt_reset_engines(struct intel_gt *gt,
 			break;
 
 		err = igt_flush_test(gt->i915);
-		if (err)
+		if (err) {
+			pr_err("[%s] Flush failed: %d!\n", engine->name, err);
 			break;
+		}
 	}
 
 	if (intel_gt_is_wedged(gt))
@@ -1180,12 +1302,15 @@ static int igt_reset_wait(void *arg)
 	igt_global_reset_lock(gt);
 
 	err = hang_init(&h, gt);
-	if (err)
+	if (err) {
+		pr_err("[%s] Hang init failed: %d!\n", engine->name, err);
 		goto unlock;
+	}
 
 	rq = hang_create_request(&h, engine);
 	if (IS_ERR(rq)) {
 		err = PTR_ERR(rq);
+		pr_err("[%s] Create hang request failed: %d!\n", engine->name, err);
 		goto fini;
 	}
 
@@ -1310,12 +1435,15 @@ static int __igt_reset_evict_vma(struct intel_gt *gt,
 	/* Check that we can recover an unbind stuck on a hanging request */
 
 	err = hang_init(&h, gt);
-	if (err)
+	if (err) {
+		pr_err("[%s] Hang init failed: %d!\n", engine->name, err);
 		return err;
+	}
 
 	obj = i915_gem_object_create_internal(gt->i915, SZ_1M);
 	if (IS_ERR(obj)) {
 		err = PTR_ERR(obj);
+		pr_err("[%s] Create object failed: %d!\n", engine->name, err);
 		goto fini;
 	}
 
@@ -1330,12 +1458,14 @@ static int __igt_reset_evict_vma(struct intel_gt *gt,
 	arg.vma = i915_vma_instance(obj, vm, NULL);
 	if (IS_ERR(arg.vma)) {
 		err = PTR_ERR(arg.vma);
+		pr_err("[%s] VMA instance failed: %d!\n", engine->name, err);
 		goto out_obj;
 	}
 
 	rq = hang_create_request(&h, engine);
 	if (IS_ERR(rq)) {
 		err = PTR_ERR(rq);
+		pr_err("[%s] Create hang request failed: %d!\n", engine->name, err);
 		goto out_obj;
 	}
 
@@ -1347,6 +1477,7 @@ static int __igt_reset_evict_vma(struct intel_gt *gt,
 	err = i915_vma_pin(arg.vma, 0, 0, pin_flags);
 	if (err) {
 		i915_request_add(rq);
+		pr_err("[%s] VMA pin failed: %d!\n", engine->name, err);
 		goto out_obj;
 	}
 
@@ -1363,8 +1494,14 @@ static int __igt_reset_evict_vma(struct intel_gt *gt,
 	i915_vma_lock(arg.vma);
 	err = i915_request_await_object(rq, arg.vma->obj,
 					flags & EXEC_OBJECT_WRITE);
-	if (err == 0)
+	if (err == 0) {
 		err = i915_vma_move_to_active(arg.vma, rq, flags);
+		if (err)
+			pr_err("[%s] Move to active failed: %d!\n", engine->name, err);
+	} else {
+		pr_err("[%s] Request await failed: %d!\n", engine->name, err);
+	}
+
 	i915_vma_unlock(arg.vma);
 
 	if (flags & EXEC_OBJECT_NEEDS_FENCE)
@@ -1392,6 +1529,7 @@ static int __igt_reset_evict_vma(struct intel_gt *gt,
 	tsk = kthread_run(fn, &arg, "igt/evict_vma");
 	if (IS_ERR(tsk)) {
 		err = PTR_ERR(tsk);
+		pr_err("[%s] Thread spawn failed: %d!\n", engine->name, err);
 		tsk = NULL;
 		goto out_reset;
 	}
@@ -1508,17 +1646,29 @@ static int igt_reset_queue(void *arg)
 		goto unlock;
 
 	for_each_engine(engine, gt, id) {
+		struct intel_selftest_saved_policy saved;
 		struct i915_request *prev;
 		IGT_TIMEOUT(end_time);
 		unsigned int count;
+		bool using_guc = intel_engine_uses_guc(engine);
 
 		if (!intel_engine_can_store_dword(engine))
 			continue;
 
+		if (using_guc) {
+			err = intel_selftest_modify_policy(engine, &saved,
+							  SELFTEST_SCHEDULER_MODIFY_NO_HANGCHECK);
+			if (err) {
+				pr_err("[%s] Modify policy failed: %d!\n", engine->name, err);
+				goto fini;
+			}
+		}
+
 		prev = hang_create_request(&h, engine);
 		if (IS_ERR(prev)) {
 			err = PTR_ERR(prev);
-			goto fini;
+			pr_err("[%s] Create 'prev' hang request failed: %d!\n", engine->name, err);
+			goto restore;
 		}
 
 		i915_request_get(prev);
@@ -1532,7 +1682,8 @@ static int igt_reset_queue(void *arg)
 			rq = hang_create_request(&h, engine);
 			if (IS_ERR(rq)) {
 				err = PTR_ERR(rq);
-				goto fini;
+				pr_err("[%s] Create hang request failed: %d!\n", engine->name, err);
+				goto restore;
 			}
 
 			i915_request_get(rq);
@@ -1557,7 +1708,7 @@ static int igt_reset_queue(void *arg)
 
 				GEM_TRACE_DUMP();
 				intel_gt_set_wedged(gt);
-				goto fini;
+				goto restore;
 			}
 
 			if (!wait_until_running(&h, prev)) {
@@ -1575,7 +1726,7 @@ static int igt_reset_queue(void *arg)
 				intel_gt_set_wedged(gt);
 
 				err = -EIO;
-				goto fini;
+				goto restore;
 			}
 
 			reset_count = fake_hangcheck(gt, BIT(id));
@@ -1586,7 +1737,7 @@ static int igt_reset_queue(void *arg)
 				i915_request_put(rq);
 				i915_request_put(prev);
 				err = -EINVAL;
-				goto fini;
+				goto restore;
 			}
 
 			if (rq->fence.error) {
@@ -1595,7 +1746,7 @@ static int igt_reset_queue(void *arg)
 				i915_request_put(rq);
 				i915_request_put(prev);
 				err = -EINVAL;
-				goto fini;
+				goto restore;
 			}
 
 			if (i915_reset_count(global) == reset_count) {
@@ -1603,7 +1754,7 @@ static int igt_reset_queue(void *arg)
 				i915_request_put(rq);
 				i915_request_put(prev);
 				err = -EINVAL;
-				goto fini;
+				goto restore;
 			}
 
 			i915_request_put(prev);
@@ -1618,9 +1769,22 @@ static int igt_reset_queue(void *arg)
 
 		i915_request_put(prev);
 
-		err = igt_flush_test(gt->i915);
+restore:
+		if (using_guc) {
+			int err2 = intel_selftest_restore_policy(engine, &saved);
+			if (err2)
+				pr_err("%s:%d> [%s] Restore policy failed: %d!\n", __func__, __LINE__, engine->name, err2);
+			if (err == 0)
+				err = err2;
+		}
 		if (err)
+			goto fini;
+
+		err = igt_flush_test(gt->i915);
+		if (err) {
+			pr_err("[%s] Flush failed: %d!\n", engine->name, err);
 			break;
+		}
 	}
 
 fini:
@@ -1653,12 +1817,15 @@ static int igt_handle_error(void *arg)
 		return 0;
 
 	err = hang_init(&h, gt);
-	if (err)
+	if (err) {
+		pr_err("[%s] Hang init failed: %d!\n", engine->name, err);
 		return err;
+	}
 
 	rq = hang_create_request(&h, engine);
 	if (IS_ERR(rq)) {
 		err = PTR_ERR(rq);
+		pr_err("[%s] Create hang request failed: %d!\n", engine->name, err);
 		goto err_fini;
 	}
 
@@ -1743,12 +1910,15 @@ static int igt_atomic_reset_engine(struct intel_engine_cs *engine,
 		return err;
 
 	err = hang_init(&h, engine->gt);
-	if (err)
+	if (err) {
+		pr_err("[%s] Hang init failed: %d!\n", engine->name, err);
 		return err;
+	}
 
 	rq = hang_create_request(&h, engine);
 	if (IS_ERR(rq)) {
 		err = PTR_ERR(rq);
+		pr_err("[%s] Create hang request failed: %d!\n", engine->name, err);
 		goto out;
 	}
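
The selftest changes in this file keep repeating one shape for GuC-backed engines: save and tighten the scheduling policy, run the test body, then restore the policy while keeping the first error seen. A structural sketch of that pattern, not a compilable selftest on its own; submit_hang_and_wait() is a hypothetical placeholder for any individual test body, while the intel_selftest_* helpers are the ones added by this series:

static int run_one_guc_reset_test(struct intel_engine_cs *engine)
{
	struct intel_selftest_saved_policy saved;
	int err, err2;

	/* Tighten the policy so a deliberately hung request is reset
	 * promptly instead of after the default timeouts.
	 */
	err = intel_selftest_modify_policy(engine, &saved,
					   SELFTEST_SCHEDULER_MODIFY_FAST_RESET);
	if (err)
		return err;

	err = submit_hang_and_wait(engine);	/* hypothetical test body */

	/* Always restore the saved policy and keep the first error seen */
	err2 = intel_selftest_restore_policy(engine, &saved);
	if (err == 0)
		err = err2;

	return err;
}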
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
index 8763bbeca0f7..13d25bf2a94a 100644
--- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
@@ -10,6 +10,7 @@
 #include "gem/selftests/mock_context.h"
 #include "selftests/igt_reset.h"
 #include "selftests/igt_spinner.h"
+#include "selftests/intel_scheduler_helpers.h"
 
 struct live_mocs {
 	struct drm_i915_mocs_table table;
@@ -318,7 +319,8 @@ static int live_mocs_clean(void *arg)
 }
 
 static int active_engine_reset(struct intel_context *ce,
-			       const char *reason)
+			       const char *reason,
+			       bool using_guc)
 {
 	struct igt_spinner spin;
 	struct i915_request *rq;
@@ -335,9 +337,13 @@ static int active_engine_reset(struct intel_context *ce,
 	}
 
 	err = request_add_spin(rq, &spin);
-	if (err == 0)
+	if (err == 0 && !using_guc)
 		err = intel_engine_reset(ce->engine, reason);
 
+	/* Ensure the reset happens and kills the engine */
+	if (err == 0)
+		err = intel_selftest_wait_for_rq(rq);
+
 	igt_spinner_end(&spin);
 	igt_spinner_fini(&spin);
 
@@ -345,21 +351,23 @@ static int active_engine_reset(struct intel_context *ce,
 }
 
 static int __live_mocs_reset(struct live_mocs *mocs,
-			     struct intel_context *ce)
+			     struct intel_context *ce, bool using_guc)
 {
 	struct intel_gt *gt = ce->engine->gt;
 	int err;
 
 	if (intel_has_reset_engine(gt)) {
-		err = intel_engine_reset(ce->engine, "mocs");
-		if (err)
-			return err;
-
-		err = check_mocs_engine(mocs, ce);
-		if (err)
-			return err;
+		if (!using_guc) {
+			err = intel_engine_reset(ce->engine, "mocs");
+			if (err)
+				return err;
+
+			err = check_mocs_engine(mocs, ce);
+			if (err)
+				return err;
+		}
 
-		err = active_engine_reset(ce, "mocs");
+		err = active_engine_reset(ce, "mocs", using_guc);
 		if (err)
 			return err;
 
@@ -395,19 +403,33 @@ static int live_mocs_reset(void *arg)
 
 	igt_global_reset_lock(gt);
 	for_each_engine(engine, gt, id) {
+		bool using_guc = intel_engine_uses_guc(engine);
+		struct intel_selftest_saved_policy saved;
 		struct intel_context *ce;
+		int err2;
+
+		err = intel_selftest_modify_policy(engine, &saved,
+						   SELFTEST_SCHEDULER_MODIFY_FAST_RESET);
+		if (err)
+			break;
 
 		ce = mocs_context_create(engine);
 		if (IS_ERR(ce)) {
 			err = PTR_ERR(ce);
-			break;
+			goto restore;
 		}
 
 		intel_engine_pm_get(engine);
-		err = __live_mocs_reset(&mocs, ce);
-		intel_engine_pm_put(engine);
 
+		err = __live_mocs_reset(&mocs, ce, using_guc);
+
+		intel_engine_pm_put(engine);
 		intel_context_put(ce);
+
+restore:
+		err2 = intel_selftest_restore_policy(engine, &saved);
+		if (err == 0)
+			err = err2;
 		if (err)
 			break;
 	}
diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
index 7ebc4edb8ecf..d820f0b41634 100644
--- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
@@ -12,6 +12,7 @@
 #include "selftests/igt_flush_test.h"
 #include "selftests/igt_reset.h"
 #include "selftests/igt_spinner.h"
+#include "selftests/intel_scheduler_helpers.h"
 #include "selftests/mock_drm.h"
 
 #include "gem/selftests/igt_gem_utils.h"
@@ -261,28 +262,34 @@ static int do_engine_reset(struct intel_engine_cs *engine)
 	return intel_engine_reset(engine, "live_workarounds");
 }
 
+static int do_guc_reset(struct intel_engine_cs *engine)
+{
+	/* Currently a no-op as the reset is handled by GuC */
+	return 0;
+}
+
 static int
 switch_to_scratch_context(struct intel_engine_cs *engine,
-			  struct igt_spinner *spin)
+			  struct igt_spinner *spin,
+			  struct i915_request **rq)
 {
 	struct intel_context *ce;
-	struct i915_request *rq;
 	int err = 0;
 
 	ce = intel_context_create(engine);
 	if (IS_ERR(ce))
 		return PTR_ERR(ce);
 
-	rq = igt_spinner_create_request(spin, ce, MI_NOOP);
+	*rq = igt_spinner_create_request(spin, ce, MI_NOOP);
 	intel_context_put(ce);
 
-	if (IS_ERR(rq)) {
+	if (IS_ERR(*rq)) {
 		spin = NULL;
-		err = PTR_ERR(rq);
+		err = PTR_ERR(*rq);
 		goto err;
 	}
 
-	err = request_add_spin(rq, spin);
+	err = request_add_spin(*rq, spin);
 err:
 	if (err && spin)
 		igt_spinner_end(spin);
@@ -296,6 +303,7 @@ static int check_whitelist_across_reset(struct intel_engine_cs *engine,
 {
 	struct intel_context *ce, *tmp;
 	struct igt_spinner spin;
+	struct i915_request *rq;
 	intel_wakeref_t wakeref;
 	int err;
 
@@ -316,13 +324,24 @@ static int check_whitelist_across_reset(struct intel_engine_cs *engine,
 		goto out_spin;
 	}
 
-	err = switch_to_scratch_context(engine, &spin);
+	err = switch_to_scratch_context(engine, &spin, &rq);
 	if (err)
 		goto out_spin;
 
+	/* Ensure the spinner hasn't aborted */
+	if (i915_request_completed(rq)) {
+		pr_err("%s spinner failed to start\n", name);
+		err = -ETIMEDOUT;
+		goto out_spin;
+	}
+
 	with_intel_runtime_pm(engine->uncore->rpm, wakeref)
 		err = reset(engine);
 
+	/* Ensure the reset happens and kills the engine */
+	if (err == 0)
+		err = intel_selftest_wait_for_rq(rq);
+
 	igt_spinner_end(&spin);
 
 	if (err) {
@@ -787,9 +806,27 @@ static int live_reset_whitelist(void *arg)
 			continue;
 
 		if (intel_has_reset_engine(gt)) {
-			err = check_whitelist_across_reset(engine,
-							   do_engine_reset,
-							   "engine");
+			if (intel_engine_uses_guc(engine)) {
+				struct intel_selftest_saved_policy saved;
+				int err2;
+
+				err = intel_selftest_modify_policy(engine, &saved,
+								   SELFTEST_SCHEDULER_MODIFY_FAST_RESET);
+				if (err)
+					goto out;
+
+				err = check_whitelist_across_reset(engine,
+								   do_guc_reset,
+								   "guc");
+
+				err2 = intel_selftest_restore_policy(engine, &saved);
+				if (err == 0)
+					err = err2;
+			} else {
+				err = check_whitelist_across_reset(engine,
+								   do_engine_reset,
+								   "engine");
+			}
 			if (err)
 				goto out;
 		}
@@ -1226,31 +1263,42 @@ live_engine_reset_workarounds(void *arg)
 	reference_lists_init(gt, &lists);
 
 	for_each_engine(engine, gt, id) {
+		struct intel_selftest_saved_policy saved;
+		bool using_guc = intel_engine_uses_guc(engine);
 		bool ok;
+		int ret2;
 
 		pr_info("Verifying after %s reset...\n", engine->name);
+		ret = intel_selftest_modify_policy(engine, &saved,
+						   SELFTEST_SCHEDULER_MODIFY_FAST_RESET);
+		if (ret)
+			break;
+
+
 		ce = intel_context_create(engine);
 		if (IS_ERR(ce)) {
 			ret = PTR_ERR(ce);
-			break;
+			goto restore;
 		}
 
-		ok = verify_wa_lists(gt, &lists, "before reset");
-		if (!ok) {
-			ret = -ESRCH;
-			goto err;
-		}
+		if (!using_guc) {
+			ok = verify_wa_lists(gt, &lists, "before reset");
+			if (!ok) {
+				ret = -ESRCH;
+				goto err;
+			}
 
-		ret = intel_engine_reset(engine, "live_workarounds:idle");
-		if (ret) {
-			pr_err("%s: Reset failed while idle\n", engine->name);
-			goto err;
-		}
+			ret = intel_engine_reset(engine, "live_workarounds:idle");
+			if (ret) {
+				pr_err("%s: Reset failed while idle\n", engine->name);
+				goto err;
+			}
 
-		ok = verify_wa_lists(gt, &lists, "after idle reset");
-		if (!ok) {
-			ret = -ESRCH;
-			goto err;
+			ok = verify_wa_lists(gt, &lists, "after idle reset");
+			if (!ok) {
+				ret = -ESRCH;
+				goto err;
+			}
 		}
 
 		ret = igt_spinner_init(&spin, engine->gt);
@@ -1271,25 +1319,41 @@ live_engine_reset_workarounds(void *arg)
 			goto err;
 		}
 
-		ret = intel_engine_reset(engine, "live_workarounds:active");
-		if (ret) {
-			pr_err("%s: Reset failed on an active spinner\n",
-			       engine->name);
-			igt_spinner_fini(&spin);
-			goto err;
+		/* Ensure the spinner hasn't aborted */
+		if (i915_request_completed(rq)) {
+			ret = -ETIMEDOUT;
+			goto skip;
+		}
+
+		if (!using_guc) {
+			ret = intel_engine_reset(engine, "live_workarounds:active");
+			if (ret) {
+				pr_err("%s: Reset failed on an active spinner\n",
+				       engine->name);
+				igt_spinner_fini(&spin);
+				goto err;
+			}
 		}
 
+		/* Ensure the reset happens and kills the engine */
+		if (ret == 0)
+			ret = intel_selftest_wait_for_rq(rq);
+
+skip:
 		igt_spinner_end(&spin);
 		igt_spinner_fini(&spin);
 
 		ok = verify_wa_lists(gt, &lists, "after busy reset");
-		if (!ok) {
+		if (!ok)
 			ret = -ESRCH;
-			goto err;
-		}
 
 err:
 		intel_context_put(ce);
+
+restore:
+		ret2 = intel_selftest_restore_policy(engine, &saved);
+		if (ret == 0)
+			ret = ret2;
 		if (ret)
 			break;
 	}
diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
index 2d6198e63ebe..596cf4b818e5 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
@@ -124,10 +124,25 @@ enum intel_guc_action {
 	INTEL_GUC_ACTION_FORCE_LOG_BUFFER_FLUSH = 0x302,
 	INTEL_GUC_ACTION_ENTER_S_STATE = 0x501,
 	INTEL_GUC_ACTION_EXIT_S_STATE = 0x502,
+	INTEL_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE = 0x506,
+	INTEL_GUC_ACTION_SCHED_CONTEXT = 0x1000,
+	INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET = 0x1001,
+	INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE = 0x1002,
+	INTEL_GUC_ACTION_SCHED_ENGINE_MODE_SET = 0x1003,
+	INTEL_GUC_ACTION_SCHED_ENGINE_MODE_DONE = 0x1004,
+	INTEL_GUC_ACTION_SET_CONTEXT_PRIORITY = 0x1005,
+	INTEL_GUC_ACTION_SET_CONTEXT_EXECUTION_QUANTUM = 0x1006,
+	INTEL_GUC_ACTION_SET_CONTEXT_PREEMPTION_TIMEOUT = 0x1007,
+	INTEL_GUC_ACTION_CONTEXT_RESET_NOTIFICATION = 0x1008,
+	INTEL_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION = 0x1009,
 	INTEL_GUC_ACTION_SLPC_REQUEST = 0x3003,
 	INTEL_GUC_ACTION_AUTHENTICATE_HUC = 0x4000,
+	INTEL_GUC_ACTION_REGISTER_CONTEXT = 0x4502,
+	INTEL_GUC_ACTION_DEREGISTER_CONTEXT = 0x4503,
 	INTEL_GUC_ACTION_REGISTER_COMMAND_TRANSPORT_BUFFER = 0x4505,
 	INTEL_GUC_ACTION_DEREGISTER_COMMAND_TRANSPORT_BUFFER = 0x4506,
+	INTEL_GUC_ACTION_DEREGISTER_CONTEXT_DONE = 0x4600,
+	INTEL_GUC_ACTION_RESET_CLIENT = 0x5B01,
 	INTEL_GUC_ACTION_LIMIT
 };
 
diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h b/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
index e933ca02d0eb..99e1fad5ca20 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
@@ -79,7 +79,8 @@ static_assert(sizeof(struct guc_ct_buffer_desc) == 64);
  *  +---+-------+--------------------------------------------------------------+
  */
 
-#define GUC_CTB_MSG_MIN_LEN			1u
+#define GUC_CTB_HDR_LEN				1u
+#define GUC_CTB_MSG_MIN_LEN			GUC_CTB_HDR_LEN
 #define GUC_CTB_MSG_MAX_LEN			256u
 #define GUC_CTB_MSG_0_FENCE			(0xffff << 16)
 #define GUC_CTB_MSG_0_FORMAT			(0xf << 12)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index 6661dcb02239..979128e28372 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -180,6 +180,11 @@ void intel_guc_init_early(struct intel_guc *guc)
 	}
 }
 
+void intel_guc_init_late(struct intel_guc *guc)
+{
+	intel_guc_ads_init_late(guc);
+}
+
 static u32 guc_ctl_debug_flags(struct intel_guc *guc)
 {
 	u32 level = intel_guc_log_get_level(&guc->log);
@@ -524,65 +529,35 @@ int intel_guc_auth_huc(struct intel_guc *guc, u32 rsa_offset)
  */
 int intel_guc_suspend(struct intel_guc *guc)
 {
-	struct intel_uncore *uncore = guc_to_gt(guc)->uncore;
 	int ret;
-	u32 status;
 	u32 action[] = {
-		INTEL_GUC_ACTION_ENTER_S_STATE,
-		GUC_POWER_D1, /* any value greater than GUC_POWER_D0 */
+		INTEL_GUC_ACTION_RESET_CLIENT,
 	};
 
-	/*
-	 * If GuC communication is enabled but submission is not supported,
-	 * we do not need to suspend the GuC.
-	 */
-	if (!intel_guc_submission_is_used(guc) || !intel_guc_is_ready(guc))
+	if (!intel_guc_is_ready(guc))
 		return 0;
 
-	/*
-	 * The ENTER_S_STATE action queues the save/restore operation in GuC FW
-	 * and then returns, so waiting on the H2G is not enough to guarantee
-	 * GuC is done. When all the processing is done, GuC writes
-	 * INTEL_GUC_SLEEP_STATE_SUCCESS to scratch register 14, so we can poll
-	 * on that. Note that GuC does not ensure that the value in the register
-	 * is different from INTEL_GUC_SLEEP_STATE_SUCCESS while the action is
-	 * in progress so we need to take care of that ourselves as well.
-	 */
-
-	intel_uncore_write(uncore, SOFT_SCRATCH(14),
-			   INTEL_GUC_SLEEP_STATE_INVALID_MASK);
-
-	ret = intel_guc_send(guc, action, ARRAY_SIZE(action));
-	if (ret)
-		return ret;
-
-	ret = __intel_wait_for_register(uncore, SOFT_SCRATCH(14),
-					INTEL_GUC_SLEEP_STATE_INVALID_MASK,
-					0, 0, 10, &status);
-	if (ret)
-		return ret;
-
-	if (status != INTEL_GUC_SLEEP_STATE_SUCCESS) {
-		DRM_ERROR("GuC failed to change sleep state. "
-			  "action=0x%x, err=%u\n",
-			  action[0], status);
-		return -EIO;
+	if (intel_guc_submission_is_used(guc)) {
+		/*
+		 * This H2G MMIO command tears down the GuC in two steps. First it will
+		 * generate a G2H CTB for every active context indicating a reset. In
+		 * practice the i915 shouldn't ever get a G2H as suspend should only be
+		 * called when the GPU is idle. Next, it tears down the CTBs and this
+		 * H2G MMIO command completes.
+		 *
+		 * Don't abort on a failure code from the GuC. Keep going and do the
+		 * clean up in sanitize() and re-initialisation on resume, and hopefully
+		 * the error here won't be problematic.
+		 */
+		ret = intel_guc_send_mmio(guc, action, ARRAY_SIZE(action), NULL, 0);
+		if (ret)
+			DRM_ERROR("GuC suspend: RESET_CLIENT action failed with error %d!\n", ret);
 	}
 
-	return 0;
-}
+	/* Signal that the GuC isn't running. */
+	intel_guc_sanitize(guc);
 
-/**
- * intel_guc_reset_engine() - ask GuC to reset an engine
- * @guc:	intel_guc structure
- * @engine:	engine to be reset
- */
-int intel_guc_reset_engine(struct intel_guc *guc,
-			   struct intel_engine_cs *engine)
-{
-	/* XXX: to be implemented with submission interface rework */
-
-	return -ENODEV;
+	return 0;
 }
 
 /**
@@ -591,7 +566,12 @@ int intel_guc_reset_engine(struct intel_guc *guc,
  */
 int intel_guc_resume(struct intel_guc *guc)
 {
-	/* XXX: to be implemented with submission interface rework */
+	/*
+	 * NB: This function can still be called even if GuC submission is
+	 * disabled, e.g. if GuC is enabled for HuC authentication only. Thus,
+	 * if any code is later added here, it must support doing nothing
+	 * if submission is disabled (as per intel_guc_suspend).
+	 */
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 4abc59f6f3cd..5d94cf482516 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -6,6 +6,9 @@
 #ifndef _INTEL_GUC_H_
 #define _INTEL_GUC_H_
 
+#include <linux/xarray.h>
+#include <linux/delay.h>
+
 #include "intel_uncore.h"
 #include "intel_guc_fw.h"
 #include "intel_guc_fwif.h"
@@ -28,23 +31,43 @@ struct intel_guc {
 	struct intel_guc_log log;
 	struct intel_guc_ct ct;
 
+	/* Global engine used to submit requests to GuC */
+	struct i915_sched_engine *sched_engine;
+	struct i915_request *stalled_request;
+
 	/* intel_guc_recv interrupt related state */
 	spinlock_t irq_lock;
 	unsigned int msg_enabled_mask;
 
+	atomic_t outstanding_submission_g2h;
+
 	struct {
 		void (*reset)(struct intel_guc *guc);
 		void (*enable)(struct intel_guc *guc);
 		void (*disable)(struct intel_guc *guc);
 	} interrupts;
 
+	/*
+	 * contexts_lock protects the pool of free guc ids and a linked list of
+	 * guc ids available to be stolen
+	 */
+	spinlock_t contexts_lock;
+	struct ida guc_ids;
+	struct list_head guc_id_list;
+
+	bool submission_supported;
 	bool submission_selected;
 
 	struct i915_vma *ads_vma;
 	struct __guc_ads_blob *ads_blob;
+	u32 ads_regset_size;
+	u32 ads_golden_ctxt_size;
 
-	struct i915_vma *stage_desc_pool;
-	void *stage_desc_pool_vaddr;
+	struct i915_vma *lrc_desc_pool;
+	void *lrc_desc_pool_vaddr;
+
+	/* guc_id to intel_context lookup */
+	struct xarray context_lookup;
 
 	/* Control params for fw initialization */
 	u32 params[GUC_CTL_MAX_DWORDS];
@@ -74,7 +97,15 @@ static inline struct intel_guc *log_to_guc(struct intel_guc_log *log)
 static
 inline int intel_guc_send(struct intel_guc *guc, const u32 *action, u32 len)
 {
-	return intel_guc_ct_send(&guc->ct, action, len, NULL, 0);
+	return intel_guc_ct_send(&guc->ct, action, len, NULL, 0, 0);
+}
+
+static
+inline int intel_guc_send_nb(struct intel_guc *guc, const u32 *action, u32 len,
+			     u32 g2h_len_dw)
+{
+	return intel_guc_ct_send(&guc->ct, action, len, NULL, 0,
+				 MAKE_SEND_FLAGS(g2h_len_dw));
 }
 
 static inline int
@@ -82,7 +113,36 @@ intel_guc_send_and_receive(struct intel_guc *guc, const u32 *action, u32 len,
 			   u32 *response_buf, u32 response_buf_size)
 {
 	return intel_guc_ct_send(&guc->ct, action, len,
-				 response_buf, response_buf_size);
+				 response_buf, response_buf_size, 0);
+}
+
+static inline int intel_guc_send_busy_loop(struct intel_guc *guc,
+					   const u32 *action,
+					   u32 len,
+					   u32 g2h_len_dw,
+					   bool loop)
+{
+	int err;
+	unsigned int sleep_period_ms = 1;
+	bool not_atomic = !in_atomic() && !irqs_disabled();
+
+	/* No sleeping with spin locks, just busy loop */
+	might_sleep_if(loop && not_atomic);
+
+retry:
+	err = intel_guc_send_nb(guc, action, len, g2h_len_dw);
+	if (unlikely(err == -EBUSY && loop)) {
+		if (likely(not_atomic)) {
+			if (msleep_interruptible(sleep_period_ms))
+				return -EINTR;
+			sleep_period_ms = sleep_period_ms << 1;
+		} else {
+			cpu_relax();
+		}
+		goto retry;
+	}
+
+	return err;
 }
 
 static inline void intel_guc_to_host_event_handler(struct intel_guc *guc)
@@ -118,6 +178,7 @@ static inline u32 intel_guc_ggtt_offset(struct intel_guc *guc,
 }
 
 void intel_guc_init_early(struct intel_guc *guc);
+void intel_guc_init_late(struct intel_guc *guc);
 void intel_guc_init_send_regs(struct intel_guc *guc);
 void intel_guc_write_params(struct intel_guc *guc);
 int intel_guc_init(struct intel_guc *guc);
@@ -160,9 +221,25 @@ static inline bool intel_guc_is_ready(struct intel_guc *guc)
 	return intel_guc_is_fw_running(guc) && intel_guc_ct_enabled(&guc->ct);
 }
 
+static inline void intel_guc_reset_interrupts(struct intel_guc *guc)
+{
+	guc->interrupts.reset(guc);
+}
+
+static inline void intel_guc_enable_interrupts(struct intel_guc *guc)
+{
+	guc->interrupts.enable(guc);
+}
+
+static inline void intel_guc_disable_interrupts(struct intel_guc *guc)
+{
+	guc->interrupts.disable(guc);
+}
+
 static inline int intel_guc_sanitize(struct intel_guc *guc)
 {
 	intel_uc_fw_sanitize(&guc->fw);
+	intel_guc_disable_interrupts(guc);
 	intel_guc_ct_sanitize(&guc->ct);
 	guc->mmio_msg = 0;
 
@@ -183,8 +260,27 @@ static inline void intel_guc_disable_msg(struct intel_guc *guc, u32 mask)
 	spin_unlock_irq(&guc->irq_lock);
 }
 
-int intel_guc_reset_engine(struct intel_guc *guc,
-			   struct intel_engine_cs *engine);
+int intel_guc_wait_for_idle(struct intel_guc *guc, long timeout);
+
+int intel_guc_deregister_done_process_msg(struct intel_guc *guc,
+					  const u32 *msg, u32 len);
+int intel_guc_sched_done_process_msg(struct intel_guc *guc,
+				     const u32 *msg, u32 len);
+int intel_guc_context_reset_process_msg(struct intel_guc *guc,
+					const u32 *msg, u32 len);
+int intel_guc_engine_failure_process_msg(struct intel_guc *guc,
+					 const u32 *msg, u32 len);
+
+void intel_guc_find_hung_context(struct intel_engine_cs *engine);
+
+int intel_guc_global_policies_update(struct intel_guc *guc);
+
+void intel_guc_context_ban(struct intel_context *ce, struct i915_request *rq);
+
+void intel_guc_submission_reset_prepare(struct intel_guc *guc);
+void intel_guc_submission_reset(struct intel_guc *guc, bool stalled);
+void intel_guc_submission_reset_finish(struct intel_guc *guc);
+void intel_guc_submission_cancel_requests(struct intel_guc *guc);
 
 void intel_guc_load_status(struct intel_guc *guc, struct drm_printer *p);
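
For reference, intel_guc_send_busy_loop() above retries an H2G send that returns -EBUSY, doubling a 1 ms sleep period between attempts when sleeping is allowed and simply spinning otherwise. A standalone user-space sketch of just that retry policy; send_once() and main() are illustrative stand-ins, not part of the driver:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for intel_guc_send_nb(): pretend the CTB is busy a few times. */
static int send_once(void)
{
	static int busy_left = 3;

	return busy_left-- > 0 ? -EBUSY : 0;
}

static int send_busy_loop(bool may_sleep)
{
	unsigned int sleep_period_ms = 1;
	int err;

	for (;;) {
		err = send_once();
		if (err != -EBUSY)
			return err;

		if (may_sleep) {
			usleep(sleep_period_ms * 1000);
			sleep_period_ms <<= 1;	/* backoff: 1, 2, 4, ... ms */
		}
		/* otherwise retry immediately (cpu_relax() in the patch) */
	}
}

int main(void)
{
	printf("send returned %d\n", send_busy_loop(true));
	return 0;
}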
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
index b82145652d57..dfaeafc512fb 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
@@ -3,8 +3,11 @@
  * Copyright © 2014-2019 Intel Corporation
  */
 
+#include <linux/bsearch.h>
+
 #include "gt/intel_gt.h"
 #include "gt/intel_lrc.h"
+#include "gt/shmem_utils.h"
 #include "intel_guc_ads.h"
 #include "intel_guc_fwif.h"
 #include "intel_uc.h"
@@ -23,6 +26,15 @@
  *      | guc_policies                          |
  *      +---------------------------------------+
  *      | guc_gt_system_info                    |
+ *      +---------------------------------------+ <== static
+ *      | guc_mmio_reg[countA] (engine 0.0)     |
+ *      | guc_mmio_reg[countB] (engine 0.1)     |
+ *      | guc_mmio_reg[countC] (engine 1.0)     |
+ *      |   ...                                 |
+ *      +---------------------------------------+ <== dynamic
+ *      | padding                               |
+ *      +---------------------------------------+ <== 4K aligned
+ *      | golden contexts                       |
  *      +---------------------------------------+
  *      | padding                               |
  *      +---------------------------------------+ <== 4K aligned
@@ -35,16 +47,49 @@ struct __guc_ads_blob {
 	struct guc_ads ads;
 	struct guc_policies policies;
 	struct guc_gt_system_info system_info;
+	/* From here on, location is dynamic! Refer to above diagram. */
+	struct guc_mmio_reg regset[0];
 } __packed;
 
+static u32 guc_ads_regset_size(struct intel_guc *guc)
+{
+	GEM_BUG_ON(!guc->ads_regset_size);
+	return guc->ads_regset_size;
+}
+
+static u32 guc_ads_golden_ctxt_size(struct intel_guc *guc)
+{
+	return PAGE_ALIGN(guc->ads_golden_ctxt_size);
+}
+
 static u32 guc_ads_private_data_size(struct intel_guc *guc)
 {
 	return PAGE_ALIGN(guc->fw.private_data_size);
 }
 
+static u32 guc_ads_regset_offset(struct intel_guc *guc)
+{
+	return offsetof(struct __guc_ads_blob, regset);
+}
+
+static u32 guc_ads_golden_ctxt_offset(struct intel_guc *guc)
+{
+	u32 offset;
+
+	offset = guc_ads_regset_offset(guc) +
+		 guc_ads_regset_size(guc);
+
+	return PAGE_ALIGN(offset);
+}
+
 static u32 guc_ads_private_data_offset(struct intel_guc *guc)
 {
-	return PAGE_ALIGN(sizeof(struct __guc_ads_blob));
+	u32 offset;
+
+	offset = guc_ads_golden_ctxt_offset(guc) +
+		 guc_ads_golden_ctxt_size(guc);
+
+	return PAGE_ALIGN(offset);
 }
 
 static u32 guc_ads_blob_size(struct intel_guc *guc)
@@ -53,15 +98,67 @@ static u32 guc_ads_blob_size(struct intel_guc *guc)
 	       guc_ads_private_data_size(guc);
 }
 
-static void guc_policies_init(struct guc_policies *policies)
+static void guc_policies_init(struct intel_guc *guc, struct guc_policies *policies)
 {
+	struct intel_gt *gt = guc_to_gt(guc);
+	struct drm_i915_private *i915 = gt->i915;
+
 	policies->dpc_promote_time = GLOBAL_POLICY_DEFAULT_DPC_PROMOTE_TIME_US;
 	policies->max_num_work_items = GLOBAL_POLICY_MAX_NUM_WI;
-	/* Disable automatic resets as not yet supported. */
-	policies->global_flags = GLOBAL_POLICY_DISABLE_ENGINE_RESET;
+
+	policies->global_flags = 0;
+	if (i915->params.reset < 2)
+		policies->global_flags |= GLOBAL_POLICY_DISABLE_ENGINE_RESET;
+
 	policies->is_valid = 1;
 }
 
+void intel_guc_log_policy_info(struct intel_guc *guc, struct drm_printer *dp)
+{
+	struct __guc_ads_blob *blob = guc->ads_blob;
+
+	if (unlikely(!blob))
+		return;
+
+	drm_printf(dp, "Global scheduling policies:\n");
+	drm_printf(dp, "  DPC promote time   = %u\n", blob->policies.dpc_promote_time);
+	drm_printf(dp, "  Max num work items = %u\n", blob->policies.max_num_work_items);
+	drm_printf(dp, "  Flags              = %u\n", blob->policies.global_flags);
+}
+
+static int guc_action_policies_update(struct intel_guc *guc, u32 policy_offset)
+{
+	u32 action[] = {
+		INTEL_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE,
+		policy_offset
+	};
+
+	return intel_guc_send_busy_loop(guc, action, ARRAY_SIZE(action), 0, true);
+}
+
+int intel_guc_global_policies_update(struct intel_guc *guc)
+{
+	struct __guc_ads_blob *blob = guc->ads_blob;
+	struct intel_gt *gt = guc_to_gt(guc);
+	intel_wakeref_t wakeref;
+	int ret;
+
+	if (!blob)
+		return -ENOTSUPP;
+
+	GEM_BUG_ON(!blob->ads.scheduler_policies);
+
+	guc_policies_init(guc, &blob->policies);
+
+	if (!intel_guc_is_ready(guc))
+		return 0;
+
+	with_intel_runtime_pm(&gt->i915->runtime_pm, wakeref)
+		ret = guc_action_policies_update(guc, blob->ads.scheduler_policies);
+
+	return ret;
+}
+
 static void guc_mapping_table_init(struct intel_gt *gt,
 				   struct guc_gt_system_info *system_info)
 {
@@ -84,53 +181,321 @@ static void guc_mapping_table_init(struct intel_gt *gt,
 }
 
 /*
- * The first 80 dwords of the register state context, containing the
- * execlists and ppgtt registers.
+ * The save/restore register list must be pre-calculated to a temporary
+ * buffer of driver defined size before it can be generated in place
+ * inside the ADS.
  */
-#define LR_HW_CONTEXT_SIZE	(80 * sizeof(u32))
+#define MAX_MMIO_REGS	128	/* Arbitrary size, increase as needed */
+struct temp_regset {
+	struct guc_mmio_reg *registers;
+	u32 used;
+	u32 size;
+};
 
-static void __guc_ads_init(struct intel_guc *guc)
+static int guc_mmio_reg_cmp(const void *a, const void *b)
+{
+	const struct guc_mmio_reg *ra = a;
+	const struct guc_mmio_reg *rb = b;
+
+	return (int)ra->offset - (int)rb->offset;
+}
+
+static void guc_mmio_reg_add(struct temp_regset *regset,
+			     u32 offset, u32 flags)
+{
+	u32 count = regset->used;
+	struct guc_mmio_reg reg = {
+		.offset = offset,
+		.flags = flags,
+	};
+	struct guc_mmio_reg *slot;
+
+	GEM_BUG_ON(count >= regset->size);
+
+	/*
+	 * The mmio list is built using separate lists within the driver.
+	 * It's possible that at some point we may attempt to add the same
+	 * register more than once. Do not consider this an error; silently
+	 * move on if the register is already in the list.
+	 */
+	if (bsearch(&reg, regset->registers, count,
+		    sizeof(reg), guc_mmio_reg_cmp))
+		return;
+
+	slot = &regset->registers[count];
+	regset->used++;
+	*slot = reg;
+
+	while (slot-- > regset->registers) {
+		GEM_BUG_ON(slot[0].offset == slot[1].offset);
+		if (slot[1].offset > slot[0].offset)
+			break;
+
+		swap(slot[1], slot[0]);
+	}
+}
+
+#define GUC_MMIO_REG_ADD(regset, reg, masked) \
+	guc_mmio_reg_add(regset, \
+			 i915_mmio_reg_offset((reg)), \
+			 (masked) ? GUC_REGSET_MASKED : 0)
+
+static void guc_mmio_regset_init(struct temp_regset *regset,
+				 struct intel_engine_cs *engine)
+{
+	const u32 base = engine->mmio_base;
+	struct i915_wa_list *wal = &engine->wa_list;
+	struct i915_wa *wa;
+	unsigned int i;
+
+	regset->used = 0;
+
+	GUC_MMIO_REG_ADD(regset, RING_MODE_GEN7(base), true);
+	GUC_MMIO_REG_ADD(regset, RING_HWS_PGA(base), false);
+	GUC_MMIO_REG_ADD(regset, RING_IMR(base), false);
+
+	for (i = 0, wa = wal->list; i < wal->count; i++, wa++)
+		GUC_MMIO_REG_ADD(regset, wa->reg, wa->masked_reg);
+
+	/* Be extra paranoid and include all whitelist registers. */
+	for (i = 0; i < RING_MAX_NONPRIV_SLOTS; i++)
+		GUC_MMIO_REG_ADD(regset,
+				 RING_FORCE_TO_NONPRIV(base, i),
+				 false);
+
+	/* add in local MOCS registers */
+	for (i = 0; i < GEN9_LNCFCMOCS_REG_COUNT; i++)
+		GUC_MMIO_REG_ADD(regset, GEN9_LNCFCMOCS(i), false);
+}
+
+static int guc_mmio_reg_state_query(struct intel_guc *guc)
 {
 	struct intel_gt *gt = guc_to_gt(guc);
-	struct drm_i915_private *i915 = gt->i915;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	struct temp_regset temp_set;
+	u32 total;
+
+	/*
+	 * Need to actually build the list in order to filter out
+	 * duplicates and other such data dependent constructions.
+	 */
+	temp_set.size = MAX_MMIO_REGS;
+	temp_set.registers = kmalloc_array(temp_set.size,
+					  sizeof(*temp_set.registers),
+					  GFP_KERNEL);
+	if (!temp_set.registers)
+		return -ENOMEM;
+
+	total = 0;
+	for_each_engine(engine, gt, id) {
+		guc_mmio_regset_init(&temp_set, engine);
+		total += temp_set.used;
+	}
+
+	kfree(temp_set.registers);
+
+	return total * sizeof(struct guc_mmio_reg);
+}
+
+static void guc_mmio_reg_state_init(struct intel_guc *guc,
+				    struct __guc_ads_blob *blob)
+{
+	struct intel_gt *gt = guc_to_gt(guc);
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	struct temp_regset temp_set;
+	struct guc_mmio_reg_set *ads_reg_set;
+	u32 addr_ggtt, offset;
+	u8 guc_class;
+
+	offset = guc_ads_regset_offset(guc);
+	addr_ggtt = intel_guc_ggtt_offset(guc, guc->ads_vma) + offset;
+	temp_set.registers = (struct guc_mmio_reg *) (((u8 *) blob) + offset);
+	temp_set.size = guc->ads_regset_size / sizeof(temp_set.registers[0]);
+
+	for_each_engine(engine, gt, id) {
+		/* Class index is checked in class converter */
+		GEM_BUG_ON(engine->instance >= GUC_MAX_INSTANCES_PER_CLASS);
+
+		guc_class = engine_class_to_guc_class(engine->class);
+		ads_reg_set = &blob->ads.reg_state_list[guc_class][engine->instance];
+
+		guc_mmio_regset_init(&temp_set, engine);
+		if (!temp_set.used) {
+			ads_reg_set->address = 0;
+			ads_reg_set->count = 0;
+			continue;
+		}
+
+		ads_reg_set->address = addr_ggtt;
+		ads_reg_set->count = temp_set.used;
+
+		temp_set.size -= temp_set.used;
+		temp_set.registers += temp_set.used;
+		addr_ggtt += temp_set.used * sizeof(struct guc_mmio_reg);
+	}
+
+	GEM_BUG_ON(temp_set.size);
+}
+
+static void fill_engine_enable_masks(struct intel_gt *gt,
+				     struct guc_gt_system_info *info)
+{
+	info->engine_enabled_masks[GUC_RENDER_CLASS] = 1;
+	info->engine_enabled_masks[GUC_BLITTER_CLASS] = 1;
+	info->engine_enabled_masks[GUC_VIDEO_CLASS] = VDBOX_MASK(gt);
+	info->engine_enabled_masks[GUC_VIDEOENHANCE_CLASS] = VEBOX_MASK(gt);
+}
+
+/* Skip execlist and PPGTT registers */
+#define LR_HW_CONTEXT_SIZE      (80 * sizeof(u32))
+#define SKIP_SIZE               (LRC_PPHWSP_SZ * PAGE_SIZE + LR_HW_CONTEXT_SIZE)
+
+static int guc_prep_golden_context(struct intel_guc *guc,
+				   struct __guc_ads_blob *blob)
+{
+	struct intel_gt *gt = guc_to_gt(guc);
+	u32 addr_ggtt, offset;
+	u32 total_size = 0, alloc_size, real_size;
+	u8 engine_class, guc_class;
+	struct guc_gt_system_info *info, local_info;
+
+	/*
+	 * Reserve the memory for the golden contexts and point GuC at it but
+	 * leave it empty for now. The context data will be filled in later
+	 * once there is something available to put there.
+	 *
+	 * Note that the HWSP and ring context are not included.
+	 *
+	 * Note also that the storage must be pinned in the GGTT, so that the
+	 * address won't change after GuC has been told where to find it. The
+	 * GuC will also validate that the LRC base + size fall within the
+	 * allowed GGTT range.
+	 */
+	if (blob) {
+		offset = guc_ads_golden_ctxt_offset(guc);
+		addr_ggtt = intel_guc_ggtt_offset(guc, guc->ads_vma) + offset;
+		info = &blob->system_info;
+	} else {
+		memset(&local_info, 0, sizeof(local_info));
+		info = &local_info;
+		fill_engine_enable_masks(gt, info);
+	}
+
+	for (engine_class = 0; engine_class <= MAX_ENGINE_CLASS; ++engine_class) {
+		if (engine_class == OTHER_CLASS)
+			continue;
+
+		guc_class = engine_class_to_guc_class(engine_class);
+
+		if (!info->engine_enabled_masks[guc_class])
+			continue;
+
+		real_size = intel_engine_context_size(gt, engine_class);
+		alloc_size = PAGE_ALIGN(real_size);
+		total_size += alloc_size;
+
+		if (!blob)
+			continue;
+
+		blob->ads.eng_state_size[guc_class] = real_size;
+		blob->ads.golden_context_lrca[guc_class] = addr_ggtt;
+		addr_ggtt += alloc_size;
+	}
+
+	if (!blob)
+		return total_size;
+
+	GEM_BUG_ON(guc->ads_golden_ctxt_size != total_size);
+	return total_size;
+}
+
+static struct intel_engine_cs *find_engine_state(struct intel_gt *gt, u8 engine_class)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	for_each_engine(engine, gt, id) {
+		if (engine->class != engine_class)
+			continue;
+
+		if (!engine->default_state)
+			continue;
+
+		return engine;
+	}
+
+	return NULL;
+}
+
+static void guc_init_golden_context(struct intel_guc *guc)
+{
 	struct __guc_ads_blob *blob = guc->ads_blob;
-	const u32 skipped_size = LRC_PPHWSP_SZ * PAGE_SIZE + LR_HW_CONTEXT_SIZE;
-	u32 base;
+	struct intel_engine_cs *engine;
+	struct intel_gt *gt = guc_to_gt(guc);
+	u32 addr_ggtt, offset;
+	u32 total_size = 0, alloc_size, real_size;
 	u8 engine_class, guc_class;
+	u8 *ptr;
 
-	/* GuC scheduling policies */
-	guc_policies_init(&blob->policies);
+	if (!intel_uc_uses_guc_submission(&gt->uc))
+		return;
+
+	GEM_BUG_ON(!blob);
 
 	/*
-	 * GuC expects a per-engine-class context image and size
-	 * (minus hwsp and ring context). The context image will be
-	 * used to reinitialize engines after a reset. It must exist
-	 * and be pinned in the GGTT, so that the address won't change after
-	 * we have told GuC where to find it. The context size will be used
-	 * to validate that the LRC base + size fall within allowed GGTT.
+	 * Go back and fill in the golden context data now that it is
+	 * available.
 	 */
+	offset = guc_ads_golden_ctxt_offset(guc);
+	addr_ggtt = intel_guc_ggtt_offset(guc, guc->ads_vma) + offset;
+	ptr = ((u8 *) blob) + offset;
+
 	for (engine_class = 0; engine_class <= MAX_ENGINE_CLASS; ++engine_class) {
 		if (engine_class == OTHER_CLASS)
 			continue;
 
 		guc_class = engine_class_to_guc_class(engine_class);
 
-		/*
-		 * TODO: Set context pointer to default state to allow
-		 * GuC to re-init guilty contexts after internal reset.
-		 */
-		blob->ads.golden_context_lrca[guc_class] = 0;
-		blob->ads.eng_state_size[guc_class] =
-			intel_engine_context_size(guc_to_gt(guc),
-						  engine_class) -
-			skipped_size;
+		if (!blob->system_info.engine_enabled_masks[guc_class])
+			continue;
+
+		real_size = intel_engine_context_size(gt, engine_class);
+		alloc_size = PAGE_ALIGN(real_size);
+		total_size += alloc_size;
+
+		engine = find_engine_state(gt, engine_class);
+		if (!engine) {
+			drm_err(&gt->i915->drm, "No engine state recorded for class %d!\n", engine_class);
+			blob->ads.eng_state_size[guc_class] = 0;
+			blob->ads.golden_context_lrca[guc_class] = 0;
+			continue;
+		}
+
+		GEM_BUG_ON(blob->ads.eng_state_size[guc_class] != real_size);
+		GEM_BUG_ON(blob->ads.golden_context_lrca[guc_class] != addr_ggtt);
+		addr_ggtt += alloc_size;
+
+		shmem_read(engine->default_state, SKIP_SIZE, ptr + SKIP_SIZE, real_size);
+		ptr += alloc_size;
 	}
 
+	GEM_BUG_ON(guc->ads_golden_ctxt_size != total_size);
+}
+
+static void __guc_ads_init(struct intel_guc *guc)
+{
+	struct intel_gt *gt = guc_to_gt(guc);
+	struct drm_i915_private *i915 = gt->i915;
+	struct __guc_ads_blob *blob = guc->ads_blob;
+	u32 base;
+
+	/* GuC scheduling policies */
+	guc_policies_init(guc, &blob->policies);
+
 	/* System info */
-	blob->system_info.engine_enabled_masks[GUC_RENDER_CLASS] = 1;
-	blob->system_info.engine_enabled_masks[GUC_BLITTER_CLASS] = 1;
-	blob->system_info.engine_enabled_masks[GUC_VIDEO_CLASS] = VDBOX_MASK(gt);
-	blob->system_info.engine_enabled_masks[GUC_VIDEOENHANCE_CLASS] = VEBOX_MASK(gt);
+	fill_engine_enable_masks(gt, &blob->system_info);
 
 	blob->system_info.generic_gt_sysinfo[GUC_GENERIC_GT_SYSINFO_SLICE_ENABLED] =
 		hweight8(gt->info.sseu.slice_mask);
@@ -145,6 +510,9 @@ static void __guc_ads_init(struct intel_guc *guc)
 			 GEN12_DOORBELLS_PER_SQIDI) + 1;
 	}
 
+	/* Golden contexts for re-initialising after a watchdog reset */
+	guc_prep_golden_context(guc, blob);
+
 	guc_mapping_table_init(guc_to_gt(guc), &blob->system_info);
 
 	base = intel_guc_ggtt_offset(guc, guc->ads_vma);
@@ -153,6 +521,9 @@ static void __guc_ads_init(struct intel_guc *guc)
 	blob->ads.scheduler_policies = base + ptr_offset(blob, policies);
 	blob->ads.gt_system_info = base + ptr_offset(blob, system_info);
 
+	/* MMIO save/restore list */
+	guc_mmio_reg_state_init(guc, blob);
+
 	/* Private Data */
 	blob->ads.private_data = base + guc_ads_private_data_offset(guc);
 
@@ -173,6 +544,19 @@ int intel_guc_ads_create(struct intel_guc *guc)
 
 	GEM_BUG_ON(guc->ads_vma);
 
+	/* Need to calculate the reg state size dynamically: */
+	ret = guc_mmio_reg_state_query(guc);
+	if (ret < 0)
+		return ret;
+	guc->ads_regset_size = ret;
+
+	/* Likewise the golden contexts: */
+	ret = guc_prep_golden_context(guc, NULL);
+	if (ret < 0)
+		return ret;
+	guc->ads_golden_ctxt_size = ret;
+
+	/* Now the total size can be determined: */
 	size = guc_ads_blob_size(guc);
 
 	ret = intel_guc_allocate_and_map_vma(guc, size, &guc->ads_vma,
@@ -185,6 +569,18 @@ int intel_guc_ads_create(struct intel_guc *guc)
 	return 0;
 }
 
+void intel_guc_ads_init_late(struct intel_guc *guc)
+{
+	/*
+	 * The golden context setup requires the saved engine state from
+	 * __engines_record_defaults(). However, that requires engines to be
+	 * operational which means the ADS must already have been configured.
+	 * Fortunately, the golden context state is not needed until a hang
+	 * occurs, so it can be filled in during this late init phase.
+	 */
+	guc_init_golden_context(guc);
+}
+
 void intel_guc_ads_destroy(struct intel_guc *guc)
 {
 	i915_vma_unpin_and_release(&guc->ads_vma, I915_VMA_RELEASE_MAP);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h
index b00d3ae1113a..dac0dc32da34 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h
@@ -7,9 +7,12 @@
 #define _INTEL_GUC_ADS_H_
 
 struct intel_guc;
+struct drm_printer;
 
 int intel_guc_ads_create(struct intel_guc *guc);
 void intel_guc_ads_destroy(struct intel_guc *guc);
+void intel_guc_ads_init_late(struct intel_guc *guc);
 void intel_guc_ads_reset(struct intel_guc *guc);
+void intel_guc_log_policy_info(struct intel_guc *guc, struct drm_printer *p);
 
 #endif
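
The dynamic ADS layout in intel_guc_ads.c above is computed by chaining offsets: the register save/restore list starts at the end of the static blob, the golden contexts start at the next page boundary after the regset, and the private data follows the golden contexts, again page aligned. A standalone sketch of that arithmetic with made-up sizes; in the driver the real values come from sizeof(struct __guc_ads_blob), guc_mmio_reg_state_query() and guc_prep_golden_context(guc, NULL):

#include <stdint.h>
#include <stdio.h>

#define GUC_PAGE_SIZE	4096u
#define GUC_PAGE_ALIGN(x) (((x) + GUC_PAGE_SIZE - 1) & ~(GUC_PAGE_SIZE - 1))

int main(void)
{
	uint32_t static_size = 5120;	/* ads + policies + system_info (made up) */
	uint32_t regset_size = 3072;	/* total guc_mmio_reg entries (made up) */
	uint32_t golden_size = GUC_PAGE_ALIGN(90 * 1024);	/* made up */

	uint32_t regset_off = static_size;
	uint32_t golden_off = GUC_PAGE_ALIGN(regset_off + regset_size);
	uint32_t private_off = GUC_PAGE_ALIGN(golden_off + golden_size);

	printf("regset @ 0x%x, golden contexts @ 0x%x, private data @ 0x%x\n",
	       regset_off, golden_off, private_off);
	return 0;
}
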
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 43409044528e..170409107b4b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -3,6 +3,11 @@
  * Copyright © 2016-2019 Intel Corporation
  */
 
+#include <linux/circ_buf.h>
+#include <linux/ktime.h>
+#include <linux/time64.h>
+#include <linux/timekeeping.h>
+
 #include "i915_drv.h"
 #include "intel_guc_ct.h"
 #include "gt/intel_gt.h"
@@ -58,11 +63,17 @@ static inline struct drm_device *ct_to_drm(struct intel_guc_ct *ct)
  *      +--------+-----------------------------------------------+------+
  *
  * Size of each `CT Buffer`_ must be multiple of 4K.
- * As we don't expect too many messages, for now use minimum sizes.
+ * We don't expect too many messages in flight at any time, unless we are
+ * using GuC submission. In that case each request requires a minimum of
+ * 2 dwords, which gives us a maximum of 256 queued requests. Hopefully
+ * this is enough space to avoid backpressure on the driver. We increase
+ * the size of the receive buffer (relative to the send) to ensure a G2H
+ * response CTB has a landing spot.
  */
 #define CTB_DESC_SIZE		ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K)
 #define CTB_H2G_BUFFER_SIZE	(SZ_4K)
-#define CTB_G2H_BUFFER_SIZE	(SZ_4K)
+#define CTB_G2H_BUFFER_SIZE	(4 * CTB_H2G_BUFFER_SIZE)
+#define G2H_ROOM_BUFFER_SIZE	(PAGE_SIZE)
 
 struct ct_request {
 	struct list_head link;
@@ -98,6 +109,7 @@ void intel_guc_ct_init_early(struct intel_guc_ct *ct)
 	INIT_LIST_HEAD(&ct->requests.incoming);
 	INIT_WORK(&ct->requests.worker, ct_incoming_request_worker_func);
 	tasklet_setup(&ct->receive_tasklet, ct_receive_tasklet_func);
+	init_waitqueue_head(&ct->wq);
 }
 
 static inline const char *guc_ct_buffer_type_to_str(u32 type)
@@ -119,19 +131,27 @@ static void guc_ct_buffer_desc_init(struct guc_ct_buffer_desc *desc)
 
 static void guc_ct_buffer_reset(struct intel_guc_ct_buffer *ctb)
 {
+	u32 space;
+
 	ctb->broken = false;
+	ctb->tail = 0;
+	ctb->head = 0;
+	space = CIRC_SPACE(ctb->tail, ctb->head, ctb->size) - ctb->resv_space;
+	atomic_set(&ctb->space, space);
+
 	guc_ct_buffer_desc_init(ctb->desc);
 }
 
 static void guc_ct_buffer_init(struct intel_guc_ct_buffer *ctb,
 			       struct guc_ct_buffer_desc *desc,
-			       u32 *cmds, u32 size_in_bytes)
+			       u32 *cmds, u32 size_in_bytes, u32 resv_space)
 {
 	GEM_BUG_ON(size_in_bytes % 4);
 
 	ctb->desc = desc;
 	ctb->cmds = cmds;
 	ctb->size = size_in_bytes / 4;
+	ctb->resv_space = resv_space / 4;
 
 	guc_ct_buffer_reset(ctb);
 }
@@ -161,6 +181,10 @@ static int ct_register_buffer(struct intel_guc_ct *ct, u32 type,
 {
 	int err;
 
+	err = i915_inject_probe_error(guc_to_gt(ct_to_guc(ct))->i915, -ENXIO);
+	if (unlikely(err))
+		return err;
+
 	err = guc_action_register_ct_buffer(ct_to_guc(ct), type,
 					    desc_addr, buff_addr, size);
 	if (unlikely(err))
@@ -208,10 +232,15 @@ int intel_guc_ct_init(struct intel_guc_ct *ct)
 	struct guc_ct_buffer_desc *desc;
 	u32 blob_size;
 	u32 cmds_size;
+	u32 resv_space;
 	void *blob;
 	u32 *cmds;
 	int err;
 
+	err = i915_inject_probe_error(guc_to_gt(guc)->i915, -ENXIO);
+	if (err)
+		return err;
+
 	GEM_BUG_ON(ct->vma);
 
 	blob_size = 2 * CTB_DESC_SIZE + CTB_H2G_BUFFER_SIZE + CTB_G2H_BUFFER_SIZE;
@@ -228,19 +257,23 @@ int intel_guc_ct_init(struct intel_guc_ct *ct)
 	desc = blob;
 	cmds = blob + 2 * CTB_DESC_SIZE;
 	cmds_size = CTB_H2G_BUFFER_SIZE;
-	CT_DEBUG(ct, "%s desc %#tx cmds %#tx size %u\n", "send",
-		 ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size);
+	resv_space = 0;
+	CT_DEBUG(ct, "%s desc %#tx cmds %#tx size %u/%u\n", "send",
+		 ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size,
+		 resv_space);
 
-	guc_ct_buffer_init(&ct->ctbs.send, desc, cmds, cmds_size);
+	guc_ct_buffer_init(&ct->ctbs.send, desc, cmds, cmds_size, resv_space);
 
 	/* store pointers to desc and cmds for recv ctb */
 	desc = blob + CTB_DESC_SIZE;
 	cmds = blob + 2 * CTB_DESC_SIZE + CTB_H2G_BUFFER_SIZE;
 	cmds_size = CTB_G2H_BUFFER_SIZE;
-	CT_DEBUG(ct, "%s desc %#tx cmds %#tx size %u\n", "recv",
-		 ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size);
+	resv_space = G2H_ROOM_BUFFER_SIZE;
+	CT_DEBUG(ct, "%s desc %#tx cmds %#tx size %u/%u\n", "recv",
+		 ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size,
+		 resv_space);
 
-	guc_ct_buffer_init(&ct->ctbs.recv, desc, cmds, cmds_size);
+	guc_ct_buffer_init(&ct->ctbs.recv, desc, cmds, cmds_size, resv_space);
 
 	return 0;
 }
@@ -309,6 +342,7 @@ int intel_guc_ct_enable(struct intel_guc_ct *ct)
 		goto err_deregister;
 
 	ct->enabled = true;
+	ct->stall_time = KTIME_MAX;
 
 	return 0;
 
@@ -368,44 +402,37 @@ static void write_barrier(struct intel_guc_ct *ct)
 static int ct_write(struct intel_guc_ct *ct,
 		    const u32 *action,
 		    u32 len /* in dwords */,
-		    u32 fence)
+		    u32 fence, u32 flags)
 {
 	struct intel_guc_ct_buffer *ctb = &ct->ctbs.send;
 	struct guc_ct_buffer_desc *desc = ctb->desc;
-	u32 head = desc->head;
-	u32 tail = desc->tail;
+	u32 tail = ctb->tail;
 	u32 size = ctb->size;
-	u32 used;
 	u32 header;
 	u32 hxg;
+	u32 type;
 	u32 *cmds = ctb->cmds;
 	unsigned int i;
 
-	if (unlikely(ctb->broken))
-		return -EPIPE;
-
 	if (unlikely(desc->status))
 		goto corrupted;
 
-	if (unlikely((tail | head) >= size)) {
-		CT_ERROR(ct, "Invalid offsets head=%u tail=%u (size=%u)\n",
-			 head, tail, size);
+	GEM_BUG_ON(tail > size);
+
+#ifdef CONFIG_DRM_I915_DEBUG_GUC
+	if (unlikely(tail != READ_ONCE(desc->tail))) {
+		CT_ERROR(ct, "Tail was modified %u != %u\n",
+			 desc->tail, tail);
+		desc->status |= GUC_CTB_STATUS_MISMATCH;
+		goto corrupted;
+	}
+	if (unlikely(READ_ONCE(desc->head) >= size)) {
+		CT_ERROR(ct, "Invalid head offset %u >= %u\n",
+			 desc->head, size);
 		desc->status |= GUC_CTB_STATUS_OVERFLOW;
 		goto corrupted;
 	}
-
-	/*
-	 * tail == head condition indicates empty. GuC FW does not support
-	 * using up the entire buffer to get tail == head meaning full.
-	 */
-	if (tail < head)
-		used = (size - head) + tail;
-	else
-		used = tail - head;
-
-	/* make sure there is a space including extra dw for the fence */
-	if (unlikely(used + len + 1 >= size))
-		return -ENOSPC;
+#endif
 
 	/*
 	 * dw0: CT header (including fence)
@@ -416,9 +443,11 @@ static int ct_write(struct intel_guc_ct *ct,
 		 FIELD_PREP(GUC_CTB_MSG_0_NUM_DWORDS, len) |
 		 FIELD_PREP(GUC_CTB_MSG_0_FENCE, fence);
 
-	hxg = FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_REQUEST) |
-	      FIELD_PREP(GUC_HXG_REQUEST_MSG_0_ACTION |
-			 GUC_HXG_REQUEST_MSG_0_DATA0, action[0]);
+	type = (flags & INTEL_GUC_CT_SEND_NB) ? GUC_HXG_TYPE_EVENT :
+		GUC_HXG_TYPE_REQUEST;
+	hxg = FIELD_PREP(GUC_HXG_MSG_0_TYPE, type) |
+		FIELD_PREP(GUC_HXG_EVENT_MSG_0_ACTION |
+			   GUC_HXG_EVENT_MSG_0_DATA0, action[0]);
 
 	CT_DEBUG(ct, "writing (tail %u) %*ph %*ph %*ph\n",
 		 tail, 4, &header, 4, &hxg, 4 * (len - 1), &action[1]);
@@ -441,6 +470,11 @@ static int ct_write(struct intel_guc_ct *ct,
 	 */
 	write_barrier(ct);
 
+	/* update local copies */
+	ctb->tail = tail;
+	GEM_BUG_ON(atomic_read(&ctb->space) < len + GUC_CTB_HDR_LEN);
+	atomic_sub(len + GUC_CTB_HDR_LEN, &ctb->space);
+
 	/* now update descriptor */
 	WRITE_ONCE(desc->tail, tail);
 
@@ -458,7 +492,7 @@ static int ct_write(struct intel_guc_ct *ct,
  * @req:	pointer to pending request
  * @status:	placeholder for status
  *
- * For each sent request, Guc shall send bac CT response message.
+ * For each sent request, GuC shall send back a CT response message.
  * Our message handler will update status of tracked request once
  * response message with given fence is received. Wait here and
  * check for valid response status value.
@@ -474,14 +508,18 @@ static int wait_for_ct_request_update(struct ct_request *req, u32 *status)
 	/*
 	 * Fast commands should complete in less than 10us, so sample quickly
 	 * up to that length of time, then switch to a slower sleep-wait loop.
-	 * No GuC command should ever take longer than 10ms.
+	 * No GuC command should ever take longer than 10ms, but many GuC
+	 * commands can be in flight at a time, so use a 1s timeout on the
+	 * slower sleep-wait loop.
 	 */
+#define GUC_CTB_RESPONSE_TIMEOUT_SHORT_MS 10
+#define GUC_CTB_RESPONSE_TIMEOUT_LONG_MS 1000
 #define done \
 	(FIELD_GET(GUC_HXG_MSG_0_ORIGIN, READ_ONCE(req->status)) == \
 	 GUC_HXG_ORIGIN_GUC)
-	err = wait_for_us(done, 10);
+	err = wait_for_us(done, GUC_CTB_RESPONSE_TIMEOUT_SHORT_MS);
 	if (err)
-		err = wait_for(done, 10);
+		err = wait_for(done, GUC_CTB_RESPONSE_TIMEOUT_LONG_MS);
 #undef done
 
 	if (unlikely(err))
@@ -491,6 +529,128 @@ static int wait_for_ct_request_update(struct ct_request *req, u32 *status)
 	return err;
 }
 
+#define GUC_CTB_TIMEOUT_MS	1500
+static inline bool ct_deadlocked(struct intel_guc_ct *ct)
+{
+	long timeout = GUC_CTB_TIMEOUT_MS;
+	bool ret = ktime_ms_delta(ktime_get(), ct->stall_time) > timeout;
+
+	if (unlikely(ret)) {
+		struct guc_ct_buffer_desc *send = ct->ctbs.send.desc;
+		struct guc_ct_buffer_desc *recv = ct->ctbs.recv.desc;
+
+		CT_ERROR(ct, "Communication stalled for %lld ms, desc status=%#x,%#x\n",
+			 ktime_ms_delta(ktime_get(), ct->stall_time),
+			 send->status, recv->status);
+		ct->ctbs.send.broken = true;
+	}
+
+	return ret;
+}
+
+static inline bool g2h_has_room(struct intel_guc_ct *ct, u32 g2h_len_dw)
+{
+	struct intel_guc_ct_buffer *ctb = &ct->ctbs.recv;
+
+	/*
+	 * We leave a certain amount of space in the G2H CTB buffer for
+	 * unexpected G2H CTBs (e.g. logging, engine hang, etc...)
+	 */
+	return !g2h_len_dw || atomic_read(&ctb->space) >= g2h_len_dw;
+}
+
+static inline void g2h_reserve_space(struct intel_guc_ct *ct, u32 g2h_len_dw)
+{
+	lockdep_assert_held(&ct->ctbs.send.lock);
+
+	GEM_BUG_ON(!g2h_has_room(ct, g2h_len_dw));
+
+	if (g2h_len_dw)
+		atomic_sub(g2h_len_dw, &ct->ctbs.recv.space);
+}
+
+static inline void g2h_release_space(struct intel_guc_ct *ct, u32 g2h_len_dw)
+{
+	atomic_add(g2h_len_dw, &ct->ctbs.recv.space);
+}
+
+static inline bool h2g_has_room(struct intel_guc_ct *ct, u32 len_dw)
+{
+	struct intel_guc_ct_buffer *ctb = &ct->ctbs.send;
+	struct guc_ct_buffer_desc *desc = ctb->desc;
+	u32 head;
+	u32 space;
+
+	if (atomic_read(&ctb->space) >= len_dw)
+		return true;
+
+	head = READ_ONCE(desc->head);
+	if (unlikely(head > ctb->size)) {
+		CT_ERROR(ct, "Invalid head offset %u >= %u\n",
+			 head, ctb->size);
+		desc->status |= GUC_CTB_STATUS_OVERFLOW;
+		ctb->broken = true;
+		return false;
+	}
+
+	space = CIRC_SPACE(ctb->tail, head, ctb->size);
+	atomic_set(&ctb->space, space);
+
+	return space >= len_dw;
+}
+
+static int has_room_nb(struct intel_guc_ct *ct, u32 h2g_dw, u32 g2h_dw)
+{
+	lockdep_assert_held(&ct->ctbs.send.lock);
+
+	if (unlikely(!h2g_has_room(ct, h2g_dw) || !g2h_has_room(ct, g2h_dw))) {
+		if (ct->stall_time == KTIME_MAX)
+			ct->stall_time = ktime_get();
+
+		if (unlikely(ct_deadlocked(ct)))
+			return -EPIPE;
+		else
+			return -EBUSY;
+	}
+
+	ct->stall_time = KTIME_MAX;
+	return 0;
+}
+
+#define G2H_LEN_DW(f) \
+	FIELD_GET(INTEL_GUC_CT_SEND_G2H_DW_MASK, f) ? \
+	FIELD_GET(INTEL_GUC_CT_SEND_G2H_DW_MASK, f) + GUC_CTB_HXG_MSG_MIN_LEN : 0
+static int ct_send_nb(struct intel_guc_ct *ct,
+		      const u32 *action,
+		      u32 len,
+		      u32 flags)
+{
+	struct intel_guc_ct_buffer *ctb = &ct->ctbs.send;
+	unsigned long spin_flags;
+	u32 g2h_len_dw = G2H_LEN_DW(flags);
+	u32 fence;
+	int ret;
+
+	spin_lock_irqsave(&ctb->lock, spin_flags);
+
+	ret = has_room_nb(ct, len + GUC_CTB_HDR_LEN, g2h_len_dw);
+	if (unlikely(ret))
+		goto out;
+
+	fence = ct_get_next_fence(ct);
+	ret = ct_write(ct, action, len, fence, flags);
+	if (unlikely(ret))
+		goto out;
+
+	g2h_reserve_space(ct, g2h_len_dw);
+	intel_guc_notify(ct_to_guc(ct));
+
+out:
+	spin_unlock_irqrestore(&ctb->lock, spin_flags);
+
+	return ret;
+}
+
 static int ct_send(struct intel_guc_ct *ct,
 		   const u32 *action,
 		   u32 len,
@@ -498,8 +658,10 @@ static int ct_send(struct intel_guc_ct *ct,
 		   u32 response_buf_size,
 		   u32 *status)
 {
+	struct intel_guc_ct_buffer *ctb = &ct->ctbs.send;
 	struct ct_request request;
 	unsigned long flags;
+	unsigned int sleep_period_ms = 1;
 	u32 fence;
 	int err;
 
@@ -507,8 +669,33 @@ static int ct_send(struct intel_guc_ct *ct,
 	GEM_BUG_ON(!len);
 	GEM_BUG_ON(len & ~GUC_CT_MSG_LEN_MASK);
 	GEM_BUG_ON(!response_buf && response_buf_size);
+	might_sleep();
+
+	/*
+	 * We use a lazy spin wait loop here as we believe that if the CT
+	 * buffers are sized correctly the flow control condition should be
+	 * rare. We reserve the maximum size in the G2H credits as we don't
+	 * know how big the response is going to be.
+	 */
+retry:
+	spin_lock_irqsave(&ctb->lock, flags);
+	if (unlikely(!h2g_has_room(ct, len + GUC_CTB_HDR_LEN) ||
+		     !g2h_has_room(ct, GUC_CTB_HXG_MSG_MAX_LEN))) {
+		if (ct->stall_time == KTIME_MAX)
+			ct->stall_time = ktime_get();
+		spin_unlock_irqrestore(&ctb->lock, flags);
+
+		if (unlikely(ct_deadlocked(ct)))
+			return -EPIPE;
+
+		if (msleep_interruptible(sleep_period_ms))
+			return -EINTR;
+		sleep_period_ms = sleep_period_ms << 1;
+
+		goto retry;
+	}
 
-	spin_lock_irqsave(&ct->ctbs.send.lock, flags);
+	ct->stall_time = KTIME_MAX;
 
 	fence = ct_get_next_fence(ct);
 	request.fence = fence;
@@ -520,9 +707,10 @@ static int ct_send(struct intel_guc_ct *ct,
 	list_add_tail(&request.link, &ct->requests.pending);
 	spin_unlock(&ct->requests.lock);
 
-	err = ct_write(ct, action, len, fence);
+	err = ct_write(ct, action, len, fence, 0);
+	g2h_reserve_space(ct, GUC_CTB_HXG_MSG_MAX_LEN);
 
-	spin_unlock_irqrestore(&ct->ctbs.send.lock, flags);
+	spin_unlock_irqrestore(&ctb->lock, flags);
 
 	if (unlikely(err))
 		goto unlink;
@@ -530,6 +718,7 @@ static int ct_send(struct intel_guc_ct *ct,
 	intel_guc_notify(ct_to_guc(ct));
 
 	err = wait_for_ct_request_update(&request, status);
+	g2h_release_space(ct, GUC_CTB_HXG_MSG_MAX_LEN);
 	if (unlikely(err))
 		goto unlink;
 
@@ -562,16 +751,25 @@ static int ct_send(struct intel_guc_ct *ct,
  * Command Transport (CT) buffer based GuC send function.
  */
 int intel_guc_ct_send(struct intel_guc_ct *ct, const u32 *action, u32 len,
-		      u32 *response_buf, u32 response_buf_size)
+		      u32 *response_buf, u32 response_buf_size, u32 flags)
 {
 	u32 status = ~0; /* undefined */
 	int ret;
 
 	if (unlikely(!ct->enabled)) {
-		WARN(1, "Unexpected send: action=%#x\n", *action);
+		struct intel_guc *guc = ct_to_guc(ct);
+		struct intel_uc *uc = container_of(guc, struct intel_uc, guc);
+
+		WARN(!uc->reset_in_progress, "Unexpected send: action=%#x\n", *action);
 		return -ENODEV;
 	}
 
+	if (unlikely(ct->ctbs.send.broken))
+		return -EPIPE;
+
+	if (flags & INTEL_GUC_CT_SEND_NB)
+		return ct_send_nb(ct, action, len, flags);
+
 	ret = ct_send(ct, action, len, response_buf, response_buf_size, &status);
 	if (unlikely(ret < 0)) {
 		CT_ERROR(ct, "Sending action %#x failed (err=%d status=%#X)\n",
@@ -607,8 +805,8 @@ static int ct_read(struct intel_guc_ct *ct, struct ct_incoming_msg **msg)
 {
 	struct intel_guc_ct_buffer *ctb = &ct->ctbs.recv;
 	struct guc_ct_buffer_desc *desc = ctb->desc;
-	u32 head = desc->head;
-	u32 tail = desc->tail;
+	u32 head = ctb->head;
+	u32 tail = READ_ONCE(desc->tail);
 	u32 size = ctb->size;
 	u32 *cmds = ctb->cmds;
 	s32 available;
@@ -622,9 +820,19 @@ static int ct_read(struct intel_guc_ct *ct, struct ct_incoming_msg **msg)
 	if (unlikely(desc->status))
 		goto corrupted;
 
-	if (unlikely((tail | head) >= size)) {
-		CT_ERROR(ct, "Invalid offsets head=%u tail=%u (size=%u)\n",
-			 head, tail, size);
+	GEM_BUG_ON(head > size);
+
+#ifdef CONFIG_DRM_I915_DEBUG_GUC
+	if (unlikely(head != READ_ONCE(desc->head))) {
+		CT_ERROR(ct, "Head was modified %u != %u\n",
+			 desc->head, head);
+		desc->status |= GUC_CTB_STATUS_MISMATCH;
+		goto corrupted;
+	}
+#endif
+	if (unlikely(tail >= size)) {
+		CT_ERROR(ct, "Invalid tail offset %u >= %u\n",
+			 tail, size);
 		desc->status |= GUC_CTB_STATUS_OVERFLOW;
 		goto corrupted;
 	}
@@ -639,7 +847,7 @@ static int ct_read(struct intel_guc_ct *ct, struct ct_incoming_msg **msg)
 	/* beware of buffer wrap case */
 	if (unlikely(available < 0))
 		available += size;
-	CT_DEBUG(ct, "available %d (%u:%u)\n", available, head, tail);
+	CT_DEBUG(ct, "available %d (%u:%u:%u)\n", available, head, tail, size);
 	GEM_BUG_ON(available < 0);
 
 	header = cmds[head];
@@ -677,6 +885,9 @@ static int ct_read(struct intel_guc_ct *ct, struct ct_incoming_msg **msg)
 	}
 	CT_DEBUG(ct, "received %*ph\n", 4 * len, (*msg)->msg);
 
+	/* update local copies */
+	ctb->head = head;
+
 	/* now update descriptor */
 	WRITE_ONCE(desc->head, head);
 
@@ -728,12 +939,16 @@ static int ct_handle_response(struct intel_guc_ct *ct, struct ct_incoming_msg *r
 		found = true;
 		break;
 	}
-	spin_unlock_irqrestore(&ct->requests.lock, flags);
-
 	if (!found) {
 		CT_ERROR(ct, "Unsolicited response (fence %u)\n", fence);
-		return -ENOKEY;
+		CT_ERROR(ct, "Could not find fence=%u, last_fence=%u\n", fence,
+			 ct->requests.last_fence);
+		list_for_each_entry(req, &ct->requests.pending, link)
+			CT_ERROR(ct, "request %u awaits response\n",
+				 req->fence);
+		err = -ENOKEY;
 	}
+	spin_unlock_irqrestore(&ct->requests.lock, flags);
 
 	if (unlikely(err))
 		return err;
@@ -762,6 +977,19 @@ static int ct_process_request(struct intel_guc_ct *ct, struct ct_incoming_msg *r
 	case INTEL_GUC_ACTION_DEFAULT:
 		ret = intel_guc_to_host_process_recv_msg(guc, payload, len);
 		break;
+	case INTEL_GUC_ACTION_DEREGISTER_CONTEXT_DONE:
+		ret = intel_guc_deregister_done_process_msg(guc, payload,
+							    len);
+		break;
+	case INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE:
+		ret = intel_guc_sched_done_process_msg(guc, payload, len);
+		break;
+	case INTEL_GUC_ACTION_CONTEXT_RESET_NOTIFICATION:
+		ret = intel_guc_context_reset_process_msg(guc, payload, len);
+		break;
+	case INTEL_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION:
+		ret = intel_guc_engine_failure_process_msg(guc, payload, len);
+		break;
 	default:
 		ret = -EOPNOTSUPP;
 		break;
@@ -819,10 +1047,22 @@ static void ct_incoming_request_worker_func(struct work_struct *w)
 static int ct_handle_event(struct intel_guc_ct *ct, struct ct_incoming_msg *request)
 {
 	const u32 *hxg = &request->msg[GUC_CTB_MSG_MIN_LEN];
+	u32 action = FIELD_GET(GUC_HXG_EVENT_MSG_0_ACTION, hxg[0]);
 	unsigned long flags;
 
 	GEM_BUG_ON(FIELD_GET(GUC_HXG_MSG_0_TYPE, hxg[0]) != GUC_HXG_TYPE_EVENT);
 
+	/*
+	 * Adjusting the space must be done in IRQ context or a deadlock can
+	 * occur, as the CTB processing in the workqueue below can itself send
+	 * CTBs, which would create a circular dependency if the space were
+	 * returned there.
+	 */
+	switch (action) {
+	case INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE:
+	case INTEL_GUC_ACTION_DEREGISTER_CONTEXT_DONE:
+		g2h_release_space(ct, request->size);
+	}
+
 	spin_lock_irqsave(&ct->requests.lock, flags);
 	list_add_tail(&request->link, &ct->requests.incoming);
 	spin_unlock_irqrestore(&ct->requests.lock, flags);
@@ -940,3 +1180,25 @@ void intel_guc_ct_event_handler(struct intel_guc_ct *ct)
 
 	ct_try_receive_message(ct);
 }
+
+void intel_guc_log_ct_info(struct intel_guc_ct *ct,
+			   struct drm_printer *p)
+{
+	if (!ct->enabled) {
+		drm_puts(p, "CT disabled\n");
+		return;
+	}
+
+	drm_printf(p, "H2G Space: %u\n",
+		   atomic_read(&ct->ctbs.send.space) * 4);
+	drm_printf(p, "Head: %u\n",
+		   ct->ctbs.send.desc->head);
+	drm_printf(p, "Tail: %u\n",
+		   ct->ctbs.send.desc->tail);
+	drm_printf(p, "G2H Space: %u\n",
+		   atomic_read(&ct->ctbs.recv.space) * 4);
+	drm_printf(p, "Head: %u\n",
+		   ct->ctbs.recv.desc->head);
+	drm_printf(p, "Tail: %u\n",
+		   ct->ctbs.recv.desc->tail);
+}
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
index 1ae2dde6db93..82f0249a11df 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
@@ -9,11 +9,14 @@
 #include <linux/interrupt.h>
 #include <linux/spinlock.h>
 #include <linux/workqueue.h>
+#include <linux/ktime.h>
+#include <linux/wait.h>
 
 #include "intel_guc_fwif.h"
 
 struct i915_vma;
 struct intel_guc;
+struct drm_printer;
 
 /**
  * DOC: Command Transport (CT).
@@ -32,6 +35,10 @@ struct intel_guc;
  * @desc: pointer to the buffer descriptor
  * @cmds: pointer to the commands buffer
  * @size: size of the commands buffer in dwords
+ * @resv_space: reserved space in buffer in dwords
+ * @head: local shadow copy of head in dwords
+ * @tail: local shadow copy of tail in dwords
+ * @space: local shadow copy of space in dwords
  * @broken: flag to indicate if descriptor data is broken
  */
 struct intel_guc_ct_buffer {
@@ -39,10 +46,13 @@ struct intel_guc_ct_buffer {
 	struct guc_ct_buffer_desc *desc;
 	u32 *cmds;
 	u32 size;
+	u32 resv_space;
+	u32 tail;
+	u32 head;
+	atomic_t space;
 	bool broken;
 };
 
-
 /** Top-level structure for Command Transport related data
  *
  * Includes a pair of CT buffers for bi-directional communication and tracking
@@ -60,6 +70,9 @@ struct intel_guc_ct {
 
 	struct tasklet_struct receive_tasklet;
 
+	/** @wq: wait queue for the G2H channel */
+	wait_queue_head_t wq;
+
 	struct {
 		u16 last_fence; /* last fence used to send request */
 
@@ -69,6 +82,9 @@ struct intel_guc_ct {
 		struct list_head incoming; /* incoming requests */
 		struct work_struct worker; /* handler for incoming requests */
 	} requests;
+
+	/** @stall_time: time at which a CTB submission first stalled */
+	ktime_t stall_time;
 };
 
 void intel_guc_ct_init_early(struct intel_guc_ct *ct);
@@ -87,8 +103,16 @@ static inline bool intel_guc_ct_enabled(struct intel_guc_ct *ct)
 	return ct->enabled;
 }
 
+#define INTEL_GUC_CT_SEND_NB		BIT(31)
+#define INTEL_GUC_CT_SEND_G2H_DW_SHIFT	0
+#define INTEL_GUC_CT_SEND_G2H_DW_MASK	(0xff << INTEL_GUC_CT_SEND_G2H_DW_SHIFT)
+#define MAKE_SEND_FLAGS(len) \
+	({GEM_BUG_ON(!FIELD_FIT(INTEL_GUC_CT_SEND_G2H_DW_MASK, len)); \
+	(FIELD_PREP(INTEL_GUC_CT_SEND_G2H_DW_MASK, len) | INTEL_GUC_CT_SEND_NB);})
 int intel_guc_ct_send(struct intel_guc_ct *ct, const u32 *action, u32 len,
-		      u32 *response_buf, u32 response_buf_size);
+		      u32 *response_buf, u32 response_buf_size, u32 flags);
 void intel_guc_ct_event_handler(struct intel_guc_ct *ct);
 
+void intel_guc_log_ct_info(struct intel_guc_ct *ct, struct drm_printer *p);
+
 #endif /* _INTEL_GUC_CT_H_ */
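For clarity, here is a minimal sketch of how a caller might drive the new non-blocking send path declared above; in the series this is wrapped by intel_guc_send_nb(), so the helper name below is made up purely for illustration:

static int example_sched_enable_nb(struct intel_guc_ct *ct, u32 guc_id)
{
	u32 action[] = {
		INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET,
		guc_id,
		GUC_CONTEXT_ENABLE,
	};

	/*
	 * INTEL_GUC_CT_SEND_NB makes the CT layer emit an event-type HXG and
	 * return without waiting; the G2H dword count encoded in the flags
	 * reserves credits for the SCHED_CONTEXT_MODE_DONE reply that arrives
	 * later via ct_handle_event().
	 */
	return intel_guc_ct_send(ct, action, ARRAY_SIZE(action), NULL, 0,
				 MAKE_SEND_FLAGS(G2H_LEN_DW_SCHED_CONTEXT_MODE_SET));
}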
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
index fe7cb7b29a1e..9a03ff56e654 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
@@ -9,6 +9,9 @@
 #include "intel_guc.h"
 #include "intel_guc_debugfs.h"
 #include "intel_guc_log_debugfs.h"
+#include "gt/uc/intel_guc_ct.h"
+#include "gt/uc/intel_guc_ads.h"
+#include "gt/uc/intel_guc_submission.h"
 
 static int guc_info_show(struct seq_file *m, void *data)
 {
@@ -22,16 +25,36 @@ static int guc_info_show(struct seq_file *m, void *data)
 	drm_puts(&p, "\n");
 	intel_guc_log_info(&guc->log, &p);
 
-	/* Add more as required ... */
+	if (!intel_guc_submission_is_used(guc))
+		return 0;
+
+	intel_guc_log_ct_info(&guc->ct, &p);
+	intel_guc_log_submission_info(guc, &p);
+	intel_guc_log_policy_info(guc, &p);
 
 	return 0;
 }
 DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_info);
 
+static int guc_registered_contexts_show(struct seq_file *m, void *data)
+{
+	struct intel_guc *guc = m->private;
+	struct drm_printer p = drm_seq_file_printer(m);
+
+	if (!intel_guc_submission_is_used(guc))
+		return -ENODEV;
+
+	intel_guc_log_context_info(guc, &p);
+
+	return 0;
+}
+DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_registered_contexts);
+
 void intel_guc_debugfs_register(struct intel_guc *guc, struct dentry *root)
 {
 	static const struct debugfs_gt_file files[] = {
 		{ "guc_info", &guc_info_fops, NULL },
+		{ "guc_registered_contexts", &guc_registered_contexts_fops, NULL },
 	};
 
 	if (!intel_guc_is_supported(guc))
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
index 617ec601648d..94bb1ca6f889 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
@@ -17,14 +17,21 @@
 #include "abi/guc_communication_ctb_abi.h"
 #include "abi/guc_messages_abi.h"
 
+/* Payload length only i.e. don't include G2H header length */
+#define G2H_LEN_DW_SCHED_CONTEXT_MODE_SET	2
+#define G2H_LEN_DW_DEREGISTER_CONTEXT		1
+
+#define GUC_CONTEXT_DISABLE		0
+#define GUC_CONTEXT_ENABLE		1
+
 #define GUC_CLIENT_PRIORITY_KMD_HIGH	0
 #define GUC_CLIENT_PRIORITY_HIGH	1
 #define GUC_CLIENT_PRIORITY_KMD_NORMAL	2
 #define GUC_CLIENT_PRIORITY_NORMAL	3
 #define GUC_CLIENT_PRIORITY_NUM		4
 
-#define GUC_MAX_STAGE_DESCRIPTORS	1024
-#define	GUC_INVALID_STAGE_ID		GUC_MAX_STAGE_DESCRIPTORS
+#define GUC_MAX_LRC_DESCRIPTORS		65535
+#define	GUC_INVALID_LRC_ID		GUC_MAX_LRC_DESCRIPTORS
 
 #define GUC_RENDER_ENGINE		0
 #define GUC_VIDEO_ENGINE		1
@@ -175,66 +182,39 @@ struct guc_process_desc {
 	u32 reserved[30];
 } __packed;
 
-/* engine id and context id is packed into guc_execlist_context.context_id*/
-#define GUC_ELC_CTXID_OFFSET		0
-#define GUC_ELC_ENGINE_OFFSET		29
+#define CONTEXT_REGISTRATION_FLAG_KMD	BIT(0)
 
-/* The execlist context including software and HW information */
-struct guc_execlist_context {
-	u32 context_desc;
-	u32 context_id;
-	u32 ring_status;
-	u32 ring_lrca;
-	u32 ring_begin;
-	u32 ring_end;
-	u32 ring_next_free_location;
-	u32 ring_current_tail_pointer_value;
-	u8 engine_state_submit_value;
-	u8 engine_state_wait_value;
-	u16 pagefault_count;
-	u16 engine_submit_queue_count;
-} __packed;
+#define CONTEXT_POLICY_DEFAULT_EXECUTION_QUANTUM_US 1000000
+#define CONTEXT_POLICY_DEFAULT_PREEMPTION_TIME_US 500000
+
+/* Preempt to idle on quantum expiry */
+#define CONTEXT_POLICY_FLAG_PREEMPT_TO_IDLE	BIT(0)
 
 /*
- * This structure describes a stage set arranged for a particular communication
- * between uKernel (GuC) and Driver (KMD). Technically, this is known as a
- * "GuC Context descriptor" in the specs, but we use the term "stage descriptor"
- * to avoid confusion with all the other things already named "context" in the
- * driver. A static pool of these descriptors are stored inside a GEM object
- * (stage_desc_pool) which is held for the entire lifetime of our interaction
- * with the GuC, being allocated before the GuC is loaded with its firmware.
+ * GuC Context registration descriptor.
+ * FIXME: This is only required to exist during context registration.
+ * The current 1:1 between guc_lrc_desc and LRCs for the lifetime of the LRC
+ * is not required.
  */
-struct guc_stage_desc {
-	u32 sched_common_area;
-	u32 stage_id;
-	u32 pas_id;
-	u8 engines_used;
-	u64 db_trigger_cpu;
-	u32 db_trigger_uk;
-	u64 db_trigger_phy;
-	u16 db_id;
-
-	struct guc_execlist_context lrc[GUC_MAX_ENGINES_NUM];
-
-	u8 attribute;
-
+struct guc_lrc_desc {
+	u32 hw_context_desc;
+	u32 slpm_perf_mode_hint;	/* SLPC v1 only */
+	u32 slpm_freq_hint;
+	u32 engine_submit_mask;		/* In logical space */
+	u8 engine_class;
+	u8 reserved0[3];
 	u32 priority;
-
-	u32 wq_sampled_tail_offset;
-	u32 wq_total_submit_enqueues;
-
 	u32 process_desc;
 	u32 wq_addr;
 	u32 wq_size;
-
-	u32 engine_presence;
-
-	u8 engine_suspended;
-
-	u8 reserved0[3];
-	u64 reserved1[1];
-
-	u64 desc_private;
+	u32 context_flags;		/* CONTEXT_REGISTRATION_* */
+	/* Time for one workload to execute (in microseconds) */
+	u32 execution_quantum;
+	/* Time to wait for a preemption request to complete before issuing a
+	 * reset (in microseconds). */
+	u32 preemption_timeout;
+	u32 policy_flags;		/* CONTEXT_POLICY_* */
+	u32 reserved1[19];
 } __packed;
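To illustrate how the fields above fit together, here is a sketch of filling in a registration descriptor for a kernel-owned context. The helper name and the particular values are assumptions for this example; the real setup is done by the submission code (guc_lrc_desc_pin()) later in this patch:

static void example_init_lrc_desc(struct guc_lrc_desc *desc, u32 ctx_desc,
				  u8 engine_class, u32 logical_mask)
{
	memset(desc, 0, sizeof(*desc));

	desc->hw_context_desc = ctx_desc;
	desc->engine_class = engine_class;
	desc->engine_submit_mask = logical_mask;	/* logical space */
	desc->priority = GUC_CLIENT_PRIORITY_KMD_NORMAL;
	desc->context_flags = CONTEXT_REGISTRATION_FLAG_KMD;
	desc->execution_quantum = CONTEXT_POLICY_DEFAULT_EXECUTION_QUANTUM_US;
	desc->preemption_timeout = CONTEXT_POLICY_DEFAULT_PREEMPTION_TIME_US;
}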
 
 #define GUC_POWER_UNSPECIFIED	0
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index e9c237b18692..9c102bf0c8e3 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -10,10 +10,13 @@
 #include "gt/intel_breadcrumbs.h"
 #include "gt/intel_context.h"
 #include "gt/intel_engine_pm.h"
+#include "gt/intel_engine_heartbeat.h"
 #include "gt/intel_gt.h"
 #include "gt/intel_gt_irq.h"
 #include "gt/intel_gt_pm.h"
+#include "gt/intel_gt_requests.h"
 #include "gt/intel_lrc.h"
+#include "gt/intel_lrc_reg.h"
 #include "gt/intel_mocs.h"
 #include "gt/intel_ring.h"
 
@@ -58,246 +61,681 @@
  *
  */
 
+/* GuC Virtual Engine */
+struct guc_virtual_engine {
+	struct intel_engine_cs base;
+	struct intel_context context;
+};
+
+static struct intel_context *
+guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count);
+
 #define GUC_REQUEST_SIZE 64 /* bytes */
 
-static inline struct i915_priolist *to_priolist(struct rb_node *rb)
+/*
+ * Below is a set of functions which control the GuC scheduling state which do
+ * not require a lock, as all state transitions are mutually exclusive; i.e. it
+ * is not possible for the context pinning code and submission, for the same
+ * context, to be executing simultaneously. We still need an atomic as it is
+ * possible for some of the bits to change at the same time though.
+ */
+#define SCHED_STATE_NO_LOCK_ENABLED			BIT(0)
+#define SCHED_STATE_NO_LOCK_PENDING_ENABLE		BIT(1)
+#define SCHED_STATE_NO_LOCK_BLOCKED_SHIFT		2
+#define SCHED_STATE_NO_LOCK_BLOCKED \
+	BIT(SCHED_STATE_NO_LOCK_BLOCKED_SHIFT)
+#define SCHED_STATE_NO_LOCK_BLOCKED_MASK \
+	(0xffff << SCHED_STATE_NO_LOCK_BLOCKED_SHIFT)
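For readability, the packed layout of ce->guc_sched_state_no_lock implied by the defines above (descriptive sketch only):

/*
 *   bit  0      ENABLED        - scheduling enabled in GuC for this context
 *   bit  1      PENDING_ENABLE - enable H2G sent, G2H ack still outstanding
 *   bits 2..17  BLOCKED        - counter, adjusted in units of
 *                                SCHED_STATE_NO_LOCK_BLOCKED
 */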
+static inline bool context_enabled(struct intel_context *ce)
 {
-	return rb_entry(rb, struct i915_priolist, node);
+	return (atomic_read(&ce->guc_sched_state_no_lock) &
+		SCHED_STATE_NO_LOCK_ENABLED);
+}
+
+static inline void set_context_enabled(struct intel_context *ce)
+{
+	atomic_or(SCHED_STATE_NO_LOCK_ENABLED, &ce->guc_sched_state_no_lock);
+}
+
+static inline void clr_context_enabled(struct intel_context *ce)
+{
+	atomic_and((u32)~SCHED_STATE_NO_LOCK_ENABLED,
+		   &ce->guc_sched_state_no_lock);
+}
+
+static inline bool context_pending_enable(struct intel_context *ce)
+{
+	return (atomic_read(&ce->guc_sched_state_no_lock) &
+		SCHED_STATE_NO_LOCK_PENDING_ENABLE);
 }
 
-static struct guc_stage_desc *__get_stage_desc(struct intel_guc *guc, u32 id)
+static inline void set_context_pending_enable(struct intel_context *ce)
 {
-	struct guc_stage_desc *base = guc->stage_desc_pool_vaddr;
+	atomic_or(SCHED_STATE_NO_LOCK_PENDING_ENABLE,
+		  &ce->guc_sched_state_no_lock);
+}
 
-	return &base[id];
+static inline void clr_context_pending_enable(struct intel_context *ce)
+{
+	atomic_and((u32)~SCHED_STATE_NO_LOCK_PENDING_ENABLE,
+		   &ce->guc_sched_state_no_lock);
 }
 
-static int guc_stage_desc_pool_create(struct intel_guc *guc)
+static inline u32 context_blocked(struct intel_context *ce)
 {
-	u32 size = PAGE_ALIGN(sizeof(struct guc_stage_desc) *
-			      GUC_MAX_STAGE_DESCRIPTORS);
+	return (atomic_read(&ce->guc_sched_state_no_lock) &
+		SCHED_STATE_NO_LOCK_BLOCKED_MASK) >>
+		SCHED_STATE_NO_LOCK_BLOCKED_SHIFT;
+}
 
-	return intel_guc_allocate_and_map_vma(guc, size, &guc->stage_desc_pool,
-					      &guc->stage_desc_pool_vaddr);
+static inline void incr_context_blocked(struct intel_context *ce)
+{
+	lockdep_assert_held(&ce->engine->sched_engine->lock);
+	atomic_add(SCHED_STATE_NO_LOCK_BLOCKED,
+		   &ce->guc_sched_state_no_lock);
 }
 
-static void guc_stage_desc_pool_destroy(struct intel_guc *guc)
+static inline void decr_context_blocked(struct intel_context *ce)
 {
-	i915_vma_unpin_and_release(&guc->stage_desc_pool, I915_VMA_RELEASE_MAP);
+	lockdep_assert_held(&ce->engine->sched_engine->lock);
+	atomic_sub(SCHED_STATE_NO_LOCK_BLOCKED,
+		   &ce->guc_sched_state_no_lock);
 }
 
 /*
- * Initialise/clear the stage descriptor shared with the GuC firmware.
- *
- * This descriptor tells the GuC where (in GGTT space) to find the important
- * data structures related to work submission (process descriptor, write queue,
- * etc).
+ * Below is a set of functions which control the GuC scheduling state which
+ * require a lock, aside from the special case where the functions are called
+ * from guc_lrc_desc_pin(). In that case it isn't possible for any other code
+ * path to be executing on the context.
  */
-static void guc_stage_desc_init(struct intel_guc *guc)
+#define SCHED_STATE_WAIT_FOR_DEREGISTER_TO_REGISTER	BIT(0)
+#define SCHED_STATE_DESTROYED				BIT(1)
+#define SCHED_STATE_PENDING_DISABLE			BIT(2)
+#define SCHED_STATE_BANNED				BIT(3)
+static inline void init_sched_state(struct intel_context *ce)
+{
+	/* Only should be called from guc_lrc_desc_pin() */
+	atomic_set(&ce->guc_sched_state_no_lock, 0);
+	ce->guc_state.sched_state = 0;
+}
+
+static inline bool
+context_wait_for_deregister_to_register(struct intel_context *ce)
 {
-	struct guc_stage_desc *desc;
+	return (ce->guc_state.sched_state &
+		SCHED_STATE_WAIT_FOR_DEREGISTER_TO_REGISTER);
+}
 
-	/* we only use 1 stage desc, so hardcode it to 0 */
-	desc = __get_stage_desc(guc, 0);
-	memset(desc, 0, sizeof(*desc));
+static inline void
+set_context_wait_for_deregister_to_register(struct intel_context *ce)
+{
+	/* Only should be called from guc_lrc_desc_pin() without lock */
+	ce->guc_state.sched_state |=
+		SCHED_STATE_WAIT_FOR_DEREGISTER_TO_REGISTER;
+}
 
-	desc->attribute = GUC_STAGE_DESC_ATTR_ACTIVE |
-			  GUC_STAGE_DESC_ATTR_KERNEL;
+static inline void
+clr_context_wait_for_deregister_to_register(struct intel_context *ce)
+{
+	lockdep_assert_held(&ce->guc_state.lock);
+	ce->guc_state.sched_state =
+		(ce->guc_state.sched_state &
+		 ~SCHED_STATE_WAIT_FOR_DEREGISTER_TO_REGISTER);
+}
 
-	desc->stage_id = 0;
-	desc->priority = GUC_CLIENT_PRIORITY_KMD_NORMAL;
+static inline bool
+context_destroyed(struct intel_context *ce)
+{
+	return (ce->guc_state.sched_state & SCHED_STATE_DESTROYED);
+}
 
-	desc->wq_size = GUC_WQ_SIZE;
+static inline void
+set_context_destroyed(struct intel_context *ce)
+{
+	lockdep_assert_held(&ce->guc_state.lock);
+	ce->guc_state.sched_state |= SCHED_STATE_DESTROYED;
 }
 
-static void guc_stage_desc_fini(struct intel_guc *guc)
+static inline bool context_pending_disable(struct intel_context *ce)
 {
-	struct guc_stage_desc *desc;
+	return (ce->guc_state.sched_state & SCHED_STATE_PENDING_DISABLE);
+}
 
-	desc = __get_stage_desc(guc, 0);
-	memset(desc, 0, sizeof(*desc));
+static inline void set_context_pending_disable(struct intel_context *ce)
+{
+	lockdep_assert_held(&ce->guc_state.lock);
+	ce->guc_state.sched_state |= SCHED_STATE_PENDING_DISABLE;
 }
 
-static void guc_add_request(struct intel_guc *guc, struct i915_request *rq)
+static inline void clr_context_pending_disable(struct intel_context *ce)
 {
-	/* Leaving stub as this function will be used in future patches */
+	lockdep_assert_held(&ce->guc_state.lock);
+	ce->guc_state.sched_state =
+		(ce->guc_state.sched_state & ~SCHED_STATE_PENDING_DISABLE);
 }
 
-/*
- * When we're doing submissions using regular execlists backend, writing to
- * ELSP from CPU side is enough to make sure that writes to ringbuffer pages
- * pinned in mappable aperture portion of GGTT are visible to command streamer.
- * Writes done by GuC on our behalf are not guaranteeing such ordering,
- * therefore, to ensure the flush, we're issuing a POSTING READ.
- */
-static void flush_ggtt_writes(struct i915_vma *vma)
+static inline bool context_banned(struct intel_context *ce)
 {
-	if (i915_vma_is_map_and_fenceable(vma))
-		intel_uncore_posting_read_fw(vma->vm->gt->uncore,
-					     GUC_STATUS);
+	return (ce->guc_state.sched_state & SCHED_STATE_BANNED);
 }
 
-static void guc_submit(struct intel_engine_cs *engine,
-		       struct i915_request **out,
-		       struct i915_request **end)
+static inline void set_context_banned(struct intel_context *ce)
 {
-	struct intel_guc *guc = &engine->gt->uc.guc;
+	lockdep_assert_held(&ce->guc_state.lock);
+	ce->guc_state.sched_state |= SCHED_STATE_BANNED;
+}
 
-	do {
-		struct i915_request *rq = *out++;
+static inline void clr_context_banned(struct intel_context *ce)
+{
+	lockdep_assert_held(&ce->guc_state.lock);
+	ce->guc_state.sched_state &= ~SCHED_STATE_BANNED;
+}
 
-		flush_ggtt_writes(rq->ring->vma);
-		guc_add_request(guc, rq);
-	} while (out != end);
+static inline bool context_guc_id_invalid(struct intel_context *ce)
+{
+	return (ce->guc_id == GUC_INVALID_LRC_ID);
 }
 
-static inline int rq_prio(const struct i915_request *rq)
+static inline void set_context_guc_id_invalid(struct intel_context *ce)
 {
-	return rq->sched.attr.priority;
+	ce->guc_id = GUC_INVALID_LRC_ID;
+}
+
+static inline struct intel_guc *ce_to_guc(struct intel_context *ce)
+{
+	return &ce->engine->gt->uc.guc;
+}
+
+static inline struct i915_priolist *to_priolist(struct rb_node *rb)
+{
+	return rb_entry(rb, struct i915_priolist, node);
+}
+
+static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc *guc, u32 index)
+{
+	struct guc_lrc_desc *base = guc->lrc_desc_pool_vaddr;
+
+	GEM_BUG_ON(index >= GUC_MAX_LRC_DESCRIPTORS);
+
+	return &base[index];
+}
+
+static inline struct intel_context *__get_context(struct intel_guc *guc, u32 id)
+{
+	struct intel_context *ce = xa_load(&guc->context_lookup, id);
+
+	GEM_BUG_ON(id >= GUC_MAX_LRC_DESCRIPTORS);
+
+	return ce;
+}
+
+static int guc_lrc_desc_pool_create(struct intel_guc *guc)
+{
+	u32 size;
+	int ret;
+
+	size = PAGE_ALIGN(sizeof(struct guc_lrc_desc) *
+			  GUC_MAX_LRC_DESCRIPTORS);
+	ret = intel_guc_allocate_and_map_vma(guc, size, &guc->lrc_desc_pool,
+					     (void **)&guc->lrc_desc_pool_vaddr);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static void guc_lrc_desc_pool_destroy(struct intel_guc *guc)
+{
+	guc->lrc_desc_pool_vaddr = NULL;
+	i915_vma_unpin_and_release(&guc->lrc_desc_pool, I915_VMA_RELEASE_MAP);
+}
+
+static inline bool guc_submission_initialized(struct intel_guc *guc)
+{
+	return guc->lrc_desc_pool_vaddr != NULL;
+}
+
+static inline void reset_lrc_desc(struct intel_guc *guc, u32 id)
+{
+	if (likely(guc_submission_initialized(guc))) {
+		struct guc_lrc_desc *desc = __get_lrc_desc(guc, id);
+		unsigned long flags;
+
+		memset(desc, 0, sizeof(*desc));
+
+		/*
+		 * The xarray API doesn't have an xa_erase_irqsave wrapper, so
+		 * call the lower-level functions directly.
+		 */
+		xa_lock_irqsave(&guc->context_lookup, flags);
+		__xa_erase(&guc->context_lookup, id);
+		xa_unlock_irqrestore(&guc->context_lookup, flags);
+	}
+}
+
+static inline bool lrc_desc_registered(struct intel_guc *guc, u32 id)
+{
+	return __get_context(guc, id);
 }
 
-static struct i915_request *schedule_in(struct i915_request *rq, int idx)
+static inline void set_lrc_desc_registered(struct intel_guc *guc, u32 id,
+					   struct intel_context *ce)
 {
-	trace_i915_request_in(rq, idx);
+	unsigned long flags;
 
 	/*
-	 * Currently we are not tracking the rq->context being inflight
-	 * (ce->inflight = rq->engine). It is only used by the execlists
-	 * backend at the moment, a similar counting strategy would be
-	 * required if we generalise the inflight tracking.
+	 * The xarray API doesn't have an xa_store_irqsave wrapper, so call the
+	 * lower-level functions directly.
 	 */
+	xa_lock_irqsave(&guc->context_lookup, flags);
+	__xa_store(&guc->context_lookup, id, ce, GFP_ATOMIC);
+	xa_unlock_irqrestore(&guc->context_lookup, flags);
+}
+
+static int guc_submission_busy_loop(struct intel_guc *guc,
+				    const u32 *action,
+				    u32 len,
+				    u32 g2h_len_dw,
+				    bool loop)
+{
+	int err;
+
+	err = intel_guc_send_busy_loop(guc, action, len, g2h_len_dw, loop);
 
-	__intel_gt_pm_get(rq->engine->gt);
-	return i915_request_get(rq);
+	if (!err && g2h_len_dw)
+		atomic_inc(&guc->outstanding_submission_g2h);
+
+	return err;
 }
 
-static void schedule_out(struct i915_request *rq)
+int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
+				   atomic_t *wait_var,
+				   bool interruptible,
+				   long timeout)
 {
-	trace_i915_request_out(rq);
+	const int state = interruptible ?
+		TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
+	DEFINE_WAIT(wait);
+
+	might_sleep();
+	GEM_BUG_ON(timeout < 0);
+
+	if (!atomic_read(wait_var))
+		return 0;
+
+	if (!timeout)
+		return -ETIME;
+
+	for (;;) {
+		prepare_to_wait(&guc->ct.wq, &wait, state);
 
-	intel_gt_pm_put_async(rq->engine->gt);
-	i915_request_put(rq);
+		if (!atomic_read(wait_var))
+			break;
+
+		if (signal_pending_state(state, current)) {
+			timeout = -ERESTARTSYS;
+			break;
+		}
+
+		if (!timeout) {
+			timeout = -ETIME;
+			break;
+		}
+
+		timeout = io_schedule_timeout(timeout);
+	}
+	finish_wait(&guc->ct.wq, &wait);
+
+	return (timeout < 0) ? timeout : 0;
 }
 
-static void __guc_dequeue(struct intel_engine_cs *engine)
+int intel_guc_wait_for_idle(struct intel_guc *guc, long timeout)
 {
-	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct i915_sched_engine * const sched_engine = engine->sched_engine;
-	struct i915_request **first = execlists->inflight;
-	struct i915_request ** const last_port = first + execlists->port_mask;
-	struct i915_request *last = first[0];
-	struct i915_request **port;
-	bool submit = false;
-	struct rb_node *rb;
+	bool interruptible = true;
 
-	lockdep_assert_held(&sched_engine->lock);
+	if (unlikely(timeout < 0))
+		timeout = -timeout, interruptible = false;
 
-	if (last) {
-		if (*++first)
-			return;
+	return intel_guc_wait_for_pending_msg(guc, &guc->outstanding_submission_g2h,
+					      interruptible, timeout);
+}
+
+static int guc_lrc_desc_pin(struct intel_context *ce, bool loop);
 
-		last = NULL;
+static int guc_add_request(struct intel_guc *guc, struct i915_request *rq)
+{
+	int err = 0;
+	struct intel_context *ce = rq->context;
+	u32 action[3];
+	int len = 0;
+	u32 g2h_len_dw = 0;
+	bool enabled;
+
+	/*
+	 * Corner case where requests were sitting in the priority list or a
+	 * request resubmitted after the context was banned.
+	 */
+	if (unlikely(intel_context_is_banned(ce))) {
+		i915_request_put(i915_request_mark_eio(rq));
+		intel_engine_signal_breadcrumbs(ce->engine);
+		goto out;
 	}
 
+	GEM_BUG_ON(!atomic_read(&ce->guc_id_ref));
+	GEM_BUG_ON(context_guc_id_invalid(ce));
+
 	/*
-	 * We write directly into the execlists->inflight queue and don't use
-	 * the execlists->pending queue, as we don't have a distinct switch
-	 * event.
+	 * Corner case where the GuC firmware was blown away and reloaded while
+	 * this context was pinned.
 	 */
-	port = first;
+	if (unlikely(!lrc_desc_registered(guc, ce->guc_id))) {
+		err = guc_lrc_desc_pin(ce, false);
+		if (unlikely(err))
+			goto out;
+	}
+
+	if (unlikely(context_blocked(ce)))
+		goto out;
+
+	enabled = context_enabled(ce);
+
+	if (!enabled) {
+		action[len++] = INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET;
+		action[len++] = ce->guc_id;
+		action[len++] = GUC_CONTEXT_ENABLE;
+		set_context_pending_enable(ce);
+		intel_context_get(ce);
+		g2h_len_dw = G2H_LEN_DW_SCHED_CONTEXT_MODE_SET;
+	} else {
+		action[len++] = INTEL_GUC_ACTION_SCHED_CONTEXT;
+		action[len++] = ce->guc_id;
+	}
+
+	err = intel_guc_send_nb(guc, action, len, g2h_len_dw);
+	if (!enabled && !err) {
+		trace_intel_context_sched_enable(ce);
+		atomic_inc(&guc->outstanding_submission_g2h);
+		set_context_enabled(ce);
+	} else if (!enabled) {
+		clr_context_pending_enable(ce);
+		intel_context_put(ce);
+	}
+	if (likely(!err))
+		trace_i915_request_guc_submit(rq);
+
+out:
+	return err;
+}
+
+static inline void guc_set_lrc_tail(struct i915_request *rq)
+{
+	rq->context->lrc_reg_state[CTX_RING_TAIL] =
+		intel_ring_set_tail(rq->ring, rq->tail);
+}
+
+static inline int rq_prio(const struct i915_request *rq)
+{
+	return rq->sched.attr.priority;
+}
+
+static int guc_dequeue_one_context(struct intel_guc *guc)
+{
+	struct i915_sched_engine * const sched_engine = guc->sched_engine;
+	struct i915_request *last = NULL;
+	bool submit = false;
+	struct rb_node *rb;
+	int ret;
+
+	lockdep_assert_held(&sched_engine->lock);
+
+	if (guc->stalled_request) {
+		submit = true;
+		last = guc->stalled_request;
+		goto resubmit;
+	}
+
 	while ((rb = rb_first_cached(&sched_engine->queue))) {
 		struct i915_priolist *p = to_priolist(rb);
 		struct i915_request *rq, *rn;
 
 		priolist_for_each_request_consume(rq, rn, p) {
-			if (last && rq->context != last->context) {
-				if (port == last_port)
-					goto done;
-
-				*port = schedule_in(last,
-						    port - execlists->inflight);
-				port++;
-			}
+			if (last && rq->context != last->context)
+				goto done;
 
 			list_del_init(&rq->sched.link);
+
 			__i915_request_submit(rq);
-			submit = true;
+
+			trace_i915_request_in(rq, 0);
 			last = rq;
+			submit = true;
 		}
 
 		rb_erase_cached(&p->node, &sched_engine->queue);
 		i915_priolist_free(p);
 	}
 done:
-	sched_engine->queue_priority_hint =
-		rb ? to_priolist(rb)->priority : INT_MIN;
 	if (submit) {
-		*port = schedule_in(last, port - execlists->inflight);
-		*++port = NULL;
-		guc_submit(engine, first, port);
+		guc_set_lrc_tail(last);
+resubmit:
+		ret = guc_add_request(guc, last);
+		if (unlikely(ret == -EPIPE))
+			goto deadlk;
+		else if (ret == -EBUSY) {
+			tasklet_schedule(&sched_engine->tasklet);
+			guc->stalled_request = last;
+			return false;
+		}
 	}
-	execlists->active = execlists->inflight;
+
+	guc->stalled_request = NULL;
+	return submit;
+
+deadlk:
+	sched_engine->tasklet.callback = NULL;
+	tasklet_disable_nosync(&sched_engine->tasklet);
+	return false;
 }
 
 static void guc_submission_tasklet(struct tasklet_struct *t)
 {
 	struct i915_sched_engine *sched_engine =
 		from_tasklet(sched_engine, t, tasklet);
-	struct intel_engine_cs * const engine = sched_engine->private_data;
-	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct i915_request **port, *rq;
 	unsigned long flags;
+	bool loop;
+
+	spin_lock_irqsave(&sched_engine->lock, flags);
 
-	spin_lock_irqsave(&engine->sched_engine->lock, flags);
+	do {
+		loop = guc_dequeue_one_context(sched_engine->private_data);
+	} while (loop);
 
-	for (port = execlists->inflight; (rq = *port); port++) {
-		if (!i915_request_completed(rq))
-			break;
+	i915_sched_engine_reset_on_empty(sched_engine);
 
-		schedule_out(rq);
-	}
-	if (port != execlists->inflight) {
-		int idx = port - execlists->inflight;
-		int rem = ARRAY_SIZE(execlists->inflight) - idx;
-		memmove(execlists->inflight, port, rem * sizeof(*port));
+	spin_unlock_irqrestore(&sched_engine->lock, flags);
+}
+
+static void cs_irq_handler(struct intel_engine_cs *engine, u16 iir)
+{
+	if (iir & GT_RENDER_USER_INTERRUPT)
+		intel_engine_signal_breadcrumbs(engine);
+}
+
+static void __guc_context_destroy(struct intel_context *ce);
+static void release_guc_id(struct intel_guc *guc, struct intel_context *ce);
+static void guc_signal_context_fence(struct intel_context *ce);
+static void guc_cancel_context_requests(struct intel_context *ce);
+static void guc_blocked_fence_complete(struct intel_context *ce);
+
+static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc)
+{
+	struct intel_context *ce;
+	unsigned long index, flags;
+	bool pending_disable, pending_enable, deregister, destroyed, banned;
+
+	xa_for_each(&guc->context_lookup, index, ce) {
+		/* Flush context */
+		spin_lock_irqsave(&ce->guc_state.lock, flags);
+		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+
+		/*
+		 * Once we are at this point submission_disabled() is guaranteed
+		 * Once we are at this point, submission_disabled() is
+		 * guaranteed to be visible to all callers who set the below
+		 * flags (see the flush above and the flushes in reset_prepare).
+		 * If submission_disabled() is set, the caller shouldn't set
+		 * these flags.
+
+		destroyed = context_destroyed(ce);
+		pending_enable = context_pending_enable(ce);
+		pending_disable = context_pending_disable(ce);
+		deregister = context_wait_for_deregister_to_register(ce);
+		banned = context_banned(ce);
+		init_sched_state(ce);
+
+		if (pending_enable || destroyed || deregister) {
+			atomic_dec(&guc->outstanding_submission_g2h);
+			if (deregister)
+				guc_signal_context_fence(ce);
+			if (destroyed) {
+				release_guc_id(guc, ce);
+				__guc_context_destroy(ce);
+			}
+			if (pending_enable || deregister)
+				intel_context_put(ce);
+		}
+
+		/* Not mutually exclusive with the above if statement. */
+		if (pending_disable) {
+			guc_signal_context_fence(ce);
+			if (banned) {
+				guc_cancel_context_requests(ce);
+				intel_engine_signal_breadcrumbs(ce->engine);
+			}
+			intel_context_sched_disable_unpin(ce);
+			atomic_dec(&guc->outstanding_submission_g2h);
+			spin_lock_irqsave(&ce->guc_state.lock, flags);
+			guc_blocked_fence_complete(ce);
+			spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+
+			intel_context_put(ce);
+		}
 	}
+}
+
+static inline bool
+submission_disabled(struct intel_guc *guc)
+{
+	struct i915_sched_engine * const sched_engine = guc->sched_engine;
 
-	__guc_dequeue(engine);
+	return unlikely(!sched_engine ||
+			!__tasklet_is_enabled(&sched_engine->tasklet));
+}
 
-	i915_sched_engine_reset_on_empty(engine->sched_engine);
+static void disable_submission(struct intel_guc *guc)
+{
+	struct i915_sched_engine * const sched_engine = guc->sched_engine;
 
-	spin_unlock_irqrestore(&engine->sched_engine->lock, flags);
+	if (__tasklet_is_enabled(&sched_engine->tasklet)) {
+		GEM_BUG_ON(!guc->ct.enabled);
+		__tasklet_disable_sync_once(&sched_engine->tasklet);
+		sched_engine->tasklet.callback = NULL;
+	}
 }
 
-static void cs_irq_handler(struct intel_engine_cs *engine, u16 iir)
+static void enable_submission(struct intel_guc *guc)
 {
-	if (iir & GT_RENDER_USER_INTERRUPT) {
-		intel_engine_signal_breadcrumbs(engine);
-		tasklet_hi_schedule(&engine->sched_engine->tasklet);
+	struct i915_sched_engine * const sched_engine = guc->sched_engine;
+	unsigned long flags;
+
+	spin_lock_irqsave(&guc->sched_engine->lock, flags);
+	sched_engine->tasklet.callback = guc_submission_tasklet;
+	wmb();
+	if (!__tasklet_is_enabled(&sched_engine->tasklet) &&
+	    __tasklet_enable(&sched_engine->tasklet)) {
+		GEM_BUG_ON(!guc->ct.enabled);
+
+		/* And kick in case we missed a new request submission. */
+		tasklet_hi_schedule(&sched_engine->tasklet);
 	}
+	spin_unlock_irqrestore(&guc->sched_engine->lock, flags);
 }
 
-static void guc_reset_prepare(struct intel_engine_cs *engine)
+static void guc_flush_submissions(struct intel_guc *guc)
 {
-	ENGINE_TRACE(engine, "\n");
+	struct i915_sched_engine * const sched_engine = guc->sched_engine;
+	unsigned long flags;
+
+	spin_lock_irqsave(&sched_engine->lock, flags);
+	spin_unlock_irqrestore(&sched_engine->lock, flags);
+}
+
+void intel_guc_submission_reset_prepare(struct intel_guc *guc)
+{
+	int i;
+
+	if (unlikely(!guc_submission_initialized(guc)))
+		/* Reset called during driver load? GuC not yet initialised! */
+		return;
+
+	intel_gt_park_heartbeats(guc_to_gt(guc));
+	disable_submission(guc);
+	guc->interrupts.disable(guc);
+
+	/* Flush IRQ handler */
+	spin_lock_irq(&guc_to_gt(guc)->irq_lock);
+	spin_unlock_irq(&guc_to_gt(guc)->irq_lock);
+
+	guc_flush_submissions(guc);
 
 	/*
-	 * Prevent request submission to the hardware until we have
-	 * completed the reset in i915_gem_reset_finish(). If a request
-	 * is completed by one engine, it may then queue a request
-	 * to a second via its execlists->tasklet *just* as we are
-	 * calling engine->init_hw() and also writing the ELSP.
-	 * Turning off the execlists->tasklet until the reset is over
-	 * prevents the race.
-	 */
-	__tasklet_disable_sync_once(&engine->sched_engine->tasklet);
+	 * Handle any outstanding G2Hs before reset. Call the IRQ handler
+	 * directly on each pass as interrupts have been disabled. We always
+	 * scrub for outstanding G2H as it is possible for
+	 * outstanding_submission_g2h to be incremented after the context state
+	 * update.
+	 */
+	for (i = 0; i < 4 && atomic_read(&guc->outstanding_submission_g2h); ++i) {
+		intel_guc_to_host_event_handler(guc);
+#define wait_for_reset(guc, wait_var) \
+		intel_guc_wait_for_pending_msg(guc, wait_var, false, (HZ / 20))
+		do {
+			wait_for_reset(guc, &guc->outstanding_submission_g2h);
+		} while (!list_empty(&guc->ct.requests.incoming));
+	}
+	scrub_guc_desc_for_outstanding_g2h(guc);
+}
+
+static struct intel_engine_cs *
+guc_virtual_get_sibling(struct intel_engine_cs *ve, unsigned int sibling)
+{
+	struct intel_engine_cs *engine;
+	intel_engine_mask_t tmp, mask = ve->mask;
+	unsigned int num_siblings = 0;
+
+	for_each_engine_masked(engine, ve->gt, mask, tmp)
+		if (num_siblings++ == sibling)
+			return engine;
+
+	return NULL;
+}
+
+static inline struct intel_engine_cs *
+__context_to_physical_engine(struct intel_context *ce)
+{
+	struct intel_engine_cs *engine = ce->engine;
+
+	if (intel_engine_is_virtual(engine))
+		engine = guc_virtual_get_sibling(engine, 0);
+
+	return engine;
 }
 
-static void guc_reset_state(struct intel_context *ce,
-			    struct intel_engine_cs *engine,
-			    u32 head,
-			    bool scrub)
+static void guc_reset_state(struct intel_context *ce, u32 head, bool scrub)
 {
+	struct intel_engine_cs *engine = __context_to_physical_engine(ce);
+
+	if (intel_context_is_banned(ce))
+		return;
+
 	GEM_BUG_ON(!intel_context_is_pinned(ce));
 
 	/*
@@ -315,37 +753,131 @@ static void guc_reset_state(struct intel_context *ce,
 	lrc_update_regs(ce, engine, head);
 }
 
-static void guc_reset_rewind(struct intel_engine_cs *engine, bool stalled)
+static void guc_reset_nop(struct intel_engine_cs *engine)
 {
-	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct i915_request *rq;
+}
+
+static void guc_rewind_nop(struct intel_engine_cs *engine, bool stalled)
+{
+}
+
+static void
+__unwind_incomplete_requests(struct intel_context *ce)
+{
+	struct i915_request *rq, *rn;
+	struct list_head *pl;
+	int prio = I915_PRIORITY_INVALID;
+	struct i915_sched_engine * const sched_engine =
+		ce->engine->sched_engine;
 	unsigned long flags;
 
-	spin_lock_irqsave(&engine->sched_engine->lock, flags);
+	spin_lock_irqsave(&sched_engine->lock, flags);
+	spin_lock(&ce->guc_active.lock);
+	list_for_each_entry_safe(rq, rn,
+				 &ce->guc_active.requests,
+				 sched.link) {
+		if (i915_request_completed(rq))
+			continue;
+
+		list_del_init(&rq->sched.link);
+		spin_unlock(&ce->guc_active.lock);
+
+		__i915_request_unsubmit(rq);
+
+		/* Push the request back into the queue for later resubmission. */
+		GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
+		if (rq_prio(rq) != prio) {
+			prio = rq_prio(rq);
+			pl = i915_sched_lookup_priolist(sched_engine, prio);
+		}
+		GEM_BUG_ON(i915_sched_engine_is_empty(sched_engine));
+
+		list_add_tail(&rq->sched.link, pl);
+		set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+
+		spin_lock(&ce->guc_active.lock);
+	}
+	spin_unlock(&ce->guc_active.lock);
+	spin_unlock_irqrestore(&sched_engine->lock, flags);
+}
+
+static void __guc_reset_context(struct intel_context *ce, bool stalled)
+{
+	struct i915_request *rq;
+	u32 head;
+
+	intel_context_get(ce);
 
-	/* Push back any incomplete requests for replay after the reset. */
-	rq = execlists_unwind_incomplete_requests(execlists);
-	if (!rq)
-		goto out_unlock;
+	/*
+	 * GuC will implicitly mark the context as non-schedulable
+	 * when it sends the reset notification. Make sure our state
+	 * reflects this change. The context will be marked enabled
+	 * on resubmission.
+	 */
+	clr_context_enabled(ce);
+
+	rq = intel_context_find_active_request(ce);
+	if (!rq) {
+		head = ce->ring->tail;
+		stalled = false;
+		goto out_replay;
+	}
 
 	if (!i915_request_started(rq))
 		stalled = false;
 
+	GEM_BUG_ON(i915_active_is_idle(&ce->active));
+	head = intel_ring_wrap(ce->ring, rq->head);
 	__i915_request_reset(rq, stalled);
-	guc_reset_state(rq->context, engine, rq->head, stalled);
 
-out_unlock:
-	spin_unlock_irqrestore(&engine->sched_engine->lock, flags);
+out_replay:
+	guc_reset_state(ce, head, stalled);
+	__unwind_incomplete_requests(ce);
+	intel_context_put(ce);
+}
+
+void intel_guc_submission_reset(struct intel_guc *guc, bool stalled)
+{
+	struct intel_context *ce;
+	unsigned long index;
+
+	if (unlikely(!guc_submission_initialized(guc)))
+		/* Reset called during driver load? GuC not yet initialised! */
+		return;
+
+	xa_for_each(&guc->context_lookup, index, ce)
+		if (intel_context_is_pinned(ce))
+			__guc_reset_context(ce, stalled);
+
+	/* GuC is blown away, drop all references to contexts */
+	xa_destroy(&guc->context_lookup);
 }
 
-static void guc_reset_cancel(struct intel_engine_cs *engine)
+static void guc_cancel_context_requests(struct intel_context *ce)
+{
+	struct i915_sched_engine *sched_engine = ce_to_guc(ce)->sched_engine;
+	struct i915_request *rq;
+	unsigned long flags;
+
+	/* Mark all executing requests as skipped. */
+	spin_lock_irqsave(&sched_engine->lock, flags);
+	spin_lock(&ce->guc_active.lock);
+	list_for_each_entry(rq, &ce->guc_active.requests, sched.link)
+		i915_request_put(i915_request_mark_eio(rq));
+	spin_unlock(&ce->guc_active.lock);
+	spin_unlock_irqrestore(&sched_engine->lock, flags);
+}
+
+static void
+guc_cancel_sched_engine_requests(struct i915_sched_engine *sched_engine)
 {
-	struct i915_sched_engine * const sched_engine = engine->sched_engine;
 	struct i915_request *rq, *rn;
 	struct rb_node *rb;
 	unsigned long flags;
 
-	ENGINE_TRACE(engine, "\n");
+	/* Can be called during boot if GuC fails to load */
+	if (!sched_engine)
+		return;
 
 	/*
 	 * Before we call engine->cancel_requests(), we should have exclusive
@@ -363,21 +895,16 @@ static void guc_reset_cancel(struct intel_engine_cs *engine)
 	 */
 	spin_lock_irqsave(&sched_engine->lock, flags);
 
-	/* Mark all executing requests as skipped. */
-	list_for_each_entry(rq, &sched_engine->requests, sched.link) {
-		i915_request_set_error_once(rq, -EIO);
-		i915_request_mark_complete(rq);
-	}
-
 	/* Flush the queued requests to the timeline list (for retiring). */
 	while ((rb = rb_first_cached(&sched_engine->queue))) {
 		struct i915_priolist *p = to_priolist(rb);
 
 		priolist_for_each_request_consume(rq, rn, p) {
 			list_del_init(&rq->sched.link);
+
 			__i915_request_submit(rq);
-			dma_fence_set_error(&rq->fence, -EIO);
-			i915_request_mark_complete(rq);
+
+			i915_request_put(i915_request_mark_eio(rq));
 		}
 
 		rb_erase_cached(&p->node, &sched_engine->queue);
@@ -392,15 +919,41 @@ static void guc_reset_cancel(struct intel_engine_cs *engine)
 	spin_unlock_irqrestore(&sched_engine->lock, flags);
 }
 
-static void guc_reset_finish(struct intel_engine_cs *engine)
+void intel_guc_submission_cancel_requests(struct intel_guc *guc)
 {
-	if (__tasklet_enable(&engine->sched_engine->tasklet))
-		/* And kick in case we missed a new request submission. */
-		tasklet_hi_schedule(&engine->sched_engine->tasklet);
+	struct intel_context *ce;
+	unsigned long index;
 
-	ENGINE_TRACE(engine, "depth->%d\n",
-		     atomic_read(&engine->sched_engine->tasklet.count));
-}
+	xa_for_each(&guc->context_lookup, index, ce)
+		if (intel_context_is_pinned(ce))
+			guc_cancel_context_requests(ce);
+
+	guc_cancel_sched_engine_requests(guc->sched_engine);
+
+	/* GuC is blown away, drop all references to contexts */
+	xa_destroy(&guc->context_lookup);
+}
+
+void intel_guc_submission_reset_finish(struct intel_guc *guc)
+{
+	/* Reset called during driver load or during wedge? */
+	if (unlikely(!guc_submission_initialized(guc) ||
+		     test_bit(I915_WEDGED, &guc_to_gt(guc)->reset.flags)))
+		return;
+
+	/*
+	 * Technically possible for either of these values to be non-zero here,
+	 * but very unlikely and harmless. Regardless, let's add a warning so
+	 * we can see in CI if this happens frequently or is a precursor to
+	 * taking down the machine.
+	 */
+	GEM_WARN_ON(atomic_read(&guc->outstanding_submission_g2h));
+	atomic_set(&guc->outstanding_submission_g2h, 0);
+
+	intel_guc_global_policies_update(guc);
+	enable_submission(guc);
+	intel_gt_unpark_heartbeats(guc_to_gt(guc));
+}
 
 /*
  * Set up the memory resources to be shared with the GuC (via the GGTT)
@@ -410,72 +963,895 @@ int intel_guc_submission_init(struct intel_guc *guc)
 {
 	int ret;
 
-	if (guc->stage_desc_pool)
+	if (guc->lrc_desc_pool)
 		return 0;
 
-	ret = guc_stage_desc_pool_create(guc);
+	ret = guc_lrc_desc_pool_create(guc);
 	if (ret)
 		return ret;
 	/*
 	 * Keep static analysers happy, let them know that we allocated the
 	 * vma after testing that it didn't exist earlier.
 	 */
-	GEM_BUG_ON(!guc->stage_desc_pool);
+	GEM_BUG_ON(!guc->lrc_desc_pool);
+
+	xa_init_flags(&guc->context_lookup, XA_FLAGS_LOCK_IRQ);
+
+	spin_lock_init(&guc->contexts_lock);
+	INIT_LIST_HEAD(&guc->guc_id_list);
+	ida_init(&guc->guc_ids);
 
 	return 0;
 }
 
 void intel_guc_submission_fini(struct intel_guc *guc)
 {
-	if (guc->stage_desc_pool) {
-		guc_stage_desc_pool_destroy(guc);
+	if (!guc->lrc_desc_pool)
+		return;
+
+	guc_lrc_desc_pool_destroy(guc);
+	i915_sched_engine_put(guc->sched_engine);
+}
+
+static inline void queue_request(struct i915_sched_engine *sched_engine,
+				 struct i915_request *rq,
+				 int prio)
+{
+	GEM_BUG_ON(!list_empty(&rq->sched.link));
+	list_add_tail(&rq->sched.link,
+		      i915_sched_lookup_priolist(sched_engine, prio));
+	set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+}
+
+static int guc_bypass_tasklet_submit(struct intel_guc *guc,
+				     struct i915_request *rq)
+{
+	int ret;
+
+	__i915_request_submit(rq);
+
+	trace_i915_request_in(rq, 0);
+
+	guc_set_lrc_tail(rq);
+	ret = guc_add_request(guc, rq);
+	if (ret == -EBUSY)
+		guc->stalled_request = rq;
+
+	if (unlikely(ret == -EPIPE))
+		disable_submission(guc);
+
+	return ret;
+}
+
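+/*
+ * Submission entry point: when nothing is queued or stalled, bypass the
+ * tasklet and hand the request straight to the GuC; otherwise queue it on
+ * the sched_engine for the tasklet to submit in priority order.
+ */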
+static void guc_submit_request(struct i915_request *rq)
+{
+	struct i915_sched_engine *sched_engine = rq->engine->sched_engine;
+	struct intel_guc *guc = &rq->engine->gt->uc.guc;
+	unsigned long flags;
+
+	/* Will be called from irq-context when using foreign fences. */
+	spin_lock_irqsave(&sched_engine->lock, flags);
+
+	if (submission_disabled(guc) || guc->stalled_request ||
+	    !i915_sched_engine_is_empty(sched_engine))
+		queue_request(sched_engine, rq, rq_prio(rq));
+	else if (guc_bypass_tasklet_submit(guc, rq) == -EBUSY)
+		tasklet_hi_schedule(&sched_engine->tasklet);
+
+	spin_unlock_irqrestore(&sched_engine->lock, flags);
+}
+
+#define GUC_ID_START	64	/* First 64 guc_ids reserved */
+static int new_guc_id(struct intel_guc *guc)
+{
+	return ida_simple_get(&guc->guc_ids, GUC_ID_START,
+			      GUC_MAX_LRC_DESCRIPTORS, GFP_KERNEL |
+			      __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+}
+
+static void __release_guc_id(struct intel_guc *guc, struct intel_context *ce)
+{
+	if (!context_guc_id_invalid(ce)) {
+		ida_simple_remove(&guc->guc_ids, ce->guc_id);
+		reset_lrc_desc(guc, ce->guc_id);
+		set_context_guc_id_invalid(ce);
+	}
+	if (!list_empty(&ce->guc_id_link))
+		list_del_init(&ce->guc_id_link);
+}
+
+static void release_guc_id(struct intel_guc *guc, struct intel_context *ce)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&guc->contexts_lock, flags);
+	__release_guc_id(guc, ce);
+	spin_unlock_irqrestore(&guc->contexts_lock, flags);
+}
+
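+/*
+ * Steal a guc_id from a context on the guc_id_list. Contexts on this list
+ * hold a valid guc_id but have no requests in flight, so their guc_id can be
+ * safely reassigned to another context.
+ */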
+static int steal_guc_id(struct intel_guc *guc)
+{
+	struct intel_context *ce;
+	int guc_id;
+
+	if (!list_empty(&guc->guc_id_list)) {
+		ce = list_first_entry(&guc->guc_id_list,
+				      struct intel_context,
+				      guc_id_link);
+
+		GEM_BUG_ON(atomic_read(&ce->guc_id_ref));
+		GEM_BUG_ON(context_guc_id_invalid(ce));
+
+		list_del_init(&ce->guc_id_link);
+		guc_id = ce->guc_id;
+		set_context_guc_id_invalid(ce);
+		return guc_id;
+	} else {
+		return -EAGAIN;
+	}
+}
+
+static int assign_guc_id(struct intel_guc *guc, u16 *out)
+{
+	int ret;
+
+	ret = new_guc_id(guc);
+	if (unlikely(ret < 0)) {
+		ret = steal_guc_id(guc);
+		if (ret < 0)
+			return ret;
+	}
+
+	*out = ret;
+	return 0;
+}
+
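+/*
+ * Pin a guc_id for a context: reuse the current one if still valid, otherwise
+ * allocate or steal a new one. Returns 1 if a new guc_id was assigned, 0 if
+ * the existing guc_id was reused, or a negative error code on failure.
+ */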
+#define PIN_GUC_ID_TRIES	4
+static int pin_guc_id(struct intel_guc *guc, struct intel_context *ce)
+{
+	int ret = 0;
+	unsigned long flags, tries = PIN_GUC_ID_TRIES;
+
+	GEM_BUG_ON(atomic_read(&ce->guc_id_ref));
+
+try_again:
+	spin_lock_irqsave(&guc->contexts_lock, flags);
+
+	if (context_guc_id_invalid(ce)) {
+		ret = assign_guc_id(guc, &ce->guc_id);
+		if (ret)
+			goto out_unlock;
+		ret = 1;	/* Indicates newly assigned guc_id */
+	}
+	if (!list_empty(&ce->guc_id_link))
+		list_del_init(&ce->guc_id_link);
+	atomic_inc(&ce->guc_id_ref);
+
+out_unlock:
+	spin_unlock_irqrestore(&guc->contexts_lock, flags);
+
+	/*
+	 * -EAGAIN indicates no guc_ids are available, let's retire any
+	 * outstanding requests to see if that frees up a guc_id. If the first
+	 * retire didn't help, insert a sleep with the timeslice duration before
+	 * attempting to retire more requests. Double the sleep period each
+	 * subsequent pass before finally giving up. The sleep period has a
+	 * maximum of 100ms and a minimum of 1ms.
+	 */
+	if (ret == -EAGAIN && --tries) {
+		if (PIN_GUC_ID_TRIES - tries > 1) {
+			unsigned int timeslice_shifted =
+				ce->engine->props.timeslice_duration_ms <<
+				(PIN_GUC_ID_TRIES - tries - 2);
+			unsigned int max = min_t(unsigned int, 100,
+						 timeslice_shifted);
+
+			msleep(max_t(unsigned int, max, 1));
+		}
+		intel_gt_retire_requests(guc_to_gt(guc));
+		goto try_again;
+	}
+
+	return ret;
+}
+
+static void unpin_guc_id(struct intel_guc *guc, struct intel_context *ce)
+{
+	unsigned long flags;
+
+	GEM_BUG_ON(atomic_read(&ce->guc_id_ref) < 0);
+
+	spin_lock_irqsave(&guc->contexts_lock, flags);
+	if (!context_guc_id_invalid(ce) && list_empty(&ce->guc_id_link) &&
+	    !atomic_read(&ce->guc_id_ref))
+		list_add_tail(&ce->guc_id_link, &guc->guc_id_list);
+	spin_unlock_irqrestore(&guc->contexts_lock, flags);
+}
+
+static int __guc_action_register_context(struct intel_guc *guc,
+					 u32 guc_id,
+					 u32 offset,
+					 bool loop)
+{
+	u32 action[] = {
+		INTEL_GUC_ACTION_REGISTER_CONTEXT,
+		guc_id,
+		offset,
+	};
+
+	return guc_submission_busy_loop(guc, action, ARRAY_SIZE(action), 0, loop);
+}
+
+static int register_context(struct intel_context *ce, bool loop)
+{
+	struct intel_guc *guc = ce_to_guc(ce);
+	u32 offset = intel_guc_ggtt_offset(guc, guc->lrc_desc_pool) +
+		ce->guc_id * sizeof(struct guc_lrc_desc);
+
+	trace_intel_context_register(ce);
+
+	return __guc_action_register_context(guc, ce->guc_id, offset, loop);
+}
+
+static int __guc_action_deregister_context(struct intel_guc *guc,
+					   u32 guc_id,
+					   bool loop)
+{
+	u32 action[] = {
+		INTEL_GUC_ACTION_DEREGISTER_CONTEXT,
+		guc_id,
+	};
+
+	return guc_submission_busy_loop(guc, action, ARRAY_SIZE(action),
+					G2H_LEN_DW_DEREGISTER_CONTEXT, loop);
+}
+
+static int deregister_context(struct intel_context *ce, u32 guc_id, bool loop)
+{
+	struct intel_guc *guc = ce_to_guc(ce);
+
+	trace_intel_context_deregister(ce);
+
+	return __guc_action_deregister_context(guc, guc_id, loop);
+}
+
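+/*
+ * Convert a physical engine mask into a mask relative to the first instance
+ * of the engine's class, as consumed by the LRC descriptor's
+ * engine_submit_mask field.
+ */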
+static intel_engine_mask_t adjust_engine_mask(u8 class, intel_engine_mask_t mask)
+{
+	switch (class) {
+	case RENDER_CLASS:
+		return mask >> RCS0;
+	case VIDEO_ENHANCEMENT_CLASS:
+		return mask >> VECS0;
+	case VIDEO_DECODE_CLASS:
+		return mask >> VCS0;
+	case COPY_ENGINE_CLASS:
+		return mask >> BCS0;
+	default:
+		GEM_BUG_ON("Invalid Class");
+		return 0;
 	}
 }
 
+static void guc_context_policy_init(struct intel_engine_cs *engine,
+				    struct guc_lrc_desc *desc)
+{
+	desc->policy_flags = 0;
+
+	if (engine->flags & I915_ENGINE_WANT_FORCED_PREEMPTION)
+		desc->policy_flags |= CONTEXT_POLICY_FLAG_PREEMPT_TO_IDLE;
+
+	/* NB: For both of these, zero means disabled. */
+	desc->execution_quantum = engine->props.timeslice_duration_ms * 1000;
+	desc->preemption_timeout = engine->props.preempt_timeout_ms * 1000;
+}
+
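+/*
+ * Fill in and register the LRC descriptor for a context with the GuC. If the
+ * guc_id is already registered (stolen from another context, or the LRC
+ * descriptor address has changed), the old context is deregistered first and
+ * registration is completed from the deregister-done G2H handler.
+ */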
+static int guc_lrc_desc_pin(struct intel_context *ce, bool loop)
+{
+	struct intel_runtime_pm *runtime_pm =
+		&ce->engine->gt->i915->runtime_pm;
+	struct intel_engine_cs *engine = ce->engine;
+	struct intel_guc *guc = &engine->gt->uc.guc;
+	u32 desc_idx = ce->guc_id;
+	struct guc_lrc_desc *desc;
+	bool context_registered;
+	intel_wakeref_t wakeref;
+	int ret = 0;
+
+	GEM_BUG_ON(!engine->mask);
+
+	/*
+	 * Ensure the LRC and CT vmas are in the same region, as the write
+	 * barrier is done based on the CT vma region.
+	 */
+	GEM_BUG_ON(i915_gem_object_is_lmem(guc->ct.vma->obj) !=
+		   i915_gem_object_is_lmem(ce->ring->vma->obj));
+
+	context_registered = lrc_desc_registered(guc, desc_idx);
+
+	reset_lrc_desc(guc, desc_idx);
+	set_lrc_desc_registered(guc, desc_idx, ce);
+
+	desc = __get_lrc_desc(guc, desc_idx);
+	desc->engine_class = engine_class_to_guc_class(engine->class);
+	desc->engine_submit_mask = adjust_engine_mask(engine->class,
+						      engine->mask);
+	desc->hw_context_desc = ce->lrc.lrca;
+	desc->priority = GUC_CLIENT_PRIORITY_KMD_NORMAL;
+	desc->context_flags = CONTEXT_REGISTRATION_FLAG_KMD;
+	guc_context_policy_init(engine, desc);
+	init_sched_state(ce);
+
+	/*
+	 * The context_lookup xarray is used to determine if the hardware
+	 * context is currently registered. There are two cases in which it
+	 * could be registered: either the guc_id has been stolen from another
+	 * context, or the LRC descriptor address of this context has changed.
+	 * In either case the context needs to be deregistered with the
+	 * GuC before registering this context.
+	 */
+	if (context_registered) {
+		trace_intel_context_steal_guc_id(ce);
+		if (!loop) {
+			set_context_wait_for_deregister_to_register(ce);
+			intel_context_get(ce);
+		} else {
+			bool disabled;
+			unsigned long flags;
+
+			/* Seal race with Reset */
+			spin_lock_irqsave(&ce->guc_state.lock, flags);
+			disabled = submission_disabled(guc);
+			if (likely(!disabled)) {
+				set_context_wait_for_deregister_to_register(ce);
+				intel_context_get(ce);
+			}
+			spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+			if (unlikely(disabled)) {
+				reset_lrc_desc(guc, desc_idx);
+				return 0;	/* Will get registered later */
+			}
+		}
+
+		/*
+		 * If stealing the guc_id, this ce has the same guc_id as the
+		 * context whose guc_id was stolen.
+		 */
+		with_intel_runtime_pm(runtime_pm, wakeref)
+			ret = deregister_context(ce, ce->guc_id, loop);
+		if (unlikely(ret == -EBUSY)) {
+			clr_context_wait_for_deregister_to_register(ce);
+			intel_context_put(ce);
+		} else if (unlikely(ret == -ENODEV)) {
+			ret = 0;	/* Will get registered later */
+		}
+	} else {
+		with_intel_runtime_pm(runtime_pm, wakeref)
+			ret = register_context(ce, loop);
+		if (unlikely(ret == -EBUSY))
+			reset_lrc_desc(guc, desc_idx);
+		else if (unlikely(ret == -ENODEV))
+			ret = 0;	/* Will get registered later */
+	}
+
+	return ret;
+}
+
+static int __guc_context_pre_pin(struct intel_context *ce,
+				 struct intel_engine_cs *engine,
+				 struct i915_gem_ww_ctx *ww,
+				 void **vaddr)
+{
+	return lrc_pre_pin(ce, engine, ww, vaddr);
+}
+
+static int __guc_context_pin(struct intel_context *ce,
+			     struct intel_engine_cs *engine,
+			     void *vaddr)
+{
+	if (i915_ggtt_offset(ce->state) !=
+	    (ce->lrc.lrca & CTX_GTT_ADDRESS_MASK))
+		set_bit(CONTEXT_LRCA_DIRTY, &ce->flags);
+
+	return lrc_pin(ce, engine, vaddr);
+}
+
+static int guc_context_pre_pin(struct intel_context *ce,
+			       struct i915_gem_ww_ctx *ww,
+			       void **vaddr)
+{
+	return __guc_context_pre_pin(ce, ce->engine, ww, vaddr);
+}
+
+static int guc_context_pin(struct intel_context *ce, void *vaddr)
+{
+	return __guc_context_pin(ce, ce->engine, vaddr);
+}
+
+static void guc_context_unpin(struct intel_context *ce)
+{
+	struct intel_guc *guc = ce_to_guc(ce);
+
+	unpin_guc_id(guc, ce);
+	lrc_unpin(ce);
+}
+
+static void guc_context_post_unpin(struct intel_context *ce)
+{
+	lrc_post_unpin(ce);
+}
+
+static void __guc_context_sched_enable(struct intel_guc *guc,
+				       struct intel_context *ce)
+{
+	u32 action[] = {
+		INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET,
+		ce->guc_id,
+		GUC_CONTEXT_ENABLE
+	};
+
+	trace_intel_context_sched_enable(ce);
+
+	guc_submission_busy_loop(guc, action, ARRAY_SIZE(action),
+				 G2H_LEN_DW_SCHED_CONTEXT_MODE_SET, true);
+}
+
+static void __guc_context_sched_disable(struct intel_guc *guc,
+					struct intel_context *ce,
+					u16 guc_id)
+{
+	u32 action[] = {
+		INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET,
+		guc_id,	/* ce->guc_id not stable */
+		GUC_CONTEXT_DISABLE
+	};
+
+	GEM_BUG_ON(guc_id == GUC_INVALID_LRC_ID);
+
+	trace_intel_context_sched_disable(ce);
+
+	guc_submission_busy_loop(guc, action, ARRAY_SIZE(action),
+				 G2H_LEN_DW_SCHED_CONTEXT_MODE_SET, true);
+}
+
+static void guc_blocked_fence_complete(struct intel_context *ce)
+{
+	lockdep_assert_held(&ce->guc_state.lock);
+
+	if (!i915_sw_fence_done(&ce->guc_blocked))
+		i915_sw_fence_complete(&ce->guc_blocked);
+}
+
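+/*
+ * Re-arm the blocked fence so that callers of guc_context_block() have
+ * something to wait on until the schedule-disable G2H is processed.
+ */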
+static void guc_blocked_fence_reinit(struct intel_context *ce)
+{
+	lockdep_assert_held(&ce->guc_state.lock);
+	GEM_BUG_ON(!i915_sw_fence_done(&ce->guc_blocked));
+	i915_sw_fence_fini(&ce->guc_blocked);
+	i915_sw_fence_reinit(&ce->guc_blocked);
+	i915_sw_fence_await(&ce->guc_blocked);
+	i915_sw_fence_commit(&ce->guc_blocked);
+}
+
+static u16 prep_context_pending_disable(struct intel_context *ce)
+{
+	lockdep_assert_held(&ce->guc_state.lock);
+
+	set_context_pending_disable(ce);
+	clr_context_enabled(ce);
+	guc_blocked_fence_reinit(ce);
+	intel_context_get(ce);
+
+	return ce->guc_id;
+}
+
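+/*
+ * Block submission on a context by disabling scheduling in the GuC. Returns
+ * a fence that signals once the schedule-disable completion (G2H) has been
+ * processed.
+ */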
+static struct i915_sw_fence *guc_context_block(struct intel_context *ce)
+{
+	struct intel_guc *guc = ce_to_guc(ce);
+	struct i915_sched_engine *sched_engine = ce->engine->sched_engine;
+	unsigned long flags;
+	struct intel_runtime_pm *runtime_pm = &ce->engine->gt->i915->runtime_pm;
+	intel_wakeref_t wakeref;
+	u16 guc_id;
+	bool enabled;
+
+	spin_lock_irqsave(&sched_engine->lock, flags);
+	incr_context_blocked(ce);
+	spin_unlock_irqrestore(&sched_engine->lock, flags);
+
+	spin_lock_irqsave(&ce->guc_state.lock, flags);
+	enabled = context_enabled(ce);
+	if (unlikely(!enabled || submission_disabled(guc))) {
+		if (enabled)
+			clr_context_enabled(ce);
+		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+		return &ce->guc_blocked;
+	}
+
+	/*
+	 * We add +2 here as the schedule disable complete CTB handler calls
+	 * intel_context_sched_disable_unpin (-2 to pin_count).
+	 */
+	atomic_add(2, &ce->pin_count);
+
+	guc_id = prep_context_pending_disable(ce);
+	spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+
+	with_intel_runtime_pm(runtime_pm, wakeref)
+		__guc_context_sched_disable(guc, ce, guc_id);
+
+	return &ce->guc_blocked;
+}
+
+static void guc_context_unblock(struct intel_context *ce)
+{
+	struct intel_guc *guc = ce_to_guc(ce);
+	struct i915_sched_engine *sched_engine = ce->engine->sched_engine;
+	unsigned long flags;
+	struct intel_runtime_pm *runtime_pm = &ce->engine->gt->i915->runtime_pm;
+	intel_wakeref_t wakeref;
+
+	GEM_BUG_ON(context_enabled(ce));
+
+	if (unlikely(context_blocked(ce) > 1)) {
+		spin_lock_irqsave(&sched_engine->lock, flags);
+		if (likely(context_blocked(ce) > 1))
+			goto decrement;
+		spin_unlock_irqrestore(&sched_engine->lock, flags);
+	}
+
+	spin_lock_irqsave(&ce->guc_state.lock, flags);
+	if (unlikely(submission_disabled(guc) ||
+		     !intel_context_is_pinned(ce) ||
+		     context_pending_disable(ce) ||
+		     context_blocked(ce) > 1)) {
+		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+		goto out;
+	}
+
+	set_context_pending_enable(ce);
+	set_context_enabled(ce);
+	intel_context_get(ce);
+	spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+
+	with_intel_runtime_pm(runtime_pm, wakeref)
+		__guc_context_sched_enable(guc, ce);
+
+out:
+	spin_lock_irqsave(&sched_engine->lock, flags);
+decrement:
+	decr_context_blocked(ce);
+	spin_unlock_irqrestore(&sched_engine->lock, flags);
+}
+
+static void guc_context_cancel_request(struct intel_context *ce,
+				       struct i915_request *rq)
+{
+	if (i915_sw_fence_signaled(&rq->submit)) {
+		struct i915_sw_fence *fence = guc_context_block(ce);
+
+		i915_sw_fence_wait(fence);
+		if (!i915_request_completed(rq)) {
+			__i915_request_skip(rq);
+			guc_reset_state(ce, intel_ring_wrap(ce->ring, rq->head),
+					true);
+		}
+		guc_context_unblock(ce);
+	}
+}
+
+static void __guc_context_set_preemption_timeout(struct intel_guc *guc,
+						 u16 guc_id,
+						 u32 preemption_timeout)
+{
+	u32 action[] = {
+		INTEL_GUC_ACTION_SET_CONTEXT_PREEMPTION_TIMEOUT,
+		guc_id,
+		preemption_timeout
+	};
+
+	intel_guc_send_busy_loop(guc, action, ARRAY_SIZE(action), 0, true);
+}
+
+static void guc_context_ban(struct intel_context *ce, struct i915_request *rq)
+{
+	struct intel_guc *guc = ce_to_guc(ce);
+	struct intel_runtime_pm *runtime_pm =
+		&ce->engine->gt->i915->runtime_pm;
+	intel_wakeref_t wakeref;
+	unsigned long flags;
+
+	guc_flush_submissions(guc);
+
+	spin_lock_irqsave(&ce->guc_state.lock, flags);
+	set_context_banned(ce);
+
+	if (submission_disabled(guc) || (!context_enabled(ce) &&
+	    !context_pending_disable(ce))) {
+		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+
+		guc_cancel_context_requests(ce);
+		intel_engine_signal_breadcrumbs(ce->engine);
+	} else if (!context_pending_disable(ce)) {
+		u16 guc_id;
+
+		/*
+		 * We add +2 here as the schedule disable complete CTB handler
+		 * calls intel_context_sched_disable_unpin (-2 to pin_count).
+		 */
+		atomic_add(2, &ce->pin_count);
+
+		guc_id = prep_context_pending_disable(ce);
+		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+
+		/*
+		 * In addition to disabling scheduling, set the preemption
+		 * timeout to the minimum value (1 us) so the banned context
+		 * gets kicked off the HW ASAP.
+		 */
+		with_intel_runtime_pm(runtime_pm, wakeref) {
+			__guc_context_set_preemption_timeout(guc, guc_id, 1);
+			__guc_context_sched_disable(guc, ce, guc_id);
+		}
+	} else {
+		if (!context_guc_id_invalid(ce))
+			with_intel_runtime_pm(runtime_pm, wakeref)
+				__guc_context_set_preemption_timeout(guc,
+								     ce->guc_id,
+								     1);
+		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+	}
+}
+
+static void guc_context_sched_disable(struct intel_context *ce)
+{
+	struct intel_guc *guc = ce_to_guc(ce);
+	unsigned long flags;
+	struct intel_runtime_pm *runtime_pm = &ce->engine->gt->i915->runtime_pm;
+	intel_wakeref_t wakeref;
+	u16 guc_id;
+	bool enabled;
+
+	if (submission_disabled(guc) || context_guc_id_invalid(ce) ||
+	    !lrc_desc_registered(guc, ce->guc_id)) {
+		clr_context_enabled(ce);
+		goto unpin;
+	}
+
+	if (!context_enabled(ce))
+		goto unpin;
+
+	spin_lock_irqsave(&ce->guc_state.lock, flags);
+
+	/*
+	 * We have to check if the context is disabled by another thread. We
+	 * also have to check if the context has been pinned again as another
+	 * pin operation is allowed to pass this function. Checking the pin
+	 * count here synchronizes this function with guc_request_alloc ensuring
+	 * a request doesn't slip through the 'context_pending_disable' fence.
+	 */
+	enabled = context_enabled(ce);
+	if (unlikely(!enabled || submission_disabled(guc))) {
+		if (enabled)
+			clr_context_enabled(ce);
+		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+		goto unpin;
+	}
+	if (unlikely(atomic_add_unless(&ce->pin_count, -2, 2))) {
+		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+		return;
+	}
+	guc_id = prep_context_pending_disable(ce);
+
+	spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+
+	with_intel_runtime_pm(runtime_pm, wakeref)
+		__guc_context_sched_disable(guc, ce, guc_id);
+
+	return;
+unpin:
+	intel_context_sched_disable_unpin(ce);
+}
+
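+/*
+ * Deregister the context with the GuC; the guc_id and LRC descriptor are
+ * released once the corresponding deregister-done G2H is processed.
+ */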
+static inline void guc_lrc_desc_unpin(struct intel_context *ce)
+{
+	struct intel_guc *guc = ce_to_guc(ce);
+
+	GEM_BUG_ON(!lrc_desc_registered(guc, ce->guc_id));
+	GEM_BUG_ON(ce != __get_context(guc, ce->guc_id));
+	GEM_BUG_ON(context_enabled(ce));
+
+	deregister_context(ce, ce->guc_id, true);
+}
+
+static void __guc_context_destroy(struct intel_context *ce)
+{
+	lrc_fini(ce);
+	intel_context_fini(ce);
+
+	if (intel_engine_is_virtual(ce->engine)) {
+		struct guc_virtual_engine *ve =
+			container_of(ce, typeof(*ve), context);
+
+		if (ve->base.breadcrumbs)
+			intel_breadcrumbs_put(ve->base.breadcrumbs);
+
+		kfree(ve);
+	} else {
+		intel_context_free(ce);
+	}
+}
+
+static void guc_context_destroy(struct kref *kref)
+{
+	struct intel_context *ce = container_of(kref, typeof(*ce), ref);
+	struct intel_runtime_pm *runtime_pm = &ce->engine->gt->i915->runtime_pm;
+	struct intel_guc *guc = &ce->engine->gt->uc.guc;
+	intel_wakeref_t wakeref;
+	unsigned long flags;
+	bool disabled;
+
+	/*
+	 * If the guc_id is invalid this context has been stolen and we can free
+	 * it immediately. Also can be freed immediately if the context is not
+	 * registered with the GuC.
+	 */
+	if (submission_disabled(guc) ||
+	    context_guc_id_invalid(ce) ||
+	    !lrc_desc_registered(guc, ce->guc_id)) {
+		release_guc_id(guc, ce);
+		__guc_context_destroy(ce);
+		return;
+	}
+
+	/*
+	 * We have to acquire the context spinlock and check the guc_id again:
+	 * if it is valid, it hasn't been stolen and needs to be deregistered.
+	 * We delete this context from the list of unpinned guc_ids available to
+	 * steal, to seal a race with guc_lrc_desc_pin(). When the G2H CTB
+	 * returns, indicating this context has been deregistered, the guc_id is
+	 * returned to the pool of available guc_ids.
+	 */
+	spin_lock_irqsave(&guc->contexts_lock, flags);
+	if (context_guc_id_invalid(ce)) {
+		__release_guc_id(guc, ce);
+		spin_unlock_irqrestore(&guc->contexts_lock, flags);
+		__guc_context_destroy(ce);
+		return;
+	}
+
+	if (!list_empty(&ce->guc_id_link))
+		list_del_init(&ce->guc_id_link);
+	spin_unlock_irqrestore(&guc->contexts_lock, flags);
+
+	/* Seal race with Reset */
+	spin_lock_irqsave(&ce->guc_state.lock, flags);
+	disabled = submission_disabled(guc);
+	if (likely(!disabled))
+		set_context_destroyed(ce);
+	spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+	if (unlikely(disabled)) {
+		release_guc_id(guc, ce);
+		__guc_context_destroy(ce);
+		return;
+	}
+
+	/*
+	 * We defer GuC context deregistration until the context is destroyed
+	 * in order to save on CTBs. With this optimization ideally we only need
+	 * 1 CTB to register the context during the first pin and 1 CTB to
+	 * deregister the context when the context is destroyed. Without this
+	 * optimization, a CTB would be needed for every pin & unpin.
+	 *
+	 * XXX: Need to acquire the runtime wakeref as this can be triggered
+	 * from context_free_worker when no runtime wakeref is held.
+	 * guc_lrc_desc_unpin requires the wakeref as a GuC register is written
+	 * via the H2G CTB to deregister the context. A future patch may defer
+	 * this H2G CTB if the runtime wakeref is zero.
+	 */
+	with_intel_runtime_pm(runtime_pm, wakeref)
+		guc_lrc_desc_unpin(ce);
+}
+
 static int guc_context_alloc(struct intel_context *ce)
 {
 	return lrc_alloc(ce, ce->engine);
 }
 
-static int guc_context_pre_pin(struct intel_context *ce,
-			       struct i915_gem_ww_ctx *ww,
-			       void **vaddr)
+static void add_to_context(struct i915_request *rq)
+{
+	struct intel_context *ce = rq->context;
+
+	spin_lock(&ce->guc_active.lock);
+	list_move_tail(&rq->sched.link, &ce->guc_active.requests);
+	spin_unlock(&ce->guc_active.lock);
+}
+
+static void remove_from_context(struct i915_request *rq)
+{
+	struct intel_context *ce = rq->context;
+
+	spin_lock_irq(&ce->guc_active.lock);
+
+	list_del_init(&rq->sched.link);
+	clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+
+	/* Prevent further __await_execution() registering a cb, then flush */
+	set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags);
+
+	spin_unlock_irq(&ce->guc_active.lock);
+
+	atomic_dec(&ce->guc_id_ref);
+	i915_request_notify_execute_cb_imm(rq);
+}
+
+static const struct intel_context_ops guc_context_ops = {
+	.alloc = guc_context_alloc,
+
+	.pre_pin = guc_context_pre_pin,
+	.pin = guc_context_pin,
+	.unpin = guc_context_unpin,
+	.post_unpin = guc_context_post_unpin,
+
+	.ban = guc_context_ban,
+
+	.cancel_request = guc_context_cancel_request,
+
+	.enter = intel_context_enter_engine,
+	.exit = intel_context_exit_engine,
+
+	.sched_disable = guc_context_sched_disable,
+
+	.reset = lrc_reset,
+	.destroy = guc_context_destroy,
+
+	.create_virtual = guc_create_virtual,
+};
+
+static void __guc_signal_context_fence(struct intel_context *ce)
+{
+	struct i915_request *rq;
+
+	lockdep_assert_held(&ce->guc_state.lock);
+
+	if (!list_empty(&ce->guc_state.fences))
+		trace_intel_context_fence_release(ce);
+
+	list_for_each_entry(rq, &ce->guc_state.fences, guc_fence_link)
+		i915_sw_fence_complete(&rq->submit);
+
+	INIT_LIST_HEAD(&ce->guc_state.fences);
+}
+
+static void guc_signal_context_fence(struct intel_context *ce)
 {
-	return lrc_pre_pin(ce, ce->engine, ww, vaddr);
+	unsigned long flags;
+
+	spin_lock_irqsave(&ce->guc_state.lock, flags);
+	clr_context_wait_for_deregister_to_register(ce);
+	__guc_signal_context_fence(ce);
+	spin_unlock_irqrestore(&ce->guc_state.lock, flags);
 }
 
-static int guc_context_pin(struct intel_context *ce, void *vaddr)
+static bool context_needs_register(struct intel_context *ce, bool new_guc_id)
 {
-	return lrc_pin(ce, ce->engine, vaddr);
+	return (new_guc_id || test_bit(CONTEXT_LRCA_DIRTY, &ce->flags) ||
+		!lrc_desc_registered(ce_to_guc(ce), ce->guc_id)) &&
+		!submission_disabled(ce_to_guc(ce));
 }
 
-static const struct intel_context_ops guc_context_ops = {
-	.alloc = guc_context_alloc,
-
-	.pre_pin = guc_context_pre_pin,
-	.pin = guc_context_pin,
-	.unpin = lrc_unpin,
-	.post_unpin = lrc_post_unpin,
-
-	.enter = intel_context_enter_engine,
-	.exit = intel_context_exit_engine,
-
-	.reset = lrc_reset,
-	.destroy = lrc_destroy,
-};
-
-static int guc_request_alloc(struct i915_request *request)
+static int guc_request_alloc(struct i915_request *rq)
 {
+	struct intel_context *ce = rq->context;
+	struct intel_guc *guc = ce_to_guc(ce);
+	unsigned long flags;
 	int ret;
 
-	GEM_BUG_ON(!intel_context_is_pinned(request->context));
+	GEM_BUG_ON(!intel_context_is_pinned(rq->context));
 
 	/*
 	 * Flush enough space to reduce the likelihood of waiting after
 	 * we start building the request - in which case we will just
 	 * have to repeat work.
 	 */
-	request->reserved_space += GUC_REQUEST_SIZE;
+	rq->reserved_space += GUC_REQUEST_SIZE;
 
 	/*
 	 * Note that after this point, we have committed to using
@@ -486,40 +1862,199 @@ static int guc_request_alloc(struct i915_request *request)
 	 */
 
 	/* Unconditionally invalidate GPU caches and TLBs. */
-	ret = request->engine->emit_flush(request, EMIT_INVALIDATE);
+	ret = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
 	if (ret)
 		return ret;
 
-	request->reserved_space -= GUC_REQUEST_SIZE;
+	rq->reserved_space -= GUC_REQUEST_SIZE;
+
+	/*
+	 * Call pin_guc_id here rather than in the pinning step as with
+	 * dma_resv, contexts can be repeatedly pinned / unpinned, thrashing the
+	 * guc_ids and creating horrible race conditions. This is especially bad
+	 * when guc_ids are being stolen due to oversubscription. By the time
+	 * this function is reached, it is guaranteed that the guc_id will be
+	 * persistent until the generated request is retired, thus sealing these
+	 * race conditions. It is still safe to fail here if guc_ids are
+	 * exhausted and return -EAGAIN to the user indicating that they can try
+	 * again in the future.
+	 *
+	 * There is no need for a lock here as the timeline mutex ensures at
+	 * most one context can be executing this code path at once. The
+	 * guc_id_ref is incremented once for every request in flight and
+	 * decremented on each retire. When it is zero, a lock around the
+	 * increment (in pin_guc_id) is needed to seal a race with unpin_guc_id.
+	 */
+	if (atomic_add_unless(&ce->guc_id_ref, 1, 0))
+		goto out;
+
+	ret = pin_guc_id(guc, ce);	/* returns 1 if new guc_id assigned */
+	if (unlikely(ret < 0))
+		return ret;
+	if (context_needs_register(ce, !!ret)) {
+		ret = guc_lrc_desc_pin(ce, true);
+		if (unlikely(ret)) {	/* unwind */
+			if (ret == -EPIPE) {
+				disable_submission(guc);
+				goto out;	/* GPU will be reset */
+			}
+			atomic_dec(&ce->guc_id_ref);
+			unpin_guc_id(guc, ce);
+			return ret;
+		}
+	}
+
+	clear_bit(CONTEXT_LRCA_DIRTY, &ce->flags);
+
+out:
+	/*
+	 * We block all requests on this context if a G2H is pending for a
+	 * schedule disable or context deregistration, as the GuC will fail a
+	 * schedule enable or context registration, respectively, while either
+	 * G2H is pending. Once the G2H returns, the fence blocking these
+	 * requests is released (see guc_signal_context_fence).
+	 *
+	 * We can safely check the below fields outside of the lock as it isn't
+	 * possible for these fields to transition from being clear to set, but
+	 * the converse is possible, hence the need for the check within the lock.
+	 */
+	if (likely(!context_wait_for_deregister_to_register(ce) &&
+		   !context_pending_disable(ce)))
+		return 0;
+
+	spin_lock_irqsave(&ce->guc_state.lock, flags);
+	if (context_wait_for_deregister_to_register(ce) ||
+	    context_pending_disable(ce)) {
+		i915_sw_fence_await(&rq->submit);
+
+		list_add_tail(&rq->guc_fence_link, &ce->guc_state.fences);
+	}
+	spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+
 	return 0;
 }
 
-static inline void queue_request(struct intel_engine_cs *engine,
-				 struct i915_request *rq,
-				 int prio)
+static int guc_virtual_context_pre_pin(struct intel_context *ce,
+				       struct i915_gem_ww_ctx *ww,
+				       void **vaddr)
 {
-	GEM_BUG_ON(!list_empty(&rq->sched.link));
-	list_add_tail(&rq->sched.link,
-		      i915_sched_lookup_priolist(engine->sched_engine, prio));
-	set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+	struct intel_engine_cs *engine = guc_virtual_get_sibling(ce->engine, 0);
+
+	return __guc_context_pre_pin(ce, engine, ww, vaddr);
 }
 
-static void guc_submit_request(struct i915_request *rq)
+static int guc_virtual_context_pin(struct intel_context *ce, void *vaddr)
 {
-	struct intel_engine_cs *engine = rq->engine;
-	unsigned long flags;
+	struct intel_engine_cs *engine = guc_virtual_get_sibling(ce->engine, 0);
 
-	/* Will be called from irq-context when using foreign fences. */
-	spin_lock_irqsave(&engine->sched_engine->lock, flags);
+	return __guc_context_pin(ce, engine, vaddr);
+}
+
+static void guc_virtual_context_enter(struct intel_context *ce)
+{
+	intel_engine_mask_t tmp, mask = ce->engine->mask;
+	struct intel_engine_cs *engine;
+
+	for_each_engine_masked(engine, ce->engine->gt, mask, tmp)
+		intel_engine_pm_get(engine);
+
+	intel_timeline_enter(ce->timeline);
+}
+
+static void guc_virtual_context_exit(struct intel_context *ce)
+{
+	intel_engine_mask_t tmp, mask = ce->engine->mask;
+	struct intel_engine_cs *engine;
+
+	for_each_engine_masked(engine, ce->engine->gt, mask, tmp)
+		intel_engine_pm_put(engine);
+
+	intel_timeline_exit(ce->timeline);
+}
+
+static int guc_virtual_context_alloc(struct intel_context *ce)
+{
+	struct intel_engine_cs *engine = guc_virtual_get_sibling(ce->engine, 0);
+
+	return lrc_alloc(ce, engine);
+}
+
+static const struct intel_context_ops virtual_guc_context_ops = {
+	.alloc = guc_virtual_context_alloc,
+
+	.pre_pin = guc_virtual_context_pre_pin,
+	.pin = guc_virtual_context_pin,
+	.unpin = guc_context_unpin,
+	.post_unpin = guc_context_post_unpin,
+
+	.ban = guc_context_ban,
 
-	queue_request(engine, rq, rq_prio(rq));
+	.cancel_request = guc_context_cancel_request,
 
-	GEM_BUG_ON(i915_sched_engine_is_empty(engine->sched_engine));
-	GEM_BUG_ON(list_empty(&rq->sched.link));
+	.enter = guc_virtual_context_enter,
+	.exit = guc_virtual_context_exit,
 
-	tasklet_hi_schedule(&engine->sched_engine->tasklet);
+	.sched_disable = guc_context_sched_disable,
 
-	spin_unlock_irqrestore(&engine->sched_engine->lock, flags);
+	.destroy = guc_context_destroy,
+
+	.get_sibling = guc_virtual_get_sibling,
+};
+
+static bool
+guc_irq_enable_breadcrumbs(struct intel_breadcrumbs *b)
+{
+	struct intel_engine_cs *sibling;
+	intel_engine_mask_t tmp, mask = b->engine_mask;
+	bool result = false;
+
+	for_each_engine_masked(sibling, b->irq_engine->gt, mask, tmp)
+		result |= intel_engine_irq_enable(sibling);
+
+	return result;
+}
+
+static void
+guc_irq_disable_breadcrumbs(struct intel_breadcrumbs *b)
+{
+	struct intel_engine_cs *sibling;
+	intel_engine_mask_t tmp, mask = b->engine_mask;
+
+	for_each_engine_masked(sibling, b->irq_engine->gt, mask, tmp)
+		intel_engine_irq_disable(sibling);
+}
+
+static void guc_init_breadcrumbs(struct intel_engine_cs *engine)
+{
+	int i;
+
+	/*
+	 * In GuC submission mode we do not know which physical engine a request
+	 * will be scheduled on; this creates a problem because the breadcrumb
+	 * interrupt is per physical engine. To work around this we attach
+	 * requests and direct all breadcrumb interrupts to the first instance
+	 * of an engine per class. In addition, all breadcrumb interrupts are
+	 * enabled / disabled across an engine class in unison.
+	 */
+	for (i = 0; i < MAX_ENGINE_INSTANCE; ++i) {
+		struct intel_engine_cs *sibling =
+			engine->gt->engine_class[engine->class][i];
+
+		if (sibling) {
+			if (engine->breadcrumbs != sibling->breadcrumbs) {
+				intel_breadcrumbs_put(engine->breadcrumbs);
+				engine->breadcrumbs =
+					intel_breadcrumbs_get(sibling->breadcrumbs);
+			}
+			break;
+		}
+	}
+
+	if (engine->breadcrumbs) {
+		engine->breadcrumbs->engine_mask |= engine->mask;
+		engine->breadcrumbs->irq_enable = guc_irq_enable_breadcrumbs;
+		engine->breadcrumbs->irq_disable = guc_irq_disable_breadcrumbs;
+	}
 }
 
 static void sanitize_hwsp(struct intel_engine_cs *engine)
@@ -588,21 +2123,78 @@ static int guc_resume(struct intel_engine_cs *engine)
 	return 0;
 }
 
+static bool guc_sched_engine_disabled(struct i915_sched_engine *sched_engine)
+{
+	return !sched_engine->tasklet.callback;
+}
+
 static void guc_set_default_submission(struct intel_engine_cs *engine)
 {
 	engine->submit_request = guc_submit_request;
 }
 
+static inline void guc_kernel_context_pin(struct intel_guc *guc,
+					  struct intel_context *ce)
+{
+	if (context_guc_id_invalid(ce))
+		pin_guc_id(guc, ce);
+	guc_lrc_desc_pin(ce, true);
+}
+
+static inline void guc_init_lrc_mapping(struct intel_guc *guc)
+{
+	struct intel_gt *gt = guc_to_gt(guc);
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	/* make sure all descriptors are clean... */
+	xa_destroy(&guc->context_lookup);
+
+	/*
+	 * Some contexts might have been pinned before we enabled GuC
+	 * submission, so we need to add them to the GuC bookkeeping.
+	 * Also, after a GuC reset we want to make sure that the information
+	 * shared with the GuC is properly reset. The kernel LRCs are not attached
+	 * to the gem_context, so they need to be added separately.
+	 *
+	 * Note: we purposely do not check the error return of
+	 * guc_lrc_desc_pin, because that function can only fail in two cases.
+	 * One, if there aren't enough free IDs, but we're guaranteed to have
+	 * enough here (we're either only pinning a handful of LRCs on first
+	 * boot or re-pinning LRCs that were already pinned before the reset).
+	 * Two, if the GuC has died and CTBs can't make forward progress.
+	 * Presumably, the GuC should be alive as this function is called on
+	 * driver load or after a reset. Even if it is dead, another full GPU
+	 * reset will be triggered and this function would be called again.
+	 */
+
+	for_each_engine(engine, gt, id)
+		if (engine->kernel_context)
+			guc_kernel_context_pin(guc, engine->kernel_context);
+}
+
 static void guc_release(struct intel_engine_cs *engine)
 {
 	engine->sanitize = NULL; /* no longer in control, nothing to sanitize */
 
-	tasklet_kill(&engine->sched_engine->tasklet);
-
 	intel_engine_cleanup_common(engine);
 	lrc_fini_wa_ctx(engine);
 }
 
+static void guc_bump_serial(struct intel_engine_cs *engine)
+{
+	engine->serial++;
+}
+
+static void virtual_guc_bump_serial(struct intel_engine_cs *engine)
+{
+	struct intel_engine_cs *e;
+	intel_engine_mask_t tmp, mask = engine->mask;
+
+	for_each_engine_masked(e, engine->gt, mask, tmp)
+		e->serial++;
+}
+
 static void guc_default_vfuncs(struct intel_engine_cs *engine)
 {
 	/* Default vfuncs which can be overridden by each engine. */
@@ -611,13 +2203,16 @@ static void guc_default_vfuncs(struct intel_engine_cs *engine)
 
 	engine->cops = &guc_context_ops;
 	engine->request_alloc = guc_request_alloc;
+	engine->bump_serial = guc_bump_serial;
+	engine->add_active_request = add_to_context;
+	engine->remove_active_request = remove_from_context;
 
 	engine->sched_engine->schedule = i915_schedule;
 
-	engine->reset.prepare = guc_reset_prepare;
-	engine->reset.rewind = guc_reset_rewind;
-	engine->reset.cancel = guc_reset_cancel;
-	engine->reset.finish = guc_reset_finish;
+	engine->reset.prepare = guc_reset_nop;
+	engine->reset.rewind = guc_rewind_nop;
+	engine->reset.cancel = guc_reset_nop;
+	engine->reset.finish = guc_reset_nop;
 
 	engine->emit_flush = gen8_emit_flush_xcs;
 	engine->emit_init_breadcrumb = gen8_emit_init_breadcrumb;
@@ -629,13 +2224,13 @@ static void guc_default_vfuncs(struct intel_engine_cs *engine)
 	engine->set_default_submission = guc_set_default_submission;
 
 	engine->flags |= I915_ENGINE_HAS_PREEMPTION;
+	engine->flags |= I915_ENGINE_HAS_TIMESLICES;
 
 	/*
 	 * TODO: GuC supports timeslicing and semaphores as well, but they're
 	 * handled by the firmware so some minor tweaks are required before
 	 * enabling.
 	 *
-	 * engine->flags |= I915_ENGINE_HAS_TIMESLICES;
 	 * engine->flags |= I915_ENGINE_HAS_SEMAPHORES;
 	 */
 
@@ -666,9 +2261,21 @@ static inline void guc_default_irqs(struct intel_engine_cs *engine)
 	intel_engine_set_irq_handler(engine, cs_irq_handler);
 }
 
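+/*
+ * All engines share a single GuC sched_engine; it is freed when the last
+ * engine drops its reference, after killing the submission tasklet.
+ */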
+static void guc_sched_engine_destroy(struct kref *kref)
+{
+	struct i915_sched_engine *sched_engine =
+		container_of(kref, typeof(*sched_engine), ref);
+	struct intel_guc *guc = sched_engine->private_data;
+
+	guc->sched_engine = NULL;
+	tasklet_kill(&sched_engine->tasklet); /* flush the callback */
+	kfree(sched_engine);
+}
+
 int intel_guc_submission_setup(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *i915 = engine->i915;
+	struct intel_guc *guc = &engine->gt->uc.guc;
 
 	/*
 	 * The setup relies on several assumptions (e.g. irqs always enabled)
@@ -676,10 +2283,24 @@ int intel_guc_submission_setup(struct intel_engine_cs *engine)
 	 */
 	GEM_BUG_ON(GRAPHICS_VER(i915) < 11);
 
-	tasklet_setup(&engine->sched_engine->tasklet, guc_submission_tasklet);
+	if (!guc->sched_engine) {
+		guc->sched_engine = i915_sched_engine_create(ENGINE_VIRTUAL);
+		if (!guc->sched_engine)
+			return -ENOMEM;
+
+		guc->sched_engine->schedule = i915_schedule;
+		guc->sched_engine->disabled = guc_sched_engine_disabled;
+		guc->sched_engine->private_data = guc;
+		guc->sched_engine->destroy = guc_sched_engine_destroy;
+		tasklet_setup(&guc->sched_engine->tasklet,
+			      guc_submission_tasklet);
+	}
+	i915_sched_engine_put(engine->sched_engine);
+	engine->sched_engine = i915_sched_engine_get(guc->sched_engine);
 
 	guc_default_vfuncs(engine);
 	guc_default_irqs(engine);
+	guc_init_breadcrumbs(engine);
 
 	if (engine->class == RENDER_CLASS)
 		rcs_submission_override(engine);
@@ -695,7 +2316,7 @@ int intel_guc_submission_setup(struct intel_engine_cs *engine)
 
 void intel_guc_submission_enable(struct intel_guc *guc)
 {
-	guc_stage_desc_init(guc);
+	guc_init_lrc_mapping(guc);
 }
 
 void intel_guc_submission_disable(struct intel_guc *guc)
@@ -705,8 +2326,13 @@ void intel_guc_submission_disable(struct intel_guc *guc)
 	GEM_BUG_ON(gt->awake); /* GT should be parked first */
 
 	/* Note: By the time we're here, GuC may have already been reset */
+}
 
-	guc_stage_desc_fini(guc);
+static bool __guc_submission_supported(struct intel_guc *guc)
+{
+	/* GuC submission is unavailable for pre-Gen11 */
+	return intel_guc_is_supported(guc) &&
+	       GRAPHICS_VER(guc_to_gt(guc)->i915) >= 11;
 }
 
 static bool __guc_submission_selected(struct intel_guc *guc)
@@ -721,5 +2347,462 @@ static bool __guc_submission_selected(struct intel_guc *guc)
 
 void intel_guc_submission_init_early(struct intel_guc *guc)
 {
+	guc->submission_supported = __guc_submission_supported(guc);
 	guc->submission_selected = __guc_submission_selected(guc);
 }
+
+static inline struct intel_context *
+g2h_context_lookup(struct intel_guc *guc, u32 desc_idx)
+{
+	struct intel_context *ce;
+
+	if (unlikely(desc_idx >= GUC_MAX_LRC_DESCRIPTORS)) {
+		drm_dbg(&guc_to_gt(guc)->i915->drm,
+			"Invalid desc_idx %u", desc_idx);
+		return NULL;
+	}
+
+	ce = __get_context(guc, desc_idx);
+	if (unlikely(!ce)) {
+		drm_dbg(&guc_to_gt(guc)->i915->drm,
+			"Context is NULL, desc_idx %u", desc_idx);
+		return NULL;
+	}
+
+	return ce;
+}
+
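+/*
+ * outstanding_submission_g2h tracks H2Gs that still expect a G2H reply; wake
+ * up anyone waiting for the count to drop to zero (e.g. reset and idle paths).
+ */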
+static void decr_outstanding_submission_g2h(struct intel_guc *guc)
+{
+	if (atomic_dec_and_test(&guc->outstanding_submission_g2h))
+		wake_up_all(&guc->ct.wq);
+}
+
+int intel_guc_deregister_done_process_msg(struct intel_guc *guc,
+					  const u32 *msg,
+					  u32 len)
+{
+	struct intel_context *ce;
+	u32 desc_idx = msg[0];
+
+	if (unlikely(len < 1)) {
+		drm_dbg(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
+		return -EPROTO;
+	}
+
+	ce = g2h_context_lookup(guc, desc_idx);
+	if (unlikely(!ce))
+		return -EPROTO;
+
+	trace_intel_context_deregister_done(ce);
+
+	if (context_wait_for_deregister_to_register(ce)) {
+		struct intel_runtime_pm *runtime_pm =
+			&ce->engine->gt->i915->runtime_pm;
+		intel_wakeref_t wakeref;
+
+		/*
+		 * Previous owner of this guc_id has been deregistered, now safe
+		 * register this context.
+		 */
+		with_intel_runtime_pm(runtime_pm, wakeref)
+			register_context(ce, true);
+		guc_signal_context_fence(ce);
+		intel_context_put(ce);
+	} else if (context_destroyed(ce)) {
+		/* Context has been destroyed */
+		release_guc_id(guc, ce);
+		__guc_context_destroy(ce);
+	}
+
+	decr_outstanding_submission_g2h(guc);
+
+	return 0;
+}
+
+int intel_guc_sched_done_process_msg(struct intel_guc *guc,
+				     const u32 *msg,
+				     u32 len)
+{
+	struct intel_context *ce;
+	unsigned long flags;
+	u32 desc_idx = msg[0];
+
+	if (unlikely(len < 2)) {
+		drm_dbg(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
+		return -EPROTO;
+	}
+
+	ce = g2h_context_lookup(guc, desc_idx);
+	if (unlikely(!ce))
+		return -EPROTO;
+
+	if (unlikely(context_destroyed(ce) ||
+		     (!context_pending_enable(ce) &&
+		     !context_pending_disable(ce)))) {
+		drm_dbg(&guc_to_gt(guc)->i915->drm,
+			"Bad context sched_state 0x%x, 0x%x, desc_idx %u",
+			atomic_read(&ce->guc_sched_state_no_lock),
+			ce->guc_state.sched_state, desc_idx);
+		return -EPROTO;
+	}
+
+	trace_intel_context_sched_done(ce);
+
+	if (context_pending_enable(ce)) {
+		clr_context_pending_enable(ce);
+	} else if (context_pending_disable(ce)) {
+		bool banned;
+
+		/*
+		 * Unpin must be done before __guc_signal_context_fence,
+		 * otherwise a race exists between the requests getting
+		 * submitted + retired before this unpin completes resulting in
+		 * the pin_count going to zero and the context still being
+		 * enabled.
+		 */
+		intel_context_sched_disable_unpin(ce);
+
+		spin_lock_irqsave(&ce->guc_state.lock, flags);
+		banned = context_banned(ce);
+		clr_context_banned(ce);
+		clr_context_pending_disable(ce);
+		__guc_signal_context_fence(ce);
+		guc_blocked_fence_complete(ce);
+		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+
+		if (banned) {
+			guc_cancel_context_requests(ce);
+			intel_engine_signal_breadcrumbs(ce->engine);
+		}
+	}
+
+	decr_outstanding_submission_g2h(guc);
+	intel_context_put(ce);
+
+	return 0;
+}
+
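+/*
+ * Capture GPU error state for the context that the GuC reported as reset and
+ * bump the per-class engine reset count, before the requests are replayed.
+ */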
+static void capture_error_state(struct intel_guc *guc,
+				struct intel_context *ce)
+{
+	struct intel_gt *gt = guc_to_gt(guc);
+	struct drm_i915_private *i915 = gt->i915;
+	struct intel_engine_cs *engine = __context_to_physical_engine(ce);
+	intel_wakeref_t wakeref;
+
+	intel_engine_set_hung_context(engine, ce);
+	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
+		i915_capture_error_state(gt, engine->mask);
+	atomic_inc(&i915->gpu_error.reset_engine_count[engine->uabi_class]);
+}
+
+static void guc_context_replay(struct intel_context *ce)
+{
+	struct i915_sched_engine *sched_engine = ce->engine->sched_engine;
+
+	__guc_reset_context(ce, true);
+	tasklet_hi_schedule(&sched_engine->tasklet);
+}
+
+static void guc_handle_context_reset(struct intel_guc *guc,
+				     struct intel_context *ce)
+{
+	trace_intel_context_reset(ce);
+
+	if (likely(!intel_context_is_banned(ce))) {
+		capture_error_state(guc, ce);
+		guc_context_replay(ce);
+	}
+}
+
+int intel_guc_context_reset_process_msg(struct intel_guc *guc,
+					const u32 *msg, u32 len)
+{
+	struct intel_context *ce;
+	int desc_idx = msg[0];
+
+	if (unlikely(len != 1)) {
+		drm_dbg(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
+		return -EPROTO;
+	}
+
+	ce = g2h_context_lookup(guc, desc_idx);
+	if (unlikely(!ce))
+		return -EPROTO;
+
+	guc_handle_context_reset(guc, ce);
+
+	return 0;
+}
+
+static struct intel_engine_cs *
+guc_lookup_engine(struct intel_guc *guc, u8 guc_class, u8 instance)
+{
+	struct intel_gt *gt = guc_to_gt(guc);
+	u8 engine_class = guc_class_to_engine_class(guc_class);
+
+	/* Class index is checked in class converter */
+	GEM_BUG_ON(instance > MAX_ENGINE_INSTANCE);
+
+	return gt->engine_class[engine_class][instance];
+}
+
+int intel_guc_engine_failure_process_msg(struct intel_guc *guc,
+					 const u32 *msg, u32 len)
+{
+	struct intel_engine_cs *engine;
+	u8 guc_class, instance;
+	u32 reason;
+
+	if (unlikely(len != 3)) {
+		drm_dbg(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
+		return -EPROTO;
+	}
+
+	guc_class = msg[0];
+	instance = msg[1];
+	reason = msg[2];
+
+	engine = guc_lookup_engine(guc, guc_class, instance);
+	if (unlikely(!engine)) {
+		drm_dbg(&guc_to_gt(guc)->i915->drm,
+			"Invalid engine %d:%d", guc_class, instance);
+		return -EPROTO;
+	}
+
+	intel_gt_handle_error(guc_to_gt(guc), engine->mask,
+			      I915_ERROR_CAPTURE,
+			      "GuC failed to reset %s (reason=0x%08x)\n",
+			      engine->name, reason);
+
+	return 0;
+}
+
+void intel_guc_find_hung_context(struct intel_engine_cs *engine)
+{
+	struct intel_guc *guc = &engine->gt->uc.guc;
+	struct intel_context *ce;
+	struct i915_request *rq;
+	unsigned long index;
+
+	/* Reset called during driver load? GuC not yet initialised! */
+	if (unlikely(!guc_submission_initialized(guc)))
+		return;
+
+	xa_for_each(&guc->context_lookup, index, ce) {
+		if (!intel_context_is_pinned(ce))
+			continue;
+
+		if (intel_engine_is_virtual(ce->engine)) {
+			if (!(ce->engine->mask & engine->mask))
+				continue;
+		} else {
+			if (ce->engine != engine)
+				continue;
+		}
+
+		list_for_each_entry(rq, &ce->guc_active.requests, sched.link) {
+			if (i915_test_request_state(rq) != I915_REQUEST_ACTIVE)
+				continue;
+
+			intel_engine_set_hung_context(engine, ce);
+
+			/* Can only cope with one hang at a time... */
+			return;
+		}
+	}
+}
+
+void intel_guc_dump_active_requests(struct intel_engine_cs *engine,
+				    struct i915_request *hung_rq,
+				    struct drm_printer *m)
+{
+	struct intel_guc *guc = &engine->gt->uc.guc;
+	struct intel_context *ce;
+	unsigned long index;
+	unsigned long flags;
+
+	/* Reset called during driver load? GuC not yet initialised! */
+	if (unlikely(!guc_submission_initialized(guc)))
+		return;
+
+	xa_for_each(&guc->context_lookup, index, ce) {
+		if (!intel_context_is_pinned(ce))
+			continue;
+
+		if (intel_engine_is_virtual(ce->engine)) {
+			if (!(ce->engine->mask & engine->mask))
+				continue;
+		} else {
+			if (ce->engine != engine)
+				continue;
+		}
+
+		spin_lock_irqsave(&ce->guc_active.lock, flags);
+		intel_engine_dump_active_requests(&ce->guc_active.requests,
+						  hung_rq, m);
+		spin_unlock_irqrestore(&ce->guc_active.lock, flags);
+	}
+}
+
+void intel_guc_log_submission_info(struct intel_guc *guc,
+				   struct drm_printer *p)
+{
+	struct i915_sched_engine *sched_engine = guc->sched_engine;
+	struct rb_node *rb;
+	unsigned long flags;
+
+	if (!sched_engine)
+		return;
+
+	drm_printf(p, "GuC Number Outstanding Submission G2H: %u\n",
+		   atomic_read(&guc->outstanding_submission_g2h));
+	drm_printf(p, "GuC tasklet count: %u\n\n",
+		   atomic_read(&sched_engine->tasklet.count));
+
+	spin_lock_irqsave(&sched_engine->lock, flags);
+	drm_printf(p, "Requests in GuC submit tasklet:\n");
+	for (rb = rb_first_cached(&sched_engine->queue); rb; rb = rb_next(rb)) {
+		struct i915_priolist *pl = to_priolist(rb);
+		struct i915_request *rq;
+
+		priolist_for_each_request(rq, pl)
+			drm_printf(p, "guc_id=%u, seqno=%llu\n",
+				   rq->context->guc_id,
+				   rq->fence.seqno);
+	}
+	spin_unlock_irqrestore(&sched_engine->lock, flags);
+	drm_printf(p, "\n");
+}
+
+void intel_guc_log_context_info(struct intel_guc *guc,
+				struct drm_printer *p)
+{
+	struct intel_context *ce;
+	unsigned long index;
+
+	xa_for_each(&guc->context_lookup, index, ce) {
+		drm_printf(p, "GuC lrc descriptor %u:\n", ce->guc_id);
+		drm_printf(p, "\tHW Context Desc: 0x%08x\n", ce->lrc.lrca);
+		drm_printf(p, "\t\tLRC Head: Internal %u, Memory %u\n",
+			   ce->ring->head,
+			   ce->lrc_reg_state[CTX_RING_HEAD]);
+		drm_printf(p, "\t\tLRC Tail: Internal %u, Memory %u\n",
+			   ce->ring->tail,
+			   ce->lrc_reg_state[CTX_RING_TAIL]);
+		drm_printf(p, "\t\tContext Pin Count: %u\n",
+			   atomic_read(&ce->pin_count));
+		drm_printf(p, "\t\tGuC ID Ref Count: %u\n",
+			   atomic_read(&ce->guc_id_ref));
+		drm_printf(p, "\t\tSchedule State: 0x%x, 0x%x\n\n",
+			   ce->guc_state.sched_state,
+			   atomic_read(&ce->guc_sched_state_no_lock));
+	}
+}
+
+static struct intel_context *
+guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count)
+{
+	struct guc_virtual_engine *ve;
+	struct intel_guc *guc;
+	unsigned int n;
+	int err;
+
+	ve = kzalloc(sizeof(*ve), GFP_KERNEL);
+	if (!ve)
+		return ERR_PTR(-ENOMEM);
+
+	guc = &siblings[0]->gt->uc.guc;
+
+	ve->base.i915 = siblings[0]->i915;
+	ve->base.gt = siblings[0]->gt;
+	ve->base.uncore = siblings[0]->uncore;
+	ve->base.id = -1;
+
+	ve->base.uabi_class = I915_ENGINE_CLASS_INVALID;
+	ve->base.instance = I915_ENGINE_CLASS_INVALID_VIRTUAL;
+	ve->base.uabi_instance = I915_ENGINE_CLASS_INVALID_VIRTUAL;
+	ve->base.saturated = ALL_ENGINES;
+
+	snprintf(ve->base.name, sizeof(ve->base.name), "virtual");
+
+	ve->base.sched_engine = i915_sched_engine_get(guc->sched_engine);
+
+	ve->base.cops = &virtual_guc_context_ops;
+	ve->base.request_alloc = guc_request_alloc;
+	ve->base.bump_serial = virtual_guc_bump_serial;
+
+	ve->base.submit_request = guc_submit_request;
+
+	ve->base.flags = I915_ENGINE_IS_VIRTUAL;
+
+	intel_context_init(&ve->context, &ve->base);
+
+	for (n = 0; n < count; n++) {
+		struct intel_engine_cs *sibling = siblings[n];
+
+		GEM_BUG_ON(!is_power_of_2(sibling->mask));
+		if (sibling->mask & ve->base.mask) {
+			DRM_DEBUG("duplicate %s entry in load balancer\n",
+				  sibling->name);
+			err = -EINVAL;
+			goto err_put;
+		}
+
+		ve->base.mask |= sibling->mask;
+
+		if (n != 0 && ve->base.class != sibling->class) {
+			DRM_DEBUG("invalid mixing of engine class, sibling %d, already %d\n",
+				  sibling->class, ve->base.class);
+			err = -EINVAL;
+			goto err_put;
+		} else if (n == 0) {
+			ve->base.class = sibling->class;
+			ve->base.uabi_class = sibling->uabi_class;
+			snprintf(ve->base.name, sizeof(ve->base.name),
+				 "v%dx%d", ve->base.class, count);
+			ve->base.context_size = sibling->context_size;
+
+			ve->base.add_active_request =
+				sibling->add_active_request;
+			ve->base.remove_active_request =
+				sibling->remove_active_request;
+			ve->base.emit_bb_start = sibling->emit_bb_start;
+			ve->base.emit_flush = sibling->emit_flush;
+			ve->base.emit_init_breadcrumb =
+				sibling->emit_init_breadcrumb;
+			ve->base.emit_fini_breadcrumb =
+				sibling->emit_fini_breadcrumb;
+			ve->base.emit_fini_breadcrumb_dw =
+				sibling->emit_fini_breadcrumb_dw;
+			ve->base.breadcrumbs =
+				intel_breadcrumbs_get(sibling->breadcrumbs);
+
+			ve->base.flags |= sibling->flags;
+
+			ve->base.props.timeslice_duration_ms =
+				sibling->props.timeslice_duration_ms;
+		}
+	}
+
+	return &ve->context;
+
+err_put:
+	intel_context_put(&ve->context);
+	return ERR_PTR(err);
+}
+
+bool intel_guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve)
+{
+	struct intel_engine_cs *engine;
+	intel_engine_mask_t tmp, mask = ve->mask;
+
+	for_each_engine_masked(engine, ve->gt, mask, tmp)
+		if (READ_ONCE(engine->props.heartbeat_interval_ms))
+			return true;
+
+	return false;
+}
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
index 3f7005018939..be767eb6ff71 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
@@ -10,6 +10,7 @@
 
 #include "intel_guc.h"
 
+struct drm_printer;
 struct intel_engine_cs;
 
 void intel_guc_submission_init_early(struct intel_guc *guc);
@@ -20,11 +21,23 @@ void intel_guc_submission_fini(struct intel_guc *guc);
 int intel_guc_preempt_work_create(struct intel_guc *guc);
 void intel_guc_preempt_work_destroy(struct intel_guc *guc);
 int intel_guc_submission_setup(struct intel_engine_cs *engine);
+void intel_guc_log_submission_info(struct intel_guc *guc,
+				   struct drm_printer *p);
+void intel_guc_log_context_info(struct intel_guc *guc, struct drm_printer *p);
+void intel_guc_dump_active_requests(struct intel_engine_cs *engine,
+				    struct i915_request *hung_rq,
+				    struct drm_printer *m);
+
+bool intel_guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve);
+
+int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
+				   atomic_t *wait_var,
+				   bool interruptible,
+				   long timeout);
 
 static inline bool intel_guc_submission_is_supported(struct intel_guc *guc)
 {
-	/* XXX: GuC submission is unavailable for now */
-	return false;
+	return guc->submission_supported;
 }
 
 static inline bool intel_guc_submission_is_wanted(struct intel_guc *guc)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 6d8b9233214e..61be0aa81492 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -34,8 +34,15 @@ static void uc_expand_default_options(struct intel_uc *uc)
 		return;
 	}
 
-	/* Default: enable HuC authentication only */
-	i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+	/* Intermediate platforms are HuC authentication only */
+	if (IS_DG1(i915) || IS_ALDERLAKE_S(i915)) {
+		drm_dbg(&i915->drm, "Disabling GuC only due to old platform\n");
+		i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+		return;
+	}
+
+	/* Default: enable HuC authentication and GuC submission */
+	i915->params.enable_guc = ENABLE_GUC_LOAD_HUC | ENABLE_GUC_SUBMISSION;
 }
 
 /* Reset GuC providing us with fresh state for both GuC and HuC.
@@ -120,6 +127,11 @@ void intel_uc_init_early(struct intel_uc *uc)
 		uc->ops = &uc_ops_off;
 }
 
+void intel_uc_init_late(struct intel_uc *uc)
+{
+	intel_guc_init_late(&uc->guc);
+}
+
 void intel_uc_driver_late_release(struct intel_uc *uc)
 {
 }
@@ -207,21 +219,6 @@ static void guc_handle_mmio_msg(struct intel_guc *guc)
 	spin_unlock_irq(&guc->irq_lock);
 }
 
-static void guc_reset_interrupts(struct intel_guc *guc)
-{
-	guc->interrupts.reset(guc);
-}
-
-static void guc_enable_interrupts(struct intel_guc *guc)
-{
-	guc->interrupts.enable(guc);
-}
-
-static void guc_disable_interrupts(struct intel_guc *guc)
-{
-	guc->interrupts.disable(guc);
-}
-
 static int guc_enable_communication(struct intel_guc *guc)
 {
 	struct intel_gt *gt = guc_to_gt(guc);
@@ -242,7 +239,7 @@ static int guc_enable_communication(struct intel_guc *guc)
 	guc_get_mmio_msg(guc);
 	guc_handle_mmio_msg(guc);
 
-	guc_enable_interrupts(guc);
+	intel_guc_enable_interrupts(guc);
 
 	/* check for CT messages received before we enabled interrupts */
 	spin_lock_irq(&gt->irq_lock);
@@ -265,7 +262,7 @@ static void guc_disable_communication(struct intel_guc *guc)
 	 */
 	guc_clear_mmio_msg(guc);
 
-	guc_disable_interrupts(guc);
+	intel_guc_disable_interrupts(guc);
 
 	intel_guc_ct_disable(&guc->ct);
 
@@ -323,9 +320,6 @@ static int __uc_init(struct intel_uc *uc)
 	if (i915_inject_probe_failure(uc_to_gt(uc)->i915))
 		return -ENOMEM;
 
-	/* XXX: GuC submission is unavailable for now */
-	GEM_BUG_ON(intel_uc_uses_guc_submission(uc));
-
 	ret = intel_guc_init(guc);
 	if (ret)
 		return ret;
@@ -463,7 +457,7 @@ static int __uc_init_hw(struct intel_uc *uc)
 	if (ret)
 		goto err_out;
 
-	guc_reset_interrupts(guc);
+	intel_guc_reset_interrupts(guc);
 
 	/* WaEnableuKernelHeaderValidFix:skl */
 	/* WaEnableGuCBootHashCheckNotSet:skl,bxt,kbl */
@@ -565,23 +559,67 @@ void intel_uc_reset_prepare(struct intel_uc *uc)
 {
 	struct intel_guc *guc = &uc->guc;
 
-	if (!intel_guc_is_ready(guc))
+	uc->reset_in_progress = true;
+
+	/* Nothing to do if GuC isn't supported */
+	if (!intel_uc_supports_guc(uc))
 		return;
 
+	/* Firmware expected to be running when this function is called */
+	if (!intel_guc_is_ready(guc))
+		goto sanitize;
+
+	if (intel_uc_uses_guc_submission(uc))
+		intel_guc_submission_reset_prepare(guc);
+
+sanitize:
 	__uc_sanitize(uc);
 }
 
+void intel_uc_reset(struct intel_uc *uc, bool stalled)
+{
+	struct intel_guc *guc = &uc->guc;
+
+	/* Firmware can not be running when this function is called  */
+	if (intel_uc_uses_guc_submission(uc))
+		intel_guc_submission_reset(guc, stalled);
+}
+
+void intel_uc_reset_finish(struct intel_uc *uc)
+{
+	struct intel_guc *guc = &uc->guc;
+
+	uc->reset_in_progress = false;
+
+	/* Firmware expected to be running when this function is called */
+	if (intel_guc_is_fw_running(guc) && intel_uc_uses_guc_submission(uc))
+		intel_guc_submission_reset_finish(guc);
+}
+
+void intel_uc_cancel_requests(struct intel_uc *uc)
+{
+	struct intel_guc *guc = &uc->guc;
+
+	/* Firmware can not be running when this function is called  */
+	if (intel_uc_uses_guc_submission(uc))
+		intel_guc_submission_cancel_requests(guc);
+}
+
 void intel_uc_runtime_suspend(struct intel_uc *uc)
 {
 	struct intel_guc *guc = &uc->guc;
-	int err;
 
 	if (!intel_guc_is_ready(guc))
 		return;
 
-	err = intel_guc_suspend(guc);
-	if (err)
-		DRM_DEBUG_DRIVER("Failed to suspend GuC, err=%d", err);
+	/*
+	 * Wait for any outstanding CTB before tearing down communication with
+	 * the GuC.
+	 */
+#define OUTSTANDING_CTB_TIMEOUT_PERIOD	(HZ / 5)
+	intel_guc_wait_for_pending_msg(guc, &guc->outstanding_submission_g2h,
+				       false, OUTSTANDING_CTB_TIMEOUT_PERIOD);
+	GEM_WARN_ON(atomic_read(&guc->outstanding_submission_g2h));
 
 	guc_disable_communication(guc);
 }
@@ -590,12 +628,16 @@ void intel_uc_suspend(struct intel_uc *uc)
 {
 	struct intel_guc *guc = &uc->guc;
 	intel_wakeref_t wakeref;
+	int err;
 
 	if (!intel_guc_is_ready(guc))
 		return;
 
-	with_intel_runtime_pm(uc_to_gt(uc)->uncore->rpm, wakeref)
-		intel_uc_runtime_suspend(uc);
+	with_intel_runtime_pm(&uc_to_gt(uc)->i915->runtime_pm, wakeref) {
+		err = intel_guc_suspend(guc);
+		if (err)
+			DRM_DEBUG_DRIVER("Failed to suspend GuC, err=%d", err);
+	}
 }
 
 static int __uc_resume(struct intel_uc *uc, bool enable_communication)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
index 9c954c589edf..e2da2b6e76e1 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
@@ -30,13 +30,19 @@ struct intel_uc {
 
 	/* Snapshot of GuC log from last failed load */
 	struct drm_i915_gem_object *load_err_log;
+
+	bool reset_in_progress;
 };
 
 void intel_uc_init_early(struct intel_uc *uc);
+void intel_uc_init_late(struct intel_uc *uc);
 void intel_uc_driver_late_release(struct intel_uc *uc);
 void intel_uc_driver_remove(struct intel_uc *uc);
 void intel_uc_init_mmio(struct intel_uc *uc);
 void intel_uc_reset_prepare(struct intel_uc *uc);
+void intel_uc_reset(struct intel_uc *uc, bool stalled);
+void intel_uc_reset_finish(struct intel_uc *uc);
+void intel_uc_cancel_requests(struct intel_uc *uc);
 void intel_uc_suspend(struct intel_uc *uc);
 void intel_uc_runtime_suspend(struct intel_uc *uc);
 int intel_uc_resume(struct intel_uc *uc);
@@ -81,6 +87,11 @@ uc_state_checkers(guc, guc_submission);
 #undef uc_state_checkers
 #undef __uc_state_checker
 
+static inline int intel_uc_wait_for_idle(struct intel_uc *uc, long timeout)
+{
+	return intel_guc_wait_for_idle(&uc->guc, timeout);
+}
+
 #define intel_uc_ops_function(_NAME, _OPS, _TYPE, _RET) \
 static inline _TYPE intel_uc_##_NAME(struct intel_uc *uc) \
 { \
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index cc745751ac53..a9084789deff 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -36,6 +36,7 @@
 #include "gt/intel_gt_clock_utils.h"
 #include "gt/intel_gt.h"
 #include "gt/intel_gt_pm.h"
+#include "gt/intel_gt.h"
 #include "gt/intel_gt_requests.h"
 #include "gt/intel_reset.h"
 #include "gt/intel_rc6.h"
@@ -49,6 +50,7 @@
 #include "i915_trace.h"
 #include "intel_pm.h"
 #include "intel_sideband.h"
+#include "gt/intel_lrc_reg.h"
 
 static inline struct drm_i915_private *node_to_i915(struct drm_info_node *node)
 {
diff --git a/drivers/gpu/drm/i915/i915_debugfs_params.c b/drivers/gpu/drm/i915/i915_debugfs_params.c
index 4e2b077692cb..8ecd8b42f048 100644
--- a/drivers/gpu/drm/i915/i915_debugfs_params.c
+++ b/drivers/gpu/drm/i915/i915_debugfs_params.c
@@ -6,9 +6,20 @@
 #include <linux/kernel.h>
 
 #include "i915_debugfs_params.h"
+#include "gt/intel_gt.h"
+#include "gt/uc/intel_guc.h"
 #include "i915_drv.h"
 #include "i915_params.h"
 
+#define MATCH_DEBUGFS_NODE_NAME(_file, _name)	(strcmp((_file)->f_path.dentry->d_name.name, (_name)) == 0)
+
+#define GET_I915(i915, name, ptr)	\
+	do {	\
+		struct i915_params *params;	\
+		params = container_of(((void *) (ptr)), typeof(*params), name);	\
+		(i915) = container_of(params, typeof(*(i915)), params);	\
+	} while(0)
+
 /* int param */
 static int i915_param_int_show(struct seq_file *m, void *data)
 {
@@ -24,6 +35,16 @@ static int i915_param_int_open(struct inode *inode, struct file *file)
 	return single_open(file, i915_param_int_show, inode->i_private);
 }
 
+static int notify_guc(struct drm_i915_private *i915)
+{
+	int ret = 0;
+
+	if (intel_uc_uses_guc_submission(&i915->gt.uc))
+		ret = intel_guc_global_policies_update(&i915->gt.uc.guc);
+
+	return ret;
+}
+
 static ssize_t i915_param_int_write(struct file *file,
 				    const char __user *ubuf, size_t len,
 				    loff_t *offp)
@@ -81,8 +102,10 @@ static ssize_t i915_param_uint_write(struct file *file,
 				     const char __user *ubuf, size_t len,
 				     loff_t *offp)
 {
+	struct drm_i915_private *i915;
 	struct seq_file *m = file->private_data;
 	unsigned int *value = m->private;
+	unsigned int old = *value;
 	int ret;
 
 	ret = kstrtouint_from_user(ubuf, len, 0, value);
@@ -95,6 +118,14 @@ static ssize_t i915_param_uint_write(struct file *file,
 			*value = b;
 	}
 
+	if (!ret && MATCH_DEBUGFS_NODE_NAME(file, "reset")) {
+		GET_I915(i915, reset, value);
+
+		ret = notify_guc(i915);
+		if (ret)
+			*value = old;
+	}
+
 	return ret ?: len;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 4d2d59a9942b..2b73ddb11c66 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -27,6 +27,7 @@
  */
 
 #include "gem/i915_gem_context.h"
+#include "gt/intel_gt.h"
 #include "gt/intel_gt_requests.h"
 
 #include "i915_drv.h"
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index a2c58b54a592..0f08bcfbe964 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1429,20 +1429,37 @@ capture_engine(struct intel_engine_cs *engine,
 {
 	struct intel_engine_capture_vma *capture = NULL;
 	struct intel_engine_coredump *ee;
-	struct i915_request *rq;
+	struct intel_context *ce;
+	struct i915_request *rq = NULL;
 	unsigned long flags;
 
 	ee = intel_engine_coredump_alloc(engine, GFP_KERNEL);
 	if (!ee)
 		return NULL;
 
-	spin_lock_irqsave(&engine->sched_engine->lock, flags);
-	rq = intel_engine_find_active_request(engine);
+	ce = intel_engine_get_hung_context(engine);
+	if (ce) {
+		intel_engine_clear_hung_context(engine);
+		rq = intel_context_find_active_request(ce);
+		if (!rq || !i915_request_started(rq))
+			goto no_request_capture;
+	} else {
+		/*
+		 * Getting here with GuC enabled means it is a forced error capture
+		 * with no actual hang. So, no need to attempt the execlist search.
+		 */
+		if (!intel_uc_uses_guc_submission(&engine->gt->uc)) {
+			spin_lock_irqsave(&engine->sched_engine->lock, flags);
+			rq = intel_engine_execlist_find_hung_request(engine);
+			spin_unlock_irqrestore(&engine->sched_engine->lock,
+					       flags);
+		}
+	}
 	if (rq)
 		capture = intel_engine_coredump_add_request(ee, rq,
 							    ATOMIC_MAYFAIL);
-	spin_unlock_irqrestore(&engine->sched_engine->lock, flags);
 	if (!capture) {
+no_request_capture:
 		kfree(ee);
 		return NULL;
 	}
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index e915ec034c98..7d9e90aa3ec0 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -4142,6 +4142,7 @@ enum {
 	FAULT_AND_CONTINUE /* Unsupported */
 };
 
+#define CTX_GTT_ADDRESS_MASK GENMASK(31, 12)
 #define GEN8_CTX_VALID (1 << 0)
 #define GEN8_CTX_FORCE_PD_RESTORE (1 << 1)
 #define GEN8_CTX_FORCE_RESTORE (1 << 2)
@@ -12287,6 +12288,7 @@ enum skl_power_gate {
 
 /* MOCS (Memory Object Control State) registers */
 #define GEN9_LNCFCMOCS(i)	_MMIO(0xb020 + (i) * 4)	/* L3 Cache Control */
+#define GEN9_LNCFCMOCS_REG_COUNT	32
 
 #define __GEN9_RCS0_MOCS0	0xc800
 #define GEN9_GFX_MOCS(i)	_MMIO(__GEN9_RCS0_MOCS0 + (i) * 4)
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 86b4c9f2613d..fde3278dd9b7 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -125,39 +125,17 @@ static void i915_fence_release(struct dma_fence *fence)
 	i915_sw_fence_fini(&rq->semaphore);
 
 	/*
-	 * Keep one request on each engine for reserved use under mempressure
-	 *
-	 * We do not hold a reference to the engine here and so have to be
-	 * very careful in what rq->engine we poke. The virtual engine is
-	 * referenced via the rq->context and we released that ref during
-	 * i915_request_retire(), ergo we must not dereference a virtual
-	 * engine here. Not that we would want to, as the only consumer of
-	 * the reserved engine->request_pool is the power management parking,
-	 * which must-not-fail, and that is only run on the physical engines.
-	 *
-	 * Since the request must have been executed to be have completed,
-	 * we know that it will have been processed by the HW and will
-	 * not be unsubmitted again, so rq->engine and rq->execution_mask
-	 * at this point is stable. rq->execution_mask will be a single
-	 * bit if the last and _only_ engine it could execution on was a
-	 * physical engine, if it's multiple bits then it started on and
-	 * could still be on a virtual engine. Thus if the mask is not a
-	 * power-of-two we assume that rq->engine may still be a virtual
-	 * engine and so a dangling invalid pointer that we cannot dereference
-	 *
-	 * For example, consider the flow of a bonded request through a virtual
-	 * engine. The request is created with a wide engine mask (all engines
-	 * that we might execute on). On processing the bond, the request mask
-	 * is reduced to one or more engines. If the request is subsequently
-	 * bound to a single engine, it will then be constrained to only
-	 * execute on that engine and never returned to the virtual engine
-	 * after timeslicing away, see __unwind_incomplete_requests(). Thus we
-	 * know that if the rq->execution_mask is a single bit, rq->engine
-	 * can be a physical engine with the exact corresponding mask.
+	 * Keep one request on each engine for reserved use under mempressure,
+	 * do not use with virtual engines as this really is only needed for
+	 * kernel contexts.
 	 */
-	if (is_power_of_2(rq->execution_mask) &&
-	    !cmpxchg(&rq->engine->request_pool, NULL, rq))
+	if (!intel_engine_is_virtual(rq->engine) &&
+	    !cmpxchg(&rq->engine->request_pool, NULL, rq)) {
+		intel_context_put(rq->context);
 		return;
+	}
+
+	intel_context_put(rq->context);
 
 	kmem_cache_free(global.slab_requests, rq);
 }
@@ -204,7 +182,7 @@ static bool irq_work_imm(struct irq_work *wrk)
 	return false;
 }
 
-static void __notify_execute_cb_imm(struct i915_request *rq)
+void i915_request_notify_execute_cb_imm(struct i915_request *rq)
 {
 	__notify_execute_cb(rq, irq_work_imm);
 }
@@ -278,37 +256,6 @@ i915_request_active_engine(struct i915_request *rq,
 	return ret;
 }
 
-
-static void remove_from_engine(struct i915_request *rq)
-{
-	struct intel_engine_cs *engine, *locked;
-
-	/*
-	 * Virtual engines complicate acquiring the engine timeline lock,
-	 * as their rq->engine pointer is not stable until under that
-	 * engine lock. The simple ploy we use is to take the lock then
-	 * check that the rq still belongs to the newly locked engine.
-	 */
-	locked = READ_ONCE(rq->engine);
-	spin_lock_irq(&locked->sched_engine->lock);
-	while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) {
-		spin_unlock(&locked->sched_engine->lock);
-		spin_lock(&engine->sched_engine->lock);
-		locked = engine;
-	}
-	list_del_init(&rq->sched.link);
-
-	clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
-	clear_bit(I915_FENCE_FLAG_HOLD, &rq->fence.flags);
-
-	/* Prevent further __await_execution() registering a cb, then flush */
-	set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags);
-
-	spin_unlock_irq(&locked->sched_engine->lock);
-
-	__notify_execute_cb_imm(rq);
-}
-
 static void __rq_init_watchdog(struct i915_request *rq)
 {
 	rq->watchdog.timer.function = NULL;
@@ -405,8 +352,7 @@ bool i915_request_retire(struct i915_request *rq)
 	 * after removing the breadcrumb and signaling it, so that we do not
 	 * inadvertently attach the breadcrumb to a completed request.
 	 */
-	if (!list_empty(&rq->sched.link))
-		remove_from_engine(rq);
+	rq->engine->remove_active_request(rq);
 	GEM_BUG_ON(!llist_empty(&rq->execute_cb));
 
 	__list_del_entry(&rq->link); /* poison neither prev/next (RCU walks) */
@@ -431,6 +377,7 @@ void i915_request_retire_upto(struct i915_request *rq)
 
 	do {
 		tmp = list_first_entry(&tl->requests, typeof(*tmp), link);
+		GEM_BUG_ON(!i915_request_completed(tmp));
 	} while (i915_request_retire(tmp) && tmp != rq);
 }
 
@@ -536,7 +483,7 @@ __await_execution(struct i915_request *rq,
 	if (llist_add(&cb->work.node.llist, &signal->execute_cb)) {
 		if (i915_request_is_active(signal) ||
 		    __request_in_flight(signal))
-			__notify_execute_cb_imm(signal);
+			i915_request_notify_execute_cb_imm(signal);
 	}
 
 	return 0;
@@ -667,11 +614,13 @@ bool __i915_request_submit(struct i915_request *request)
 				     request->ring->vaddr + request->postfix);
 
 	trace_i915_request_execute(request);
-	engine->serial++;
+	if (engine->bump_serial)
+		engine->bump_serial(engine);
+
 	result = true;
 
 	GEM_BUG_ON(test_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags));
-	list_move_tail(&request->sched.link, &engine->sched_engine->requests);
+	engine->add_active_request(request);
 active:
 	clear_bit(I915_FENCE_FLAG_PQUEUE, &request->fence.flags);
 	set_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags);
@@ -759,18 +708,6 @@ void i915_request_unsubmit(struct i915_request *request)
 	spin_unlock_irqrestore(&engine->sched_engine->lock, flags);
 }
 
-static void __cancel_request(struct i915_request *rq)
-{
-	struct intel_engine_cs *engine = NULL;
-
-	i915_request_active_engine(rq, &engine);
-
-	if (engine && intel_engine_pulse(engine))
-		intel_gt_handle_error(engine->gt, engine->mask, 0,
-				      "request cancellation by %s",
-				      current->comm);
-}
-
 void i915_request_cancel(struct i915_request *rq, int error)
 {
 	if (!i915_request_set_error_once(rq, error))
@@ -778,7 +715,7 @@ void i915_request_cancel(struct i915_request *rq, int error)
 
 	set_bit(I915_FENCE_FLAG_SENTINEL, &rq->fence.flags);
 
-	__cancel_request(rq);
+	intel_context_cancel_request(rq->context, rq);
 }
 
 static int __i915_sw_fence_call
@@ -950,7 +887,18 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 		}
 	}
 
-	rq->context = ce;
+	/*
+	 * Hold a reference to the intel_context over life of an i915_request.
+	 * Without this an i915_request can exist after the context has been
+	 * destroyed (e.g. request retired, context closed, but user space holds
+	 * a reference to the request from an out fence). In the case of GuC
+	 * submission + virtual engine, the engine that the request references
+	 * is also destroyed which can trigger a bad pointer deref in fence ops
+	 * (e.g. i915_fence_get_driver_name). We could likely change these
+	 * functions to avoid touching the engine but let's just be safe and
+	 * hold the intel_context reference.
+	 */
+	rq->context = intel_context_get(ce);
 	rq->engine = ce->engine;
 	rq->ring = ce->ring;
 	rq->execution_mask = ce->engine->mask;
@@ -1027,6 +975,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	GEM_BUG_ON(!list_empty(&rq->sched.waiters_list));
 
 err_free:
+	intel_context_put(ce);
 	kmem_cache_free(global.slab_requests, rq);
 err_unreserve:
 	intel_context_unpin(ce);
@@ -1317,6 +1266,9 @@ __i915_request_await_execution(struct i915_request *to,
 			return err;
 	}
 
+	trace_i915_request_dep_to(to);
+	trace_i915_request_dep_from(from);
+
 	/* Couple the dependency tree for PI on this exposed to->fence */
 	if (to->engine->sched_engine->schedule) {
 		err = i915_sched_node_add_dependency(&to->sched,
@@ -1379,6 +1331,9 @@ i915_request_await_external(struct i915_request *rq, struct dma_fence *fence)
 	return err;
 }
 
+static int
+i915_request_await_request(struct i915_request *to, struct i915_request *from);
+
 int
 i915_request_await_execution(struct i915_request *rq,
 			     struct dma_fence *fence)
@@ -1464,7 +1419,8 @@ i915_request_await_request(struct i915_request *to, struct i915_request *from)
 			return ret;
 	}
 
-	if (is_power_of_2(to->execution_mask | READ_ONCE(from->execution_mask)))
+	if (!intel_engine_uses_guc(to->engine) &&
+	    is_power_of_2(to->execution_mask | READ_ONCE(from->execution_mask)))
 		ret = await_request_submit(to, from);
 	else
 		ret = emit_semaphore_wait(to, from, I915_FENCE_GFP);
@@ -1625,6 +1581,8 @@ __i915_request_add_to_timeline(struct i915_request *rq)
 	prev = to_request(__i915_active_fence_set(&timeline->last_request,
 						  &rq->fence));
 	if (prev && !__i915_request_is_complete(prev)) {
+		bool uses_guc = intel_engine_uses_guc(rq->engine);
+
 		/*
 		 * The requests are supposed to be kept in order. However,
 		 * we need to be wary in case the timeline->last_request
@@ -1635,7 +1593,8 @@ __i915_request_add_to_timeline(struct i915_request *rq)
 			   i915_seqno_passed(prev->fence.seqno,
 					     rq->fence.seqno));
 
-		if (is_power_of_2(READ_ONCE(prev->engine)->mask | rq->engine->mask))
+		if ((!uses_guc && is_power_of_2(READ_ONCE(prev->engine)->mask | rq->engine->mask)) ||
+		    (uses_guc && prev->context == rq->context))
 			i915_sw_fence_await_sw_fence(&rq->submit,
 						     &prev->submit,
 						     &rq->submitq);
@@ -2076,6 +2035,47 @@ void i915_request_show(struct drm_printer *m,
 		   name);
 }
 
+static bool engine_match_ring(struct intel_engine_cs *engine, struct i915_request *rq)
+{
+	u32 ring = ENGINE_READ(engine, RING_START);
+
+	return ring == i915_ggtt_offset(rq->ring->vma);
+}
+
+static bool match_ring(struct i915_request *rq)
+{
+	struct intel_engine_cs *engine;
+	bool found;
+	int i;
+
+	if (!intel_engine_is_virtual(rq->engine))
+		return engine_match_ring(rq->engine, rq);
+
+	found = false;
+	i = 0;
+	while ((engine = intel_engine_get_sibling(rq->engine, i++))) {
+		found = engine_match_ring(engine, rq);
+		if (found)
+			break;
+	}
+
+	return found;
+}
+
+enum i915_request_state i915_test_request_state(struct i915_request *rq)
+{
+	if (i915_request_completed(rq))
+		return I915_REQUEST_COMPLETE;
+
+	if (!i915_request_started(rq))
+		return I915_REQUEST_PENDING;
+
+	if (match_ring(rq))
+		return I915_REQUEST_ACTIVE;
+
+	return I915_REQUEST_QUEUED;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/mock_request.c"
 #include "selftests/i915_request.c"
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index 5deb65ec5fa5..a3d4728ea06c 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -285,6 +285,14 @@ struct i915_request {
 		struct hrtimer timer;
 	} watchdog;
 
+	/*
+	 * Requests may need to be stalled when using GuC submission waiting for
+	 * certain GuC operations to complete. If that is the case, stalled
+	 * requests are added to a per context list of stalled requests. The
+	 * below list_head is the link in that list.
+	 */
+	struct list_head guc_fence_link;
+
 	I915_SELFTEST_DECLARE(struct {
 		struct list_head link;
 		unsigned long delay;
@@ -639,4 +647,17 @@ bool
 i915_request_active_engine(struct i915_request *rq,
 			   struct intel_engine_cs **active);
 
+void i915_request_notify_execute_cb_imm(struct i915_request *rq);
+
+enum i915_request_state
+{
+	I915_REQUEST_UNKNOWN = 0,
+	I915_REQUEST_COMPLETE,
+	I915_REQUEST_PENDING,
+	I915_REQUEST_QUEUED,
+	I915_REQUEST_ACTIVE,
+};
+
+enum i915_request_state i915_test_request_state(struct i915_request *rq);
+
 #endif /* I915_REQUEST_H */
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index 3a58a9130309..8766a8643469 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -431,7 +431,7 @@ void i915_request_show_with_schedule(struct drm_printer *m,
 	rcu_read_unlock();
 }
 
-void i915_sched_engine_free(struct kref *kref)
+static void default_destroy(struct kref *kref)
 {
 	struct i915_sched_engine *sched_engine =
 		container_of(kref, typeof(*sched_engine), ref);
@@ -440,6 +440,11 @@ void i915_sched_engine_free(struct kref *kref)
 	kfree(sched_engine);
 }
 
+static bool default_disabled(struct i915_sched_engine *sched_engine)
+{
+	return false;
+}
+
 struct i915_sched_engine *
 i915_sched_engine_create(unsigned int subclass)
 {
@@ -453,6 +458,8 @@ i915_sched_engine_create(unsigned int subclass)
 
 	sched_engine->queue = RB_ROOT_CACHED;
 	sched_engine->queue_priority_hint = INT_MIN;
+	sched_engine->destroy = default_destroy;
+	sched_engine->disabled = default_disabled;
 
 	INIT_LIST_HEAD(&sched_engine->requests);
 	INIT_LIST_HEAD(&sched_engine->hold);
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index 650ab8e0db9f..f4d9811ade5b 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -51,8 +51,6 @@ static inline void i915_priolist_free(struct i915_priolist *p)
 struct i915_sched_engine *
 i915_sched_engine_create(unsigned int subclass);
 
-void i915_sched_engine_free(struct kref *kref);
-
 static inline struct i915_sched_engine *
 i915_sched_engine_get(struct i915_sched_engine *sched_engine)
 {
@@ -63,7 +61,7 @@ i915_sched_engine_get(struct i915_sched_engine *sched_engine)
 static inline void
 i915_sched_engine_put(struct i915_sched_engine *sched_engine)
 {
-	kref_put(&sched_engine->ref, i915_sched_engine_free);
+	kref_put(&sched_engine->ref, sched_engine->destroy);
 }
 
 static inline bool
@@ -98,4 +96,10 @@ void i915_request_show_with_schedule(struct drm_printer *m,
 				     const char *prefix,
 				     int indent);
 
+static inline bool
+i915_sched_engine_disabled(struct i915_sched_engine *sched_engine)
+{
+	return sched_engine->disabled(sched_engine);
+}
+
 #endif /* _I915_SCHEDULER_H_ */
diff --git a/drivers/gpu/drm/i915/i915_scheduler_types.h b/drivers/gpu/drm/i915/i915_scheduler_types.h
index 5935c3152bdc..eaef233e9080 100644
--- a/drivers/gpu/drm/i915/i915_scheduler_types.h
+++ b/drivers/gpu/drm/i915/i915_scheduler_types.h
@@ -163,6 +163,16 @@ struct i915_sched_engine {
 	 */
 	void *private_data;
 
+	/**
+	 * @destroy: destroy schedule engine / cleanup in backend
+	 */
+	void	(*destroy)(struct kref *kref);
+
+	/**
+	 * @disabled: check if backend has disabled submission
+	 */
+	bool	(*disabled)(struct i915_sched_engine *sched_engine);
+
 	/**
 	 * @kick_backend: kick backend after a request's priority has changed
 	 */
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 6778ad2a14a4..937d3706af9b 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -794,30 +794,50 @@ DECLARE_EVENT_CLASS(i915_request,
 	    TP_STRUCT__entry(
 			     __field(u32, dev)
 			     __field(u64, ctx)
+			     __field(u32, guc_id)
 			     __field(u16, class)
 			     __field(u16, instance)
 			     __field(u32, seqno)
+			     __field(u32, tail)
 			     ),
 
 	    TP_fast_assign(
 			   __entry->dev = rq->engine->i915->drm.primary->index;
 			   __entry->class = rq->engine->uabi_class;
 			   __entry->instance = rq->engine->uabi_instance;
+			   __entry->guc_id = rq->context->guc_id;
 			   __entry->ctx = rq->fence.context;
 			   __entry->seqno = rq->fence.seqno;
+			   __entry->tail = rq->tail;
 			   ),
 
-	    TP_printk("dev=%u, engine=%u:%u, ctx=%llu, seqno=%u",
+	    TP_printk("dev=%u, engine=%u:%u, guc_id=%u, ctx=%llu, seqno=%u, tail=%u",
 		      __entry->dev, __entry->class, __entry->instance,
-		      __entry->ctx, __entry->seqno)
+		      __entry->guc_id, __entry->ctx, __entry->seqno,
+		      __entry->tail)
 );
 
 DEFINE_EVENT(i915_request, i915_request_add,
-	    TP_PROTO(struct i915_request *rq),
-	    TP_ARGS(rq)
+	     TP_PROTO(struct i915_request *rq),
+	     TP_ARGS(rq)
 );
 
 #if defined(CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS)
+DEFINE_EVENT(i915_request, i915_request_dep_to,
+	     TP_PROTO(struct i915_request *rq),
+	     TP_ARGS(rq)
+);
+
+DEFINE_EVENT(i915_request, i915_request_dep_from,
+	     TP_PROTO(struct i915_request *rq),
+	     TP_ARGS(rq)
+);
+
+DEFINE_EVENT(i915_request, i915_request_guc_submit,
+	     TP_PROTO(struct i915_request *rq),
+	     TP_ARGS(rq)
+);
+
 DEFINE_EVENT(i915_request, i915_request_submit,
 	     TP_PROTO(struct i915_request *rq),
 	     TP_ARGS(rq)
@@ -885,8 +905,117 @@ TRACE_EVENT(i915_request_out,
 			      __entry->ctx, __entry->seqno, __entry->completed)
 );
 
+DECLARE_EVENT_CLASS(intel_context,
+	    TP_PROTO(struct intel_context *ce),
+	    TP_ARGS(ce),
+
+	    TP_STRUCT__entry(
+			     __field(u32, guc_id)
+			     __field(int, pin_count)
+			     __field(u32, sched_state)
+			     __field(u32, guc_sched_state_no_lock)
+			     ),
+
+	    TP_fast_assign(
+			   __entry->guc_id = ce->guc_id;
+			   __entry->pin_count = atomic_read(&ce->pin_count);
+			   __entry->sched_state = ce->guc_state.sched_state;
+			   __entry->guc_sched_state_no_lock =
+			   atomic_read(&ce->guc_sched_state_no_lock);
+			   ),
+
+	    TP_printk("guc_id=%d, pin_count=%d sched_state=0x%x,0x%x",
+		      __entry->guc_id, __entry->pin_count, __entry->sched_state,
+		      __entry->guc_sched_state_no_lock)
+);
+
+DEFINE_EVENT(intel_context, intel_context_reset,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_ban,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_register,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_deregister,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_deregister_done,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_sched_enable,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_sched_disable,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_sched_done,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_create,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_fence_release,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_free,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_steal_guc_id,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_do_pin,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
+DEFINE_EVENT(intel_context, intel_context_do_unpin,
+	     TP_PROTO(struct intel_context *ce),
+	     TP_ARGS(ce)
+);
+
 #else
 #if !defined(TRACE_HEADER_MULTI_READ)
+static inline void
+trace_i915_request_dep_to(struct i915_request *rq)
+{
+}
+
+static inline void
+trace_i915_request_dep_from(struct i915_request *rq)
+{
+}
+
+static inline void
+trace_i915_request_guc_submit(struct i915_request *rq)
+{
+}
+
 static inline void
 trace_i915_request_submit(struct i915_request *rq)
 {
@@ -906,6 +1035,76 @@ static inline void
 trace_i915_request_out(struct i915_request *rq)
 {
 }
+
+static inline void
+trace_intel_context_reset(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_ban(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_register(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_deregister(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_deregister_done(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_sched_enable(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_sched_disable(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_sched_done(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_create(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_fence_release(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_free(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_steal_guc_id(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_do_pin(struct intel_context *ce)
+{
+}
+
+static inline void
+trace_intel_context_do_unpin(struct intel_context *ce)
+{
+}
 #endif
 #endif
 
diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
index bd5c96a77ba3..d67710d10615 100644
--- a/drivers/gpu/drm/i915/selftests/i915_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_request.c
@@ -1313,7 +1313,7 @@ static int __live_parallel_engine1(void *arg)
 		i915_request_add(rq);
 
 		err = 0;
-		if (i915_request_wait(rq, 0, HZ / 5) < 0)
+		if (i915_request_wait(rq, 0, HZ) < 0)
 			err = -ETIME;
 		i915_request_put(rq);
 		if (err)
@@ -1419,7 +1419,7 @@ static int __live_parallel_spin(void *arg)
 	}
 	igt_spinner_end(&spin);
 
-	if (err == 0 && i915_request_wait(rq, 0, HZ / 5) < 0)
+	if (err == 0 && i915_request_wait(rq, 0, HZ) < 0)
 		err = -EIO;
 	i915_request_put(rq);
 
diff --git a/drivers/gpu/drm/i915/selftests/igt_flush_test.c b/drivers/gpu/drm/i915/selftests/igt_flush_test.c
index 7b0939e3f007..a6c71fca61aa 100644
--- a/drivers/gpu/drm/i915/selftests/igt_flush_test.c
+++ b/drivers/gpu/drm/i915/selftests/igt_flush_test.c
@@ -19,7 +19,7 @@ int igt_flush_test(struct drm_i915_private *i915)
 
 	cond_resched();
 
-	if (intel_gt_wait_for_idle(gt, HZ / 5) == -ETIME) {
+	if (intel_gt_wait_for_idle(gt, HZ) == -ETIME) {
 		pr_err("%pS timed out, cancelling all further testing.\n",
 		       __builtin_return_address(0));
 
diff --git a/drivers/gpu/drm/i915/selftests/igt_live_test.c b/drivers/gpu/drm/i915/selftests/igt_live_test.c
index c130010a7033..1c721542e277 100644
--- a/drivers/gpu/drm/i915/selftests/igt_live_test.c
+++ b/drivers/gpu/drm/i915/selftests/igt_live_test.c
@@ -5,7 +5,7 @@
  */
 
 #include "i915_drv.h"
-#include "gt/intel_gt_requests.h"
+#include "gt/intel_gt.h"
 
 #include "../i915_selftest.h"
 #include "igt_flush_test.h"
diff --git a/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c
new file mode 100644
index 000000000000..ebd6d69b3315
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c
@@ -0,0 +1,89 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2018 Intel Corporation
+ */
+
+//#include "gt/intel_engine_user.h"
+#include "gt/intel_gt.h"
+#include "i915_drv.h"
+#include "i915_selftest.h"
+
+#include "selftests/intel_scheduler_helpers.h"
+
+#define REDUCED_TIMESLICE	5
+#define REDUCED_PREEMPT		10
+#define WAIT_FOR_RESET_TIME	10000
+
+int intel_selftest_modify_policy(struct intel_engine_cs *engine,
+				 struct intel_selftest_saved_policy *saved,
+				 u32 modify_type)
+
+{
+	int err;
+
+	saved->reset = engine->i915->params.reset;
+	saved->flags = engine->flags;
+	saved->timeslice = engine->props.timeslice_duration_ms;
+	saved->preempt_timeout = engine->props.preempt_timeout_ms;
+
+	switch (modify_type) {
+	case SELFTEST_SCHEDULER_MODIFY_FAST_RESET:
+		/*
+		 * Enable force pre-emption on time slice expiration
+		 * together with engine reset on pre-emption timeout.
+		 * This is required to make the GuC notice and reset
+		 * the single hanging context.
+		 * Also, reduce the preemption timeout to something
+		 * small to speed the test up.
+		 */
+		engine->i915->params.reset = 2;
+		engine->flags |= I915_ENGINE_WANT_FORCED_PREEMPTION;
+		engine->props.timeslice_duration_ms = REDUCED_TIMESLICE;
+		engine->props.preempt_timeout_ms = REDUCED_PREEMPT;
+		break;
+
+	case SELFTEST_SCHEDULER_MODIFY_NO_HANGCHECK:
+		engine->props.preempt_timeout_ms = 0;
+		break;
+
+	default:
+		pr_err("Invalid scheduler policy modification type: %d!\n", modify_type);
+		return -EINVAL;
+	}
+
+	if (!intel_engine_uses_guc(engine))
+		return 0;
+
+	err = intel_guc_global_policies_update(&engine->gt->uc.guc);
+	if (err)
+		intel_selftest_restore_policy(engine, saved);
+
+	return err;
+}
+
+int intel_selftest_restore_policy(struct intel_engine_cs *engine,
+				  struct intel_selftest_saved_policy *saved)
+{
+	/* Restore the original policies */
+	engine->i915->params.reset = saved->reset;
+	engine->flags = saved->flags;
+	engine->props.timeslice_duration_ms = saved->timeslice;
+	engine->props.preempt_timeout_ms = saved->preempt_timeout;
+
+	if (!intel_engine_uses_guc(engine))
+		return 0;
+
+	return intel_guc_global_policies_update(&engine->gt->uc.guc);
+}
+
+int intel_selftest_wait_for_rq(struct i915_request *rq)
+{
+	long ret;
+
+	ret = i915_request_wait(rq, 0, WAIT_FOR_RESET_TIME);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h
new file mode 100644
index 000000000000..050bc5a8ba8b
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2014-2019 Intel Corporation
+ */
+
+#ifndef _INTEL_SELFTEST_SCHEDULER_HELPERS_H_
+#define _INTEL_SELFTEST_SCHEDULER_HELPERS_H_
+
+#include <linux/types.h>
+
+struct i915_request;
+struct intel_engine_cs;
+
+struct intel_selftest_saved_policy
+{
+	u32 flags;
+	u32 reset;
+	u64 timeslice;
+	u64 preempt_timeout;
+};
+
+enum selftest_scheduler_modify
+{
+	SELFTEST_SCHEDULER_MODIFY_NO_HANGCHECK = 0,
+	SELFTEST_SCHEDULER_MODIFY_FAST_RESET,
+};
+
+int intel_selftest_modify_policy(struct intel_engine_cs *engine,
+				 struct intel_selftest_saved_policy *saved,
+				 enum selftest_scheduler_modify modify_type);
+int intel_selftest_restore_policy(struct intel_engine_cs *engine,
+				  struct intel_selftest_saved_policy *saved);
+int intel_selftest_wait_for_rq( struct i915_request *rq);
+
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index d189c4bd4bef..4f8180146888 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -52,7 +52,8 @@ void mock_device_flush(struct drm_i915_private *i915)
 	do {
 		for_each_engine(engine, gt, id)
 			mock_engine_flush(engine);
-	} while (intel_gt_retire_requests_timeout(gt, MAX_SCHEDULE_TIMEOUT));
+	} while (intel_gt_retire_requests_timeout(gt, MAX_SCHEDULE_TIMEOUT,
+						  NULL));
 }
 
 static void mock_device_release(struct drm_device *dev)
-- 
2.25.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 02/16] drm/i915/guc/slpc: Initial definitions for slpc
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 01/16] drm/i915/guc: Squashed patch - DO NOT REVIEW Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10 14:27   ` Michal Wajdeczko
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 03/16] drm/i915/guc/slpc: Gate Host RPS when slpc is enabled Vinay Belgaumkar
                   ` (16 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Add helpers to check for SLPC support. This feature is currently
supported on gen12+ and is enabled whenever GuC submission is
enabled/selected.
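
For reference, the new checkers are meant to layer like the existing
submission checkers. A minimal illustrative sketch (the caller below is
hypothetical; the helpers are the ones added by this patch):

	static bool slpc_check_example(struct intel_guc *guc)
	{
		/* HW/FW capability: GuC submission supported and gen12+ */
		if (!intel_guc_slpc_is_supported(guc))
			return false;

		/* selection simply follows the GuC submission selection */
		if (!intel_guc_slpc_is_wanted(guc))
			return false;

		/* in use only when GuC submission itself is in use */
		return intel_guc_slpc_is_used(guc);
	}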

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.c        |  1 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h        |  2 ++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 21 +++++++++++++++++++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.h | 16 ++++++++++++++
 drivers/gpu/drm/i915/gt/uc/intel_uc.c         |  6 ++++--
 drivers/gpu/drm/i915/gt/uc/intel_uc.h         |  1 +
 6 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index 979128e28372..b9a809f2d221 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -157,6 +157,7 @@ void intel_guc_init_early(struct intel_guc *guc)
 	intel_guc_ct_init_early(&guc->ct);
 	intel_guc_log_init_early(&guc->log);
 	intel_guc_submission_init_early(guc);
+	intel_guc_slpc_init_early(guc);
 
 	mutex_init(&guc->send_mutex);
 	spin_lock_init(&guc->irq_lock);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 5d94cf482516..e5a456918b88 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -57,6 +57,8 @@ struct intel_guc {
 
 	bool submission_supported;
 	bool submission_selected;
+	bool slpc_supported;
+	bool slpc_selected;
 
 	struct i915_vma *ads_vma;
 	struct __guc_ads_blob *ads_blob;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 9c102bf0c8e3..e2644a05f298 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2351,6 +2351,27 @@ void intel_guc_submission_init_early(struct intel_guc *guc)
 	guc->submission_selected = __guc_submission_selected(guc);
 }
 
+static bool __guc_slpc_supported(struct intel_guc *guc)
+{
+	/* GuC slpc is unavailable for pre-Gen12 */
+	return guc->submission_supported &&
+		GRAPHICS_VER(guc_to_gt(guc)->i915) >= 12;
+}
+
+static bool __guc_slpc_selected(struct intel_guc *guc)
+{
+	if (!intel_guc_slpc_is_supported(guc))
+		return false;
+
+	return guc->submission_selected;
+}
+
+void intel_guc_slpc_init_early(struct intel_guc *guc)
+{
+	guc->slpc_supported = __guc_slpc_supported(guc);
+	guc->slpc_selected = __guc_slpc_selected(guc);
+}
+
 static inline struct intel_context *
 g2h_context_lookup(struct intel_guc *guc, u32 desc_idx)
 {
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
index be767eb6ff71..7ae5fd052faf 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
@@ -13,6 +13,7 @@
 struct drm_printer;
 struct intel_engine_cs;
 
+void intel_guc_slpc_init_early(struct intel_guc *guc);
 void intel_guc_submission_init_early(struct intel_guc *guc);
 int intel_guc_submission_init(struct intel_guc *guc);
 void intel_guc_submission_enable(struct intel_guc *guc);
@@ -50,4 +51,19 @@ static inline bool intel_guc_submission_is_used(struct intel_guc *guc)
 	return intel_guc_is_used(guc) && intel_guc_submission_is_wanted(guc);
 }
 
+static inline bool intel_guc_slpc_is_supported(struct intel_guc *guc)
+{
+	return guc->slpc_supported;
+}
+
+static inline bool intel_guc_slpc_is_wanted(struct intel_guc *guc)
+{
+	return guc->slpc_selected;
+}
+
+static inline bool intel_guc_slpc_is_used(struct intel_guc *guc)
+{
+	return intel_guc_submission_is_used(guc) && intel_guc_slpc_is_wanted(guc);
+}
+
 #endif
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 61be0aa81492..dca5f6d0641b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -76,16 +76,18 @@ static void __confirm_options(struct intel_uc *uc)
 	struct drm_i915_private *i915 = uc_to_gt(uc)->i915;
 
 	drm_dbg(&i915->drm,
-		"enable_guc=%d (guc:%s submission:%s huc:%s)\n",
+		"enable_guc=%d (guc:%s submission:%s huc:%s slpc:%s)\n",
 		i915->params.enable_guc,
 		yesno(intel_uc_wants_guc(uc)),
 		yesno(intel_uc_wants_guc_submission(uc)),
-		yesno(intel_uc_wants_huc(uc)));
+		yesno(intel_uc_wants_huc(uc)),
+		yesno(intel_uc_wants_guc_slpc(uc)));
 
 	if (i915->params.enable_guc == 0) {
 		GEM_BUG_ON(intel_uc_wants_guc(uc));
 		GEM_BUG_ON(intel_uc_wants_guc_submission(uc));
 		GEM_BUG_ON(intel_uc_wants_huc(uc));
+		GEM_BUG_ON(intel_uc_wants_guc_slpc(uc));
 		return;
 	}
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
index e2da2b6e76e1..38e465fd8a0c 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
@@ -83,6 +83,7 @@ __uc_state_checker(x, func, uses, used)
 uc_state_checkers(guc, guc);
 uc_state_checkers(huc, huc);
 uc_state_checkers(guc, guc_submission);
+uc_state_checkers(guc, guc_slpc);
 
 #undef uc_state_checkers
 #undef __uc_state_checker
-- 
2.25.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 03/16] drm/i915/guc/slpc: Gate Host RPS when slpc is enabled
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 01/16] drm/i915/guc: Squashed patch - DO NOT REVIEW Vinay Belgaumkar
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 02/16] drm/i915/guc/slpc: Initial definitions for slpc Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 04/16] drm/i915/guc/slpc: Lay out slpc init/enable/disable/fini Vinay Belgaumkar
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Disable host RPS when SLPC is enabled. Also ensure intel_uc_init_early()
is called before RPS init so that the SLPC support check is valid. The
RPS up/down interrupts are not needed when SLPC is enabled; however, we
still need the ARAT interrupt, which will be enabled separately.
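
The gating applied throughout intel_rps.c reduces to an early return at
each host RPS entry point, as in this sketch (the entry point below is
a placeholder; rps_uses_slpc() is the helper added here):

	static void host_rps_entry_point(struct intel_rps *rps)
	{
		/* GuC SLPC owns frequency management, host RPS stays out */
		if (rps_uses_slpc(rps))
			return;

		/* ... legacy host RPS programming ... */
	}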

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_gt.c  |  2 +-
 drivers/gpu/drm/i915/gt/intel_rps.c | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index ceeb517ba259..f94d2e1ec3fe 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -41,8 +41,8 @@ void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915)
 	intel_gt_init_timelines(gt);
 	intel_gt_pm_init_early(gt);
 
-	intel_rps_init_early(&gt->rps);
 	intel_uc_init_early(&gt->uc);
+	intel_rps_init_early(&gt->rps);
 }
 
 int intel_gt_probe_lmem(struct intel_gt *gt)
diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index 0c8e7f2b06f0..e858eeb2c59d 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -37,6 +37,13 @@ static struct intel_uncore *rps_to_uncore(struct intel_rps *rps)
 	return rps_to_gt(rps)->uncore;
 }
 
+static bool rps_uses_slpc(struct intel_rps *rps)
+{
+	struct intel_gt *gt = rps_to_gt(rps);
+
+	return intel_uc_uses_guc_slpc(&gt->uc);
+}
+
 static u32 rps_pm_sanitize_mask(struct intel_rps *rps, u32 mask)
 {
 	return mask & ~rps->pm_intrmsk_mbz;
@@ -167,6 +174,8 @@ static void rps_enable_interrupts(struct intel_rps *rps)
 {
 	struct intel_gt *gt = rps_to_gt(rps);
 
+	GEM_BUG_ON(rps_uses_slpc(rps));
+
 	GT_TRACE(gt, "interrupts:on rps->pm_events: %x, rps_pm_mask:%x\n",
 		 rps->pm_events, rps_pm_mask(rps, rps->last_freq));
 
@@ -771,6 +780,8 @@ static int gen6_rps_set(struct intel_rps *rps, u8 val)
 	struct drm_i915_private *i915 = rps_to_i915(rps);
 	u32 swreq;
 
+	GEM_BUG_ON(rps_uses_slpc(rps));
+
 	if (GRAPHICS_VER(i915) >= 9)
 		swreq = GEN9_FREQUENCY(val);
 	else if (IS_HASWELL(i915) || IS_BROADWELL(i915))
@@ -861,6 +872,9 @@ void intel_rps_park(struct intel_rps *rps)
 {
 	int adj;
 
+	if (!intel_rps_is_enabled(rps))
+		return;
+
 	GEM_BUG_ON(atomic_read(&rps->num_waiters));
 
 	if (!intel_rps_clear_active(rps))
@@ -1829,6 +1843,9 @@ void intel_rps_init(struct intel_rps *rps)
 {
 	struct drm_i915_private *i915 = rps_to_i915(rps);
 
+	if (rps_uses_slpc(rps))
+		return;
+
 	if (IS_CHERRYVIEW(i915))
 		chv_rps_init(rps);
 	else if (IS_VALLEYVIEW(i915))
@@ -1885,6 +1902,9 @@ void intel_rps_init(struct intel_rps *rps)
 
 void intel_rps_sanitize(struct intel_rps *rps)
 {
+	if (rps_uses_slpc(rps))
+		return;
+
 	if (GRAPHICS_VER(rps_to_i915(rps)) >= 6)
 		rps_disable_interrupts(rps);
 }
-- 
2.25.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 04/16] drm/i915/guc/slpc: Lay out slpc init/enable/disable/fini
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (2 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 03/16] drm/i915/guc/slpc: Gate Host RPS when slpc is enabled Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10 14:35   ` Michal Wajdeczko
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 05/16] drm/i915/guc/slpc: Adding slpc communication interfaces Vinay Belgaumkar
                   ` (14 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Add the SLPC header and source files, along with stub init/enable/fini
functions that later patches in this series will fill in.
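
The expected flow once later patches wire these stubs into GuC
init/fini is roughly the following (the call sites are an assumption
here, not part of this patch):

	ret = intel_guc_slpc_init(&guc->slpc);		/* set up shared data */
	if (ret)
		return ret;

	ret = intel_guc_slpc_enable(&guc->slpc);	/* send reset event to GuC */
	if (ret)
		return ret;

	/* ... */

	intel_guc_slpc_fini(&guc->slpc);		/* release shared data */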

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
---
 drivers/gpu/drm/i915/Makefile               |  1 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h      |  2 ++
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 34 +++++++++++++++++++++
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h | 16 ++++++++++
 4 files changed, 53 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index ab7679957623..d8eac4468df9 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -186,6 +186,7 @@ i915-y += gt/uc/intel_uc.o \
 	  gt/uc/intel_guc_fw.o \
 	  gt/uc/intel_guc_log.o \
 	  gt/uc/intel_guc_log_debugfs.o \
+	  gt/uc/intel_guc_slpc.o \
 	  gt/uc/intel_guc_submission.o \
 	  gt/uc/intel_huc.o \
 	  gt/uc/intel_huc_debugfs.o \
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index e5a456918b88..0dbbd9cf553f 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -15,6 +15,7 @@
 #include "intel_guc_ct.h"
 #include "intel_guc_log.h"
 #include "intel_guc_reg.h"
+#include "intel_guc_slpc.h"
 #include "intel_uc_fw.h"
 #include "i915_utils.h"
 #include "i915_vma.h"
@@ -30,6 +31,7 @@ struct intel_guc {
 	struct intel_uc_fw fw;
 	struct intel_guc_log log;
 	struct intel_guc_ct ct;
+	struct intel_guc_slpc slpc;
 
 	/* Global engine used to submit requests to GuC */
 	struct i915_sched_engine *sched_engine;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
new file mode 100644
index 000000000000..c1f569d2300d
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
@@ -0,0 +1,34 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2020 Intel Corporation
+ */
+
+#include "intel_guc_slpc.h"
+
+int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
+{
+	return 0;
+}
+
+/*
+ * intel_guc_slpc_enable() - Start SLPC
+ * @slpc: pointer to intel_guc_slpc.
+ *
+ * SLPC is enabled by setting up the shared data structure and
+ * sending a reset event to GuC SLPC. Initial data is set up in
+ * intel_guc_slpc_init. Here we send the reset event. We do
+ * not currently need a slpc_disable since this is taken care
+ * of automatically when a reset/suspend occurs and the guc
+ * channels are destroyed.
+ *
+ * Return: 0 on success, non-zero error code on failure.
+ */
+int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
+{
+	return 0;
+}
+
+void intel_guc_slpc_fini(struct intel_guc_slpc *slpc)
+{
+}
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
new file mode 100644
index 000000000000..74fd86769163
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
@@ -0,0 +1,16 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2020 Intel Corporation
+ */
+#ifndef _INTEL_GUC_SLPC_H_
+#define _INTEL_GUC_SLPC_H_
+
+struct intel_guc_slpc {
+};
+
+int intel_guc_slpc_init(struct intel_guc_slpc *slpc);
+int intel_guc_slpc_enable(struct intel_guc_slpc *slpc);
+void intel_guc_slpc_fini(struct intel_guc_slpc *slpc);
+
+#endif
-- 
2.25.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 05/16] drm/i915/guc/slpc: Adding slpc communication interfaces
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (3 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 04/16] drm/i915/guc/slpc: Lay out slpc init/enable/disable/fini Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10 15:52   ` Michal Wajdeczko
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 06/16] drm/i915/guc/slpc: Allocate, initialize and release slpc Vinay Belgaumkar
                   ` (13 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Replicate, for the most part, the SLPC interface header from the GuC
firmware. Some SLPC mode-based parameters have not been included since
we are not using them.
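
As an illustration of how the definitions below fit together (a sketch,
not part of the diff): an SLPC request is identified by
SLPC_EVENT(id, argc), which packs the event id into the upper bits and
the argument count into the low 8 bits:

	/* set a parameter: two arguments, the parameter id and its value */
	u32 set = SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2);

	/* unset a previously overridden parameter, passing just its id */
	u32 unset = SLPC_EVENT(SLPC_EVENT_PARAMETER_UNSET, 1);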

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.c        |   4 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h   |   2 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h   |   2 +
 .../gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h  | 255 ++++++++++++++++++
 4 files changed, 263 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index b9a809f2d221..9d61b2d54de4 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -202,11 +202,15 @@ static u32 guc_ctl_debug_flags(struct intel_guc *guc)
 
 static u32 guc_ctl_feature_flags(struct intel_guc *guc)
 {
+	struct intel_gt *gt = guc_to_gt(guc);
 	u32 flags = 0;
 
 	if (!intel_guc_submission_is_used(guc))
 		flags |= GUC_CTL_DISABLE_SCHEDULER;
 
+	if (intel_uc_uses_guc_slpc(&gt->uc))
+		flags |= GUC_CTL_ENABLE_SLPC;
+
 	return flags;
 }
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
index 94bb1ca6f889..19e2504d7a36 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
@@ -114,6 +114,8 @@
 #define   GUC_ADS_ADDR_SHIFT		1
 #define   GUC_ADS_ADDR_MASK		(0xFFFFF << GUC_ADS_ADDR_SHIFT)
 
+#define GUC_CTL_ENABLE_SLPC            BIT(2)
+
 #define GUC_CTL_MAX_DWORDS		(SOFT_SCRATCH_COUNT - 2) /* [1..14] */
 
 /* Generic GT SysInfo data types */
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
index 74fd86769163..98036459a1a3 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
@@ -6,6 +6,8 @@
 #ifndef _INTEL_GUC_SLPC_H_
 #define _INTEL_GUC_SLPC_H_
 
+#include "intel_guc_slpc_fwif.h"
+
 struct intel_guc_slpc {
 };
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h
new file mode 100644
index 000000000000..2a5e71428374
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h
@@ -0,0 +1,255 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2020 Intel Corporation
+ */
+#ifndef _INTEL_GUC_SLPC_FWIF_H_
+#define _INTEL_GUC_SLPC_FWIF_H_
+
+#include <linux/types.h>
+
+/* This file replicates the header in GuC code for handling SLPC related
+ * data structures and sizes
+ */
+
+/* SLPC exposes certain parameters for global configuration by the host.
+ * These are referred to as override parameters, because in most cases
+ * the host will not need to modify the default values used by SLPC.
+ * SLPC remembers the default values which allows the host to easily restore
+ * them by simply unsetting the override. The host can set or unset override
+ * parameters during SLPC (re-)initialization using the SLPC Reset event.
+ * The host can also set or unset override parameters on the fly using the
+ * Parameter Set and Parameter Unset events
+ */
+#define SLPC_MAX_OVERRIDE_PARAMETERS	256
+#define SLPC_OVERRIDE_BITFIELD_SIZE \
+		(SLPC_MAX_OVERRIDE_PARAMETERS / 32)
+
+#define SLPC_PAGE_SIZE_BYTES			4096
+#define SLPC_CACHELINE_SIZE_BYTES		64
+#define SLPC_SHARE_DATA_SIZE_BYTE_HEADER	SLPC_CACHELINE_SIZE_BYTES
+#define SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO	SLPC_CACHELINE_SIZE_BYTES
+#define SLPC_SHARE_DATA_SIZE_BYTE_TASK_STATE	SLPC_CACHELINE_SIZE_BYTES
+#define SLPC_SHARE_DATA_MODE_DEFN_TABLE_SIZE	SLPC_PAGE_SIZE_BYTES
+
+#define SLPC_SHARE_DATA_SIZE_BYTE_MAX		(2 * SLPC_PAGE_SIZE_BYTES)
+
+/* Cacheline size aligned (Total size needed for
+ * SLPM_KMD_MAX_OVERRIDE_PARAMETERS=256 is 1088 bytes)
+ */
+#define SLPC_SHARE_DATA_SIZE_BYTE_PARAM		(((((SLPC_MAX_OVERRIDE_PARAMETERS * 4) \
+						+ ((SLPC_MAX_OVERRIDE_PARAMETERS / 32) * 4)) \
+		+ (SLPC_CACHELINE_SIZE_BYTES-1)) / SLPC_CACHELINE_SIZE_BYTES)*SLPC_CACHELINE_SIZE_BYTES)
+
+#define SLPC_SHARE_DATA_SIZE_BYTE_OTHER		(SLPC_SHARE_DATA_SIZE_BYTE_MAX - \
+					(SLPC_SHARE_DATA_SIZE_BYTE_HEADER \
+					+ SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO \
+					+ SLPC_SHARE_DATA_SIZE_BYTE_TASK_STATE \
+					+ SLPC_SHARE_DATA_SIZE_BYTE_PARAM \
+					+ SLPC_SHARE_DATA_MODE_DEFN_TABLE_SIZE))
+
+#define SLPC_EVENT(id, argc)			((u32)(id) << 8 | (argc))
+
+#define SLPC_PARAM_TASK_DEFAULT			0
+#define SLPC_PARAM_TASK_ENABLED			1
+#define SLPC_PARAM_TASK_DISABLED		2
+#define SLPC_PARAM_TASK_UNKNOWN			3
+
+enum slpc_status {
+	SLPC_STATUS_OK = 0,
+	SLPC_STATUS_ERROR = 1,
+	SLPC_STATUS_ILLEGAL_COMMAND = 2,
+	SLPC_STATUS_INVALID_ARGS = 3,
+	SLPC_STATUS_INVALID_PARAMS = 4,
+	SLPC_STATUS_INVALID_DATA = 5,
+	SLPC_STATUS_OUT_OF_RANGE = 6,
+	SLPC_STATUS_NOT_SUPPORTED = 7,
+	SLPC_STATUS_NOT_IMPLEMENTED = 8,
+	SLPC_STATUS_NO_DATA = 9,
+	SLPC_STATUS_EVENT_NOT_REGISTERED = 10,
+	SLPC_STATUS_REGISTER_LOCKED = 11,
+	SLPC_STATUS_TEMPORARILY_UNAVAILABLE = 12,
+	SLPC_STATUS_VALUE_ALREADY_SET = 13,
+	SLPC_STATUS_VALUE_ALREADY_UNSET = 14,
+	SLPC_STATUS_VALUE_NOT_CHANGED = 15,
+	SLPC_STATUS_MEMIO_ERROR = 16,
+	SLPC_STATUS_EVENT_QUEUED_REQ_DPC = 17,
+	SLPC_STATUS_EVENT_QUEUED_NOREQ_DPC = 18,
+	SLPC_STATUS_NO_EVENT_QUEUED = 19,
+	SLPC_STATUS_OUT_OF_SPACE = 20,
+	SLPC_STATUS_TIMEOUT = 21,
+	SLPC_STATUS_NO_LOCK = 22,
+	SLPC_STATUS_MAX
+};
+
+enum slpc_event_id {
+	SLPC_EVENT_RESET = 0,
+	SLPC_EVENT_SHUTDOWN = 1,
+	SLPC_EVENT_PLATFORM_INFO_CHANGE = 2,
+	SLPC_EVENT_DISPLAY_MODE_CHANGE = 3,
+	SLPC_EVENT_FLIP_COMPLETE = 4,
+	SLPC_EVENT_QUERY_TASK_STATE = 5,
+	SLPC_EVENT_PARAMETER_SET = 6,
+	SLPC_EVENT_PARAMETER_UNSET = 7,
+};
+
+enum slpc_param_id {
+	SLPC_PARAM_TASK_ENABLE_GTPERF = 0,
+	SLPC_PARAM_TASK_DISABLE_GTPERF = 1,
+	SLPC_PARAM_TASK_ENABLE_BALANCER = 2,
+	SLPC_PARAM_TASK_DISABLE_BALANCER = 3,
+	SLPC_PARAM_TASK_ENABLE_DCC = 4,
+	SLPC_PARAM_TASK_DISABLE_DCC = 5,
+	SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ = 6,
+	SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ = 7,
+	SLPC_PARAM_GLOBAL_MIN_GT_SLICE_FREQ_MHZ = 8,
+	SLPC_PARAM_GLOBAL_MAX_GT_SLICE_FREQ_MHZ = 9,
+	SLPC_PARAM_GTPERF_THRESHOLD_MAX_FPS = 10,
+	SLPC_PARAM_GLOBAL_DISABLE_GT_FREQ_MANAGEMENT = 11,
+	SLPC_PARAM_GTPERF_ENABLE_FRAMERATE_STALLING = 12,
+	SLPC_PARAM_GLOBAL_DISABLE_RC6_MODE_CHANGE = 13,
+	SLPC_PARAM_GLOBAL_OC_UNSLICE_FREQ_MHZ = 14,
+	SLPC_PARAM_GLOBAL_OC_SLICE_FREQ_MHZ = 15,
+	SLPC_PARAM_GLOBAL_ENABLE_IA_GT_BALANCING = 16,
+	SLPC_PARAM_GLOBAL_ENABLE_ADAPTIVE_BURST_TURBO = 17,
+	SLPC_PARAM_GLOBAL_ENABLE_EVAL_MODE = 18,
+	SLPC_PARAM_GLOBAL_ENABLE_BALANCER_IN_NON_GAMING_MODE = 19,
+	SLPC_PARAM_GLOBAL_RT_MODE_TURBO_FREQ_DELTA_MHZ = 20,
+	SLPC_PARAM_PWRGATE_RC_MODE = 21,
+	SLPC_PARAM_EDR_MODE_COMPUTE_TIMEOUT_MS = 22,
+	SLPC_PARAM_EDR_QOS_FREQ_MHZ = 23,
+	SLPC_PARAM_MEDIA_FF_RATIO_MODE = 24,
+	SLPC_PARAM_ENABLE_IA_FREQ_LIMITING = 25,
+	SLPC_PARAM_STRATEGIES = 26,
+	SLPC_PARAM_POWER_PROFILE = 27,
+	SLPC_IGNORE_EFFICIENT_FREQUENCY = 28,
+	SLPC_MAX_PARAM = 32,
+};
+
+enum slpc_global_state {
+	SLPC_GLOBAL_STATE_NOT_RUNNING = 0,
+	SLPC_GLOBAL_STATE_INITIALIZING = 1,
+	SLPC_GLOBAL_STATE_RESETTING = 2,
+	SLPC_GLOBAL_STATE_RUNNING = 3,
+	SLPC_GLOBAL_STATE_SHUTTING_DOWN = 4,
+	SLPC_GLOBAL_STATE_ERROR = 5
+};
+
+enum slpc_platform_sku {
+	SLPC_PLATFORM_SKU_UNDEFINED = 0,
+	SLPC_PLATFORM_SKU_ULX = 1,
+	SLPC_PLATFORM_SKU_ULT = 2,
+	SLPC_PLATFORM_SKU_T = 3,
+	SLPC_PLATFORM_SKU_MOBL = 4,
+	SLPC_PLATFORM_SKU_DT = 5,
+	SLPC_PLATFORM_SKU_UNKNOWN = 6,
+};
+
+struct slpc_platform_info {
+	union {
+		u32 sku;  /**< SKU info */
+		struct {
+			u32 reserved:8;
+			u32 fused_slice_count:8;
+			u32 reserved1:16;
+		};
+	};
+	union {
+		u32 bitfield2;       /**< IA capability info*/
+		struct {
+			u32 max_p0_freq_bins:8;
+			u32 p1_freq_bins:8;
+			u32 pe_freq_bins:8;
+			u32 pn_freq_bins:8;
+		};
+	};
+	u32 reserved2[2];
+} __packed;
+
+struct slpc_task_state_data {
+	union {
+		u32 bitfield1;
+		struct {
+			u32 gtperf_task_active:1;
+			u32 gtperf_stall_possible:1;
+			u32 gtperf_gaming_mode:1;
+			u32 gtperf_target_fps:8;
+			u32 dcc_task_active:1;
+			u32 in_dcc:1;
+			u32 in_dct:1;
+			u32 freq_switch_active:1;
+			u32 ibc_enabled:1;
+			u32 ibc_active:1;
+			u32 pg1_enabled:1;
+			u32 pg1_active:1;
+		};
+	};
+	union {
+		u32 bitfield2;
+		struct {
+			u32 max_unslice_freq:8;
+			u32 min_unslice_freq:8;
+			u32 max_slice_freq:8;
+			u32 min_slice_freq:8;
+		};
+	};
+} __packed;
+
+struct slpc_shared_data {
+	union {
+		struct {
+			/* Total size in bytes of this buffer. */
+			u32 shared_data_size;
+			u32 global_state;
+			u32 display_data_addr;
+		};
+		unsigned char reserved_header[SLPC_SHARE_DATA_SIZE_BYTE_HEADER];
+	};
+
+	union {
+		struct slpc_platform_info platform_info;
+		unsigned char reserved_platform[SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO];
+	};
+
+	union {
+		struct slpc_task_state_data task_state_data;
+		unsigned char reserved_task_state[SLPC_SHARE_DATA_SIZE_BYTE_TASK_STATE];
+	};
+
+	union {
+		struct {
+			u32 override_params_set_bits[SLPC_OVERRIDE_BITFIELD_SIZE];
+			u32 override_params_values[SLPC_MAX_OVERRIDE_PARAMETERS];
+		};
+		unsigned char reserved_override_parameter[SLPC_SHARE_DATA_SIZE_BYTE_PARAM];
+	};
+
+	unsigned char reserved_other[SLPC_SHARE_DATA_SIZE_BYTE_OTHER];
+
+	/* PAGE 2 (4096 bytes), mode based parameter will be removed soon */
+	unsigned char reserved_mode_definition[SLPC_SHARE_DATA_MODE_DEFN_TABLE_SIZE];
+} __packed;
+
+enum slpc_reset_flags {
+	SLPC_RESET_FLAG_TDR_OCCURRED = (1 << 0)
+};
+
+#define SLPC_EVENT_MAX_INPUT_ARGS  9
+#define SLPC_EVENT_MAX_OUTPUT_ARGS 1
+
+union slpc_event_input_header {
+	u32 value;
+	struct {
+		u32 num_args:8;
+		u32 event_id:8;
+	};
+};
+
+struct slpc_event_input {
+	u32 h2g_action_id;
+	union slpc_event_input_header header;
+	u32 args[SLPC_EVENT_MAX_INPUT_ARGS];
+} __packed;
+
+#endif
-- 
2.25.0

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 06/16] drm/i915/guc/slpc: Allocate, initialize and release slpc
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (4 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 05/16] drm/i915/guc/slpc: Adding slpc communication interfaces Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10 16:05   ` Michal Wajdeczko
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 07/16] drm/i915/guc/slpc: Enable slpc and add related H2G events Vinay Belgaumkar
                   ` (12 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Allocate the data structures needed for SLPC and add functions
for initializing it on the host side.

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.c      | 11 +++++++
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 36 ++++++++++++++++++++-
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h | 20 ++++++++++++
 3 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index 9d61b2d54de4..82863a9bc8e8 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -336,6 +336,12 @@ int intel_guc_init(struct intel_guc *guc)
 			goto err_ct;
 	}
 
+	if (intel_guc_slpc_is_used(guc)) {
+		ret = intel_guc_slpc_init(&guc->slpc);
+		if (ret)
+			goto err_submission;
+	}
+
 	/* now that everything is perma-pinned, initialize the parameters */
 	guc_init_params(guc);
 
@@ -346,6 +352,8 @@ int intel_guc_init(struct intel_guc *guc)
 
 	return 0;
 
+err_submission:
+	intel_guc_submission_fini(guc);
 err_ct:
 	intel_guc_ct_fini(&guc->ct);
 err_ads:
@@ -368,6 +376,9 @@ void intel_guc_fini(struct intel_guc *guc)
 
 	i915_ggtt_disable_guc(gt->ggtt);
 
+	if (intel_guc_slpc_is_used(guc))
+		intel_guc_slpc_fini(&guc->slpc);
+
 	if (intel_guc_submission_is_used(guc))
 		intel_guc_submission_fini(guc);
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
index c1f569d2300d..94e2f19951aa 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
@@ -4,11 +4,41 @@
  * Copyright © 2020 Intel Corporation
  */
 
+#include <asm/msr-index.h>
+
+#include "gt/intel_gt.h"
+#include "gt/intel_rps.h"
+
+#include "i915_drv.h"
 #include "intel_guc_slpc.h"
+#include "intel_pm.h"
+
+static inline struct intel_guc *slpc_to_guc(struct intel_guc_slpc *slpc)
+{
+	return container_of(slpc, struct intel_guc, slpc);
+}
+
+static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
+{
+	struct intel_guc *guc = slpc_to_guc(slpc);
+	int err;
+	u32 size = PAGE_ALIGN(sizeof(struct slpc_shared_data));
+
+	err = intel_guc_allocate_and_map_vma(guc, size, &slpc->vma, &slpc->vaddr);
+	if (unlikely(err)) {
+		DRM_ERROR("Failed to allocate slpc struct (err=%d)\n", err);
+		i915_vma_unpin_and_release(&slpc->vma, I915_VMA_RELEASE_MAP);
+		return err;
+	}
+
+	return err;
+}
 
 int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
 {
-	return 0;
+	GEM_BUG_ON(slpc->vma);
+
+	return slpc_shared_data_init(slpc);
 }
 
 /*
@@ -31,4 +61,8 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
 
 void intel_guc_slpc_fini(struct intel_guc_slpc *slpc)
 {
+	if (!slpc->vma)
+		return;
+
+	i915_vma_unpin_and_release(&slpc->vma, I915_VMA_RELEASE_MAP);
 }
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
index 98036459a1a3..a2643b904165 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
@@ -3,12 +3,32 @@
  *
  * Copyright © 2020 Intel Corporation
  */
+
 #ifndef _INTEL_GUC_SLPC_H_
 #define _INTEL_GUC_SLPC_H_
 
+#include <linux/mutex.h>
 #include "intel_guc_slpc_fwif.h"
 
 struct intel_guc_slpc {
+	/* Protects access to vma and SLPC actions */
+	struct i915_vma *vma;
+	void *vaddr;
+
+	/* platform frequency limits */
+	u32 min_freq;
+	u32 rp0_freq;
+	u32 rp1_freq;
+
+	/* frequency softlimits */
+	u32 min_freq_softlimit;
+	u32 max_freq_softlimit;
+
+	struct {
+		u32 param_id;
+		u32 param_value;
+		u32 param_override;
+	} debug;
 };
 
 int intel_guc_slpc_init(struct intel_guc_slpc *slpc);
-- 
2.25.0

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 07/16] drm/i915/guc/slpc: Enable slpc and add related H2G events
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (5 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 06/16] drm/i915/guc/slpc: Allocate, initialize and release slpc Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10 17:37   ` Michal Wajdeczko
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency Vinay Belgaumkar
                   ` (11 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Add methods for interacting with GuC to enable SLPC. Enable
SLPC after GuC submission has been established. GuC load will
fail if SLPC cannot be successfully initialized. Add various
helper methods to set/unset the parameters for SLPC. They can
be set either via H2G calls or by directly setting bits in the
shared data structure.
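
As a rough sketch (using the field names from the SLPC shared data
struct added earlier in this series), the "direct" path boils down to
flagging the parameter id in a bitfield and writing its value next to
it:

	/* mark parameter 'id' as overridden and hand it 'value' */
	data->override_params_set_bits[id / 32] |= BIT(id % 32);
	data->override_params_values[id] = value;

GuC only consumes override_params_values[id] when the matching bit is
set, which is what the unset helper relies on to fall back to SLPC
defaults.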

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c   | 221 ++++++++++++++++++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |   4 -
 drivers/gpu/drm/i915/gt/uc/intel_uc.c         |  10 +
 3 files changed, 231 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
index 94e2f19951aa..e579408d1c19 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
@@ -18,6 +18,61 @@ static inline struct intel_guc *slpc_to_guc(struct intel_guc_slpc *slpc)
 	return container_of(slpc, struct intel_guc, slpc);
 }
 
+static inline struct intel_gt *slpc_to_gt(struct intel_guc_slpc *slpc)
+{
+	return guc_to_gt(slpc_to_guc(slpc));
+}
+
+static inline struct drm_i915_private *slpc_to_i915(struct intel_guc_slpc *slpc)
+{
+	return (slpc_to_gt(slpc))->i915;
+}
+
+static void slpc_mem_set_param(struct slpc_shared_data *data,
+				u32 id, u32 value)
+{
+	GEM_BUG_ON(id >= SLPC_MAX_OVERRIDE_PARAMETERS);
+	/* When the flag bit is set, corresponding value will be read
+	 * and applied by slpc.
+	 */
+	data->override_params_set_bits[id >> 5] |= (1 << (id % 32));
+	data->override_params_values[id] = value;
+}
+
+static void slpc_mem_unset_param(struct slpc_shared_data *data,
+				 u32 id)
+{
+	GEM_BUG_ON(id >= SLPC_MAX_OVERRIDE_PARAMETERS);
+	/* When the flag bit is unset, corresponding value will not be
+	 * read by slpc.
+	 */
+	data->override_params_set_bits[id >> 5] &= (~(1 << (id % 32)));
+	data->override_params_values[id] = 0;
+}
+
+static void slpc_mem_task_control(struct slpc_shared_data *data,
+				 u64 val, u32 enable_id, u32 disable_id)
+{
+	/* Enabling a param involves setting the enable_id
+	 * to 1 and disable_id to 0. Setting it to default
+	 * will unset both enable and disable ids and let
+	 * slpc choose it's default values.
+	 */
+	if (val == SLPC_PARAM_TASK_DEFAULT) {
+		/* set default */
+		slpc_mem_unset_param(data, enable_id);
+		slpc_mem_unset_param(data, disable_id);
+	} else if (val == SLPC_PARAM_TASK_ENABLED) {
+		/* set enable */
+		slpc_mem_set_param(data, enable_id, 1);
+		slpc_mem_set_param(data, disable_id, 0);
+	} else if (val == SLPC_PARAM_TASK_DISABLED) {
+		/* set disable */
+		slpc_mem_set_param(data, disable_id, 1);
+		slpc_mem_set_param(data, enable_id, 0);
+	}
+}
+
 static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
 {
 	struct intel_guc *guc = slpc_to_guc(slpc);
@@ -34,6 +89,128 @@ static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
 	return err;
 }
 
+/*
+ * Send SLPC event to GuC
+ */
+static int slpc_send(struct intel_guc_slpc *slpc,
+			struct slpc_event_input *input,
+			u32 in_len)
+{
+	struct intel_guc *guc = slpc_to_guc(slpc);
+	u32 *action;
+
+	action = (u32 *)input;
+	action[0] = INTEL_GUC_ACTION_SLPC_REQUEST;
+
+	return intel_guc_send(guc, action, in_len);
+}
+
+static bool slpc_running(struct intel_guc_slpc *slpc)
+{
+	struct slpc_shared_data *data;
+	u32 slpc_global_state;
+
+	GEM_BUG_ON(!slpc->vma);
+
+	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
+	data = slpc->vaddr;
+
+	slpc_global_state = data->global_state;
+
+	return slpc_global_state == SLPC_GLOBAL_STATE_RUNNING;
+}
+
+static int host2guc_slpc_query_task_state(struct intel_guc_slpc *slpc)
+{
+	struct intel_guc *guc = slpc_to_guc(slpc);
+	u32 shared_data_gtt_offset = intel_guc_ggtt_offset(guc, slpc->vma);
+	struct slpc_event_input data = {0};
+
+	data.header.value = SLPC_EVENT(SLPC_EVENT_QUERY_TASK_STATE, 2);
+	data.args[0] = shared_data_gtt_offset;
+	data.args[1] = 0;
+
+	return slpc_send(slpc, &data, 4);
+}
+
+static int slpc_read_task_state(struct intel_guc_slpc *slpc)
+{
+	return host2guc_slpc_query_task_state(slpc);
+}
+
+static const char *slpc_state_stringify(enum slpc_global_state state)
+{
+	const char *str = NULL;
+
+	switch (state) {
+	case SLPC_GLOBAL_STATE_NOT_RUNNING:
+		str = "not running";
+		break;
+	case SLPC_GLOBAL_STATE_INITIALIZING:
+		str = "initializing";
+		break;
+	case SLPC_GLOBAL_STATE_RESETTING:
+		str = "resetting";
+		break;
+	case SLPC_GLOBAL_STATE_RUNNING:
+		str = "running";
+		break;
+	case SLPC_GLOBAL_STATE_SHUTTING_DOWN:
+		str = "shutting down";
+		break;
+	case SLPC_GLOBAL_STATE_ERROR:
+		str = "error";
+		break;
+	default:
+		str = "unknown";
+		break;
+	}
+
+	return str;
+}
+
+static const char *get_slpc_state(struct intel_guc_slpc *slpc)
+{
+	struct slpc_shared_data *data;
+	u32 slpc_global_state;
+
+	GEM_BUG_ON(!slpc->vma);
+
+	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
+	data = slpc->vaddr;
+
+	slpc_global_state = data->global_state;
+
+	return slpc_state_stringify(slpc_global_state);
+}
+
+static int host2guc_slpc_reset(struct intel_guc_slpc *slpc)
+{
+	struct intel_guc *guc = slpc_to_guc(slpc);
+	u32 shared_data_gtt_offset = intel_guc_ggtt_offset(guc, slpc->vma);
+	struct slpc_event_input data = {0};
+	int ret;
+
+	data.header.value = SLPC_EVENT(SLPC_EVENT_RESET, 2);
+	data.args[0] = shared_data_gtt_offset;
+	data.args[1] = 0;
+
+	/* TODO: Hardcoded 4 needs define */
+	ret = slpc_send(slpc, &data, 4);
+
+	if (!ret) {
+		/* TODO: How long to wait until SLPC is running? */
+		if (wait_for(slpc_running(slpc), 5)) {
+			DRM_ERROR("SLPC not enabled! State = %s\n",
+				  get_slpc_state(slpc));
+			return -EIO;
+		}
+	}
+
+	return ret;
+}
+
 int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
 {
 	GEM_BUG_ON(slpc->vma);
@@ -56,6 +233,50 @@ int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
  */
 int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
 {
+	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+	struct slpc_shared_data *data;
+	int ret;
+
+	GEM_BUG_ON(!slpc->vma);
+
+	memset(slpc->vaddr, 0, sizeof(struct slpc_shared_data));
+
+	data = slpc->vaddr;
+	data->shared_data_size = sizeof(struct slpc_shared_data);
+
+	/* Enable only GTPERF task, Disable others */
+	slpc_mem_task_control(data, SLPC_PARAM_TASK_ENABLED,
+				SLPC_PARAM_TASK_ENABLE_GTPERF,
+				SLPC_PARAM_TASK_DISABLE_GTPERF);
+
+	slpc_mem_task_control(data, SLPC_PARAM_TASK_DISABLED,
+				SLPC_PARAM_TASK_ENABLE_BALANCER,
+				SLPC_PARAM_TASK_DISABLE_BALANCER);
+
+	slpc_mem_task_control(data, SLPC_PARAM_TASK_DISABLED,
+				SLPC_PARAM_TASK_ENABLE_DCC,
+				SLPC_PARAM_TASK_DISABLE_DCC);
+
+	ret = host2guc_slpc_reset(slpc);
+	if (ret) {
+		drm_err(&i915->drm, "SLPC Reset event returned %d\n", ret);
+		return -EIO;
+	}
+
+	drm_info(&i915->drm, "SLPC state: %s\n", get_slpc_state(slpc));
+
+	if (slpc_read_task_state(slpc))
+		drm_err(&i915->drm, "Unable to read task state data\n");
+
+	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
+
+	/* min and max frequency limits being used by SLPC */
+	drm_info(&i915->drm, "SLPC min freq: %u MHz, max freq: %u MHz\n",
+			DIV_ROUND_CLOSEST(data->task_state_data.min_unslice_freq *
+				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER),
+			DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
+				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index e2644a05f298..3e76d4d5f7bb 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2321,10 +2321,6 @@ void intel_guc_submission_enable(struct intel_guc *guc)
 
 void intel_guc_submission_disable(struct intel_guc *guc)
 {
-	struct intel_gt *gt = guc_to_gt(guc);
-
-	GEM_BUG_ON(gt->awake); /* GT should be parked first */
-
 	/* Note: By the time we're here, GuC may have already been reset */
 }
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index dca5f6d0641b..7b6c767d3eb0 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -501,6 +501,14 @@ static int __uc_init_hw(struct intel_uc *uc)
 	if (intel_uc_uses_guc_submission(uc))
 		intel_guc_submission_enable(guc);
 
+	if (intel_uc_uses_guc_slpc(uc)) {
+		ret = intel_guc_slpc_enable(&guc->slpc);
+		if (ret)
+			goto err_submission;
+		drm_info(&i915->drm, "GuC SLPC %s\n",
+			 enableddisabled(intel_uc_uses_guc_slpc(uc)));
+	}
+
 	drm_info(&i915->drm, "%s firmware %s version %u.%u %s:%s\n",
 		 intel_uc_fw_type_repr(INTEL_UC_FW_TYPE_GUC), guc->fw.path,
 		 guc->fw.major_ver_found, guc->fw.minor_ver_found,
@@ -521,6 +529,8 @@ static int __uc_init_hw(struct intel_uc *uc)
 	/*
 	 * We've failed to load the firmware :(
 	 */
+err_submission:
+	intel_guc_submission_disable(guc);
 err_log_capture:
 	__uc_capture_load_err_log(uc);
 err_out:
-- 
2.25.0

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (6 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 07/16] drm/i915/guc/slpc: Enable slpc and add related H2G events Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10  3:07   ` kernel test robot
                     ` (2 more replies)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 09/16] drm/i915/guc/slpc: Add get max/min freq hooks Vinay Belgaumkar
                   ` (10 subsequent siblings)
  18 siblings, 3 replies; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Add H2G param-set helpers for setting the min and max
frequencies to be used by SLPC.
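
A minimal usage sketch (error handling elided, given a struct intel_gt
*gt; the 300/1100 values are just examples and frequencies are passed
in MHz):

	struct intel_guc_slpc *slpc = &gt->uc.guc.slpc;

	intel_guc_slpc_set_min_freq(slpc, 300);
	intel_guc_slpc_set_max_freq(slpc, 1100);

Each helper grabs a runtime PM wakeref and issues a PARAMETER_SET H2G
for the corresponding unslice frequency parameter.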

Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 94 +++++++++++++++++++++
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h |  2 +
 2 files changed, 96 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
index e579408d1c19..19cb26479942 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
@@ -106,6 +106,19 @@ static int slpc_send(struct intel_guc_slpc *slpc,
 	return intel_guc_send(guc, action, in_len);
 }
 
+static int host2guc_slpc_set_param(struct intel_guc_slpc *slpc,
+				   u32 id, u32 value)
+{
+	struct slpc_event_input data = {0};
+
+	data.header.value = SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2);
+	data.args[0] = id;
+	data.args[1] = value;
+
+	return slpc_send(slpc, &data, 4);
+}
+
 static bool slpc_running(struct intel_guc_slpc *slpc)
 {
 	struct slpc_shared_data *data;
@@ -134,6 +147,19 @@ static int host2guc_slpc_query_task_state(struct intel_guc_slpc *slpc)
 	return slpc_send(slpc, &data, 4);
 }
 
+static int slpc_set_param(struct intel_guc_slpc *slpc, u32 id, u32 value)
+{
+	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+
+	GEM_BUG_ON(id >= SLPC_MAX_PARAM);
+
+	if (host2guc_slpc_set_param(slpc, id, value)) {
+		drm_err(&i915->drm, "Unable to set param %x", id);
+		return -EIO;
+	}
+
+	return 0;
+}
+
 static int slpc_read_task_state(struct intel_guc_slpc *slpc)
 {
 	return host2guc_slpc_query_task_state(slpc);
@@ -218,6 +244,74 @@ int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
 	return slpc_shared_data_init(slpc);
 }
 
+/**
+ * intel_guc_slpc_set_max_freq() - Set max frequency limit for SLPC.
+ * @slpc: pointer to intel_guc_slpc.
+ * @val: encoded frequency
+ *
+ * This function will invoke GuC SLPC action to update the max frequency
+ * limit for slice and unslice.
+ *
+ * Return: 0 on success, non-zero error code on failure.
+ */
+int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val)
+{
+	int ret;
+	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+	intel_wakeref_t wakeref;
+
+	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+
+	ret = slpc_set_param(slpc,
+		       SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ,
+		       val);
+
+	if (ret) {
+		drm_err(&i915->drm,
+			"Set max frequency unslice returned %d\n", ret);
+		ret = -EIO;
+		goto done;
+	}
+
+done:
+	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+	return ret;
+}
+
+/**
+ * intel_guc_slpc_set_min_freq() - Set min frequency limit for SLPC.
+ * @slpc: pointer to intel_guc_slpc.
+ * @val: encoded frequency
+ *
+ * This function will invoke GuC SLPC action to update the min frequency
+ * limit.
+ *
+ * Return: 0 on success, non-zero error code on failure.
+ */
+int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
+{
+	int ret;
+	struct intel_guc *guc = slpc_to_guc(slpc);
+	struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
+	intel_wakeref_t wakeref;
+
+	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+
+	ret = slpc_set_param(slpc,
+		       SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
+		       val);
+	if (ret) {
+		drm_err(&i915->drm,
+			"Set min frequency for unslice returned %d\n", ret);
+		ret = -EIO;
+		goto done;
+	}
+
+done:
+	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+	return ret;
+}
+
 /*
  * intel_guc_slpc_enable() - Start SLPC
  * @slpc: pointer to intel_guc_slpc.
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
index a2643b904165..a473e1ea7c10 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
@@ -34,5 +34,7 @@ struct intel_guc_slpc {
 int intel_guc_slpc_init(struct intel_guc_slpc *slpc);
 int intel_guc_slpc_enable(struct intel_guc_slpc *slpc);
 void intel_guc_slpc_fini(struct intel_guc_slpc *slpc);
+int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val);
+int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val);
 
 #endif
-- 
2.25.0

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 09/16] drm/i915/guc/slpc: Add get max/min freq hooks
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (7 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10 17:52   ` Michal Wajdeczko
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 10/16] drm/i915/guc/slpc: Add debugfs for slpc info Vinay Belgaumkar
                   ` (9 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Add helpers to read the min/max frequency being used
by SLPC. This is done by sending an H2G command which forces
SLPC to update the shared data struct, which can then be
read.
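
The raw unslice ratio in the shared data is converted to MHz the same
way the rest of the driver does it; as a sketch (assuming the usual
gen9+ values of 50 for GT_FREQUENCY_MULTIPLIER and 3 for
GEN9_FREQ_SCALER):

	u32 freq;

	intel_guc_slpc_get_max_freq(slpc, &freq);
	/* freq = DIV_ROUND_CLOSEST(ratio * 50, 3), e.g. ratio 22 -> ~367 MHz */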

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 58 +++++++++++++++++++++
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h |  2 +
 2 files changed, 60 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
index 19cb26479942..98a283d31734 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
@@ -278,6 +278,35 @@ int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val)
 	return ret;
 }
 
+int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val)
+{
+	struct slpc_shared_data *data;
+	intel_wakeref_t wakeref;
+	struct drm_i915_private *i915 = guc_to_gt(slpc_to_guc(slpc))->i915;
+	int ret = 0;
+
+	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+
+	/* Force GuC to update task data */
+	if (slpc_read_task_state(slpc)) {
+		drm_err(&i915->drm, "Unable to update task data\n");
+		ret = -EIO;
+		goto done;
+	}
+
+	GEM_BUG_ON(!slpc->vma);
+
+	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
+	data = slpc->vaddr;
+
+	*val = DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
+				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER);
+
+done:
+	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+	return ret;
+}
+
 /**
  * intel_guc_slpc_set_min_freq() - Set min frequency limit for SLPC.
  * @slpc: pointer to intel_guc_slpc.
@@ -312,6 +341,35 @@ int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
 	return ret;
 }
 
+int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val)
+{
+	struct slpc_shared_data *data;
+	intel_wakeref_t wakeref;
+	struct drm_i915_private *i915 = guc_to_gt(slpc_to_guc(slpc))->i915;
+	int ret = 0;
+
+	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+
+	/* Force GuC to update task data */
+	if (slpc_read_task_state(slpc)) {
+		drm_err(&i915->drm, "Unable to update task data\n");
+		ret = -EIO;
+		goto done;
+	}
+
+	GEM_BUG_ON(!slpc->vma);
+
+	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
+	data = slpc->vaddr;
+
+	*val = DIV_ROUND_CLOSEST(data->task_state_data.min_unslice_freq *
+				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER);
+
+done:
+	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+	return ret;
+}
+
 /*
  * intel_guc_slpc_enable() - Start SLPC
  * @slpc: pointer to intel_guc_slpc.
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
index a473e1ea7c10..2cb830cdacb5 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
@@ -36,5 +36,7 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc);
 void intel_guc_slpc_fini(struct intel_guc_slpc *slpc);
 int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val);
 int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val);
+int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val);
+int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val);
 
 #endif
-- 
2.25.0

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 10/16] drm/i915/guc/slpc: Add debugfs for slpc info
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (8 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 09/16] drm/i915/guc/slpc: Add get max/min freq hooks Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10 18:08   ` Michal Wajdeczko
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 11/16] drm/i915/guc/slpc: Enable ARAT timer interrupt Vinay Belgaumkar
                   ` (8 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Print out relevant SLPC info from the SLPC shared data structure.

We send an H2G message which forces SLPC to update the shared
data structure with the latest information before reading it.
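
With SLPC enabled the new file can then be inspected at runtime, e.g.
(the exact debugfs path depends on the card/minor):

	# cat /sys/kernel/debug/dri/0/gt/uc/guc_slpc_info
	SLPC state: running
	...

The fields printed mirror slpc_task_state_data from the shared data
page.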

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
---
 .../gpu/drm/i915/gt/uc/intel_guc_debugfs.c    | 16 ++++++
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c   | 53 +++++++++++++++++++
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h   |  3 ++
 3 files changed, 72 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
index 9a03ff56e654..bef749e54601 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
@@ -12,6 +12,7 @@
 #include "gt/uc/intel_guc_ct.h"
 #include "gt/uc/intel_guc_ads.h"
 #include "gt/uc/intel_guc_submission.h"
+#include "gt/uc/intel_guc_slpc.h"
 
 static int guc_info_show(struct seq_file *m, void *data)
 {
@@ -50,11 +51,26 @@ static int guc_registered_contexts_show(struct seq_file *m, void *data)
 }
 DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_registered_contexts);
 
+static int guc_slpc_info_show(struct seq_file *m, void *unused)
+{
+	struct intel_guc *guc = m->private;
+	struct intel_guc_slpc *slpc = &guc->slpc;
+	struct drm_printer p = drm_seq_file_printer(m);
+
+	if (!intel_guc_slpc_is_used(guc))
+		return -ENODEV;
+
+	return intel_guc_slpc_info(slpc, &p);
+}
+DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_slpc_info);
+
 void intel_guc_debugfs_register(struct intel_guc *guc, struct dentry *root)
 {
 	static const struct debugfs_gt_file files[] = {
 		{ "guc_info", &guc_info_fops, NULL },
 		{ "guc_registered_contexts", &guc_registered_contexts_fops, NULL },
+		{ "guc_slpc_info", &guc_slpc_info_fops, NULL },
 	};
 
 	if (!intel_guc_is_supported(guc))
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
index 98a283d31734..d179ba14ece6 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
@@ -432,6 +432,59 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
 	return 0;
 }
 
+int intel_guc_slpc_info(struct intel_guc_slpc *slpc, struct drm_printer *p)
+{
+	struct drm_i915_private *i915 = guc_to_gt(slpc_to_guc(slpc))->i915;
+	struct slpc_shared_data *data;
+	struct slpc_platform_info *platform_info;
+	struct slpc_task_state_data *task_state_data;
+	intel_wakeref_t wakeref;
+	int ret = 0;
+
+	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+
+	if (slpc_read_task_state(slpc)) {
+		ret = -EIO;
+		goto done;
+	}
+
+	GEM_BUG_ON(!slpc->vma);
+
+	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
+	data = slpc->vaddr;
+
+	platform_info = &data->platform_info;
+	task_state_data = &data->task_state_data;
+
+	drm_printf(p, "SLPC state: %s\n", slpc_state_stringify(data->global_state));
+	drm_printf(p, "\tgtperf task active: %d\n",
+		   task_state_data->gtperf_task_active);
+	drm_printf(p, "\tdcc task active: %d\n",
+		   task_state_data->dcc_task_active);
+	drm_printf(p, "\tin dcc: %d\n",
+		   task_state_data->in_dcc);
+	drm_printf(p, "\tfreq switch active: %d\n",
+		   task_state_data->freq_switch_active);
+	drm_printf(p, "\tibc enabled: %d\n",
+		   task_state_data->ibc_enabled);
+	drm_printf(p, "\tibc active: %d\n",
+		   task_state_data->ibc_active);
+	drm_printf(p, "\tpg1 enabled: %s\n",
+		   yesno(task_state_data->pg1_enabled));
+	drm_printf(p, "\tpg1 active: %s\n",
+		   yesno(task_state_data->pg1_active));
+	drm_printf(p, "\tmax freq: %dMHz\n",
+		   DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
+				     GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
+	drm_printf(p, "\tmin freq: %dMHz\n",
+		   DIV_ROUND_CLOSEST(data->task_state_data.min_unslice_freq *
+				     GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
+
+done:
+	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+	return ret;
+}
+
 void intel_guc_slpc_fini(struct intel_guc_slpc *slpc)
 {
 	if (!slpc->vma)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
index 2cb830cdacb5..cd12c5f19f4b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
@@ -10,6 +10,8 @@
 #include <linux/mutex.h>
 #include "intel_guc_slpc_fwif.h"
 
+struct drm_printer;
+
 struct intel_guc_slpc {
 	/*Protects access to vma and SLPC actions */
 	struct i915_vma *vma;
@@ -38,5 +40,6 @@ int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val);
 int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val);
 int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val);
 int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val);
+int intel_guc_slpc_info(struct intel_guc_slpc *slpc, struct drm_printer *p);
 
 #endif
-- 
2.25.0

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 11/16] drm/i915/guc/slpc: Enable ARAT timer interrupt
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (9 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 10/16] drm/i915/guc/slpc: Add debugfs for slpc info Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 12/16] drm/i915/guc/slpc: Cache platform frequency limits for slpc Vinay Belgaumkar
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

This interrupt is normally enabled during RPS initialization and
now needs to be enabled by the SLPC code instead. It allows ARAT
timer expiry interrupts to be forwarded to GuC.
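
The enabling itself is a single unmask of the must-be-zero bit in the
PM interrupt mask register:

	intel_uncore_rmw(gt->uncore, GEN6_PMINTRMSK, ARAT_EXPIRED_INTRMSK, 0);

which is the same bit the host RPS path clears when legacy Turbo owns
that register.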

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 16 ++++++++++++++++
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h |  2 ++
 drivers/gpu/drm/i915/gt/uc/intel_uc.c       |  8 ++++++++
 3 files changed, 26 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
index d179ba14ece6..d32274cd1db7 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
@@ -370,6 +370,20 @@ int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val)
 	return ret;
 }
 
+void intel_guc_pm_intrmsk_enable(struct intel_gt *gt)
+{
+	u32 pm_intrmsk_mbz = 0;
+
+	/* Allow GuC to receive ARAT timer expiry event.
+	 * This interrupt register is setup by RPS code
+	 * when host based Turbo is enabled.
+	 */
+	pm_intrmsk_mbz |= ARAT_EXPIRED_INTRMSK;
+
+	intel_uncore_rmw(gt->uncore,
+			   GEN6_PMINTRMSK, pm_intrmsk_mbz, 0);
+}
+
 /*
  * intel_guc_slpc_enable() - Start SLPC
  * @slpc: pointer to intel_guc_slpc.
@@ -417,6 +431,8 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
 
 	DRM_INFO("SLPC state: %s\n", get_slpc_state(slpc));
 
+	intel_guc_pm_intrmsk_enable(&i915->gt);
+
 	if (slpc_read_task_state(slpc))
 		drm_err(&i915->drm, "Unable to read task state data");
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
index cd12c5f19f4b..2af0c5eb8c9a 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
@@ -10,6 +10,7 @@
 #include <linux/mutex.h>
 #include "intel_guc_slpc_fwif.h"
 
+struct intel_gt;
 struct drm_printer;
 
 struct intel_guc_slpc {
@@ -41,5 +42,6 @@ int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val);
 int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val);
 int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val);
 int intel_guc_slpc_info(struct intel_guc_slpc *slpc, struct drm_printer *p);
+void intel_guc_pm_intrmsk_enable(struct intel_gt *gt);
 
 #endif
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 7b6c767d3eb0..823f8d3d8df7 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -655,6 +655,7 @@ void intel_uc_suspend(struct intel_uc *uc)
 static int __uc_resume(struct intel_uc *uc, bool enable_communication)
 {
 	struct intel_guc *guc = &uc->guc;
+	struct intel_gt *gt = guc_to_gt(guc);
 	int err;
 
 	if (!intel_guc_is_fw_running(guc))
@@ -666,6 +667,13 @@ static int __uc_resume(struct intel_uc *uc, bool enable_communication)
 	if (enable_communication)
 		guc_enable_communication(guc);
 
+	/* If we are only resuming GuC communication but not reloading
+	 * GuC, we need to ensure the ARAT timer interrupt is enabled
+	 * again. In case of GuC reload, it is enabled during slpc enable.
+	 */
+	if (enable_communication && intel_uc_uses_guc_slpc(uc))
+		intel_guc_pm_intrmsk_enable(gt);
+
 	err = intel_guc_resume(guc);
 	if (err) {
 		DRM_DEBUG_DRIVER("Failed to resume GuC, err=%d", err);
-- 
2.25.0

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 12/16] drm/i915/guc/slpc: Cache platform frequency limits for slpc
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (10 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 11/16] drm/i915/guc/slpc: Enable ARAT timer interrupt Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10 18:15   ` Michal Wajdeczko
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 13/16] drm/i915/guc/slpc: Update slpc to use platform min/max Vinay Belgaumkar
                   ` (6 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Cache the rp0, rp1 and rpn platform limits in the slpc structure
for range checking while setting min/max frequencies.

Also add "soft" limits which keep track of frequency changes
made from userspace. These are initially set to the platform min
and max.
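
The limits come from RP_STATE_CAP, decoded the same way the host RPS
code does at init; as a sketch:

	u32 caps = intel_uncore_read(gt->uncore, GEN6_RP_STATE_CAP);

	slpc->rp0_freq = ((caps >>  0) & 0xff) * GT_FREQUENCY_MULTIPLIER; /* fused max */
	slpc->rp1_freq = ((caps >>  8) & 0xff) * GT_FREQUENCY_MULTIPLIER; /* efficient */
	slpc->min_freq = ((caps >> 16) & 0xff) * GT_FREQUENCY_MULTIPLIER; /* fused min (RPn) */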

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 41 +++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
index d32274cd1db7..6e978f27b7a6 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
@@ -86,6 +86,9 @@ static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
 		return err;
 	}
 
+	slpc->max_freq_softlimit = 0;
+	slpc->min_freq_softlimit = 0;
+
 	return err;
 }
 
@@ -384,6 +387,29 @@ void intel_guc_pm_intrmsk_enable(struct intel_gt *gt)
 			   GEN6_PMINTRMSK, pm_intrmsk_mbz, 0);
 }
 
+static int intel_guc_slpc_set_softlimits(struct intel_guc_slpc *slpc)
+{
+	int ret = 0;
+
+	/* Softlimits are initially equivalent to platform limits
+	 * unless they have deviated from defaults, in which case,
+	 * we retain the values and set min/max accordingly.
+	 */
+	if (!slpc->max_freq_softlimit)
+		slpc->max_freq_softlimit = slpc->rp0_freq;
+	else if (slpc->max_freq_softlimit != slpc->rp0_freq)
+		ret = intel_guc_slpc_set_max_freq(slpc,
+					slpc->max_freq_softlimit);
+
+	if (!slpc->min_freq_softlimit)
+		slpc->min_freq_softlimit = slpc->min_freq;
+	else if (slpc->min_freq_softlimit != slpc->min_freq)
+		ret = intel_guc_slpc_set_min_freq(slpc,
+					slpc->min_freq_softlimit);
+
+	return ret;
+}
+
 /*
  * intel_guc_slpc_enable() - Start SLPC
  * @slpc: pointer to intel_guc_slpc.
@@ -402,6 +428,7 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
 	struct drm_i915_private *i915 = slpc_to_i915(slpc);
 	struct slpc_shared_data *data;
 	int ret;
+	u32 rp_state_cap;
 
 	GEM_BUG_ON(!slpc->vma);
 
@@ -445,6 +472,20 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
 			DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
 				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
 
+	rp_state_cap = intel_uncore_read(i915->gt.uncore, GEN6_RP_STATE_CAP);
+
+	slpc->rp0_freq = ((rp_state_cap >> 0) & 0xff) * GT_FREQUENCY_MULTIPLIER;
+	slpc->min_freq = ((rp_state_cap >> 16) & 0xff) * GT_FREQUENCY_MULTIPLIER;
+	slpc->rp1_freq = ((rp_state_cap >> 8) & 0xff) * GT_FREQUENCY_MULTIPLIER;
+
+	if (intel_guc_slpc_set_softlimits(slpc))
+		drm_err(&i915->drm, "Unable to set softlimits\n");
+
+	drm_info(&i915->drm,
+		 "Platform fused frequency values - min: %u MHz, max: %u MHz\n",
+		 slpc->min_freq,
+		 slpc->rp0_freq);
+
 	return 0;
 }
 
-- 
2.25.0

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 13/16] drm/i915/guc/slpc: Update slpc to use platform min/max
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (11 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 12/16] drm/i915/guc/slpc: Cache platform frequency limits for slpc Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc Vinay Belgaumkar
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

SLPC requests the efficient frequency by default instead of the
minimum. It provides a flag to turn this off; set that flag to
maintain the original semantics so that existing tests do not
fail. SLPC can also request frequencies much higher than the
platform max, so clamp it to the fused RP0 as well for the same
reason.
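
In practice this is a couple of overrides applied right after the SLPC
reset event, roughly:

	slpc_set_param(slpc, SLPC_IGNORE_EFFICIENT_FREQUENCY, 1);
	slpc_set_param(slpc, SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
		       slpc->min_freq);
	slpc_set_param(slpc, SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ,
		       slpc->rp0_freq);

with a matching unset path to hand control back to SLPC defaults.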

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 55 +++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
index 6e978f27b7a6..db575443ffb2 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
@@ -109,6 +109,17 @@ static int slpc_send(struct intel_guc_slpc *slpc,
 	return intel_guc_send(guc, action, in_len);
 }
 
+static int host2guc_slpc_unset_param(struct intel_guc_slpc *slpc,
+				   u32 id)
+{
+	struct slpc_event_input data = {0};
+
+	data.header.value = SLPC_EVENT(SLPC_EVENT_PARAMETER_UNSET, 1);
+	data.args[0] = id;
+
+	return slpc_send(slpc, &data, 3);
+}
+
 static int host2guc_slpc_set_param(struct intel_guc_slpc *slpc,
 				   u32 id, u32 value)
 {
@@ -150,6 +161,20 @@ static int host2guc_slpc_query_task_state(struct intel_guc_slpc *slpc)
 	return slpc_send(slpc, &data, 4);
 }
 
+static int slpc_unset_param(struct intel_guc_slpc *slpc, u32 id)
+{
+	struct drm_i915_private *i915 = slpc_to_i915(slpc);
+
+	GEM_BUG_ON(id >= SLPC_MAX_PARAM);
+
+	if (host2guc_slpc_unset_param(slpc, id)) {
+		drm_err(&i915->drm, "Unable to unset param %x", id);
+		return -EIO;
+	}
+
+	return 0;
+}
+
 static int slpc_set_param(struct intel_guc_slpc *slpc, u32 id, u32 value)
 {
 	struct drm_i915_private *i915 = slpc_to_i915(slpc);
@@ -410,6 +435,32 @@ static int intel_guc_slpc_set_softlimits(struct intel_guc_slpc *slpc)
 	return ret;
 }
 
+static void intel_guc_slpc_ignore_eff_freq(struct intel_guc_slpc *slpc, bool ignore)
+{
+	if (ignore) {
+		/* A failure here does not affect the algorithm in a fatal way */
+		slpc_set_param(slpc,
+		   SLPC_IGNORE_EFFICIENT_FREQUENCY,
+		   ignore);
+		slpc_set_param(slpc,
+		   SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
+		   slpc->min_freq);
+	} else {
+		slpc_unset_param(slpc,
+		   SLPC_IGNORE_EFFICIENT_FREQUENCY);
+		slpc_unset_param(slpc,
+		   SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ);
+	}
+}
+
+static void intel_guc_slpc_use_fused_rp0(struct intel_guc_slpc *slpc)
+{
+	/* Force slpc to use platform rp0 */
+	slpc_set_param(slpc,
+	   SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ,
+	   slpc->rp0_freq);
+}
+
 /*
  * intel_guc_slpc_enable() - Start SLPC
  * @slpc: pointer to intel_guc_slpc.
@@ -478,6 +529,10 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
 	slpc->min_freq = ((rp_state_cap >> 16) & 0xff) * GT_FREQUENCY_MULTIPLIER;
 	slpc->rp1_freq = ((rp_state_cap >> 8) & 0xff) * GT_FREQUENCY_MULTIPLIER;
 
+	/* Ignore efficient freq and set min/max to platform min/max */
+	intel_guc_slpc_ignore_eff_freq(slpc, true);
+	intel_guc_slpc_use_fused_rp0(slpc);
+
 	if (intel_guc_slpc_set_softlimits(slpc))
 		drm_err(&i915->drm, "Unable to set softlimits");
 
-- 
2.25.0

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (12 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 13/16] drm/i915/guc/slpc: Update slpc to use platform min/max Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10  6:18   ` kernel test robot
                     ` (4 more replies)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 15/16] drm/i915/guc/slpc: slpc selftest Vinay Belgaumkar
                   ` (4 subsequent siblings)
  18 siblings, 5 replies; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Update the get/set min/max freq hooks to work for the SLPC
case as well. Consolidate the helpers for requested/min/max
frequency get/set into intel_rps, where the proper action can
be taken depending on whether SLPC is enabled.
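
The user-visible interface is unchanged; with SLPC enabled a write
such as

	echo 1100 > /sys/class/drm/card0/gt_max_freq_mhz

now lands in intel_guc_slpc_set_max_freq() instead of the host RPS
path, and the reads report the cached SLPC softlimits.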

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Signed-off-by: Sujaritha Sundaresan <sujaritha.sundaresan@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_rps.c | 135 ++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gt/intel_rps.h |   5 ++
 drivers/gpu/drm/i915/i915_pmu.c     |   2 +-
 drivers/gpu/drm/i915/i915_reg.h     |   2 +
 drivers/gpu/drm/i915/i915_sysfs.c   |  71 +++------------
 5 files changed, 154 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index e858eeb2c59d..88ffc5d90730 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -37,6 +37,12 @@ static struct intel_uncore *rps_to_uncore(struct intel_rps *rps)
 	return rps_to_gt(rps)->uncore;
 }
 
+static struct intel_guc_slpc *rps_to_slpc(struct intel_rps *rps)
+{
+	struct intel_gt *gt = rps_to_gt(rps);
+
+	return &gt->uc.guc.slpc;
+}
+
 static bool rps_uses_slpc(struct intel_rps *rps)
 {
 	struct intel_gt *gt = rps_to_gt(rps);
@@ -1960,6 +1966,135 @@ u32 intel_rps_read_actual_frequency(struct intel_rps *rps)
 	return freq;
 }
 
+u32 intel_rps_read_punit_req(struct intel_rps *rps)
+{
+	struct intel_uncore *uncore = rps_to_uncore(rps);
+
+	return intel_uncore_read(uncore, GEN6_RPNSWREQ);
+}
+
+u32 intel_rps_get_req(struct intel_rps *rps, u32 pureq)
+{
+	u32 req = pureq >> GEN9_SW_REQ_UNSLICE_RATIO_SHIFT;
+
+	return req;
+}
+
+u32 intel_rps_read_punit_req_frequency(struct intel_rps *rps)
+{
+	u32 freq = intel_rps_get_req(rps, intel_rps_read_punit_req(rps));
+
+	return intel_gpu_freq(rps, freq);
+}
+
+u32 intel_rps_get_requested_frequency(struct intel_rps *rps)
+{
+	if (rps_uses_slpc(rps))
+		return intel_rps_read_punit_req_frequency(rps);
+	else
+		return intel_gpu_freq(rps, rps->cur_freq);
+}
+
+u32 intel_rps_get_max_frequency(struct intel_rps *rps)
+{
+	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
+
+	if (rps_uses_slpc(rps))
+		return slpc->max_freq_softlimit;
+	else
+		return intel_gpu_freq(rps, rps->max_freq_softlimit);
+}
+
+int intel_rps_set_max_frequency(struct intel_rps *rps, u32 val)
+{
+	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
+	int ret = 0;
+
+	if (rps_uses_slpc(rps))
+		return intel_guc_slpc_set_max_freq(slpc, val);
+
+	mutex_lock(&rps->lock);
+
+	val = intel_freq_opcode(rps, val);
+	if (val < rps->min_freq ||
+	    val > rps->max_freq ||
+	    val < rps->min_freq_softlimit) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	if (val > rps->rp0_freq)
+		DRM_DEBUG("User requested overclocking to %d\n",
+			  intel_gpu_freq(rps, val));
+
+	rps->max_freq_softlimit = val;
+
+	val = clamp_t(int, rps->cur_freq,
+		      rps->min_freq_softlimit,
+		      rps->max_freq_softlimit);
+
+	/*
+	 * We still need *_set_rps to process the new max_delay and
+	 * update the interrupt limits and PMINTRMSK even though
+	 * frequency request may be unchanged.
+	 */
+	intel_rps_set(rps, val);
+
+unlock:
+	mutex_unlock(&rps->lock);
+
+	return ret;
+}
+
+u32 intel_rps_get_min_frequency(struct intel_rps *rps)
+{
+	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
+
+	if (rps_uses_slpc(rps))
+		return slpc->min_freq_softlimit;
+	else
+		return intel_gpu_freq(rps, rps->min_freq_softlimit);
+}
+
+int intel_rps_set_min_frequency(struct intel_rps *rps, u32 val)
+{
+	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
+	int ret = 0;
+
+	if (rps_uses_slpc(rps))
+		return intel_guc_slpc_set_min_freq(slpc, val);
+
+	mutex_lock(&rps->lock);
+
+	val = intel_freq_opcode(rps, val);
+	if (val < rps->min_freq ||
+	    val > rps->max_freq ||
+	    val > rps->max_freq_softlimit) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	rps->min_freq_softlimit = val;
+
+	val = clamp_t(int, rps->cur_freq,
+		      rps->min_freq_softlimit,
+		      rps->max_freq_softlimit);
+
+	/*
+	 * We still need *_set_rps to process the new min_delay and
+	 * update the interrupt limits and PMINTRMSK even though
+	 * frequency request may be unchanged.
+	 */
+	intel_rps_set(rps, val);
+
+unlock:
+	mutex_unlock(&rps->lock);
+
+	return ret;
+}
+
 /* External interface for intel_ips.ko */
 
 static struct drm_i915_private __rcu *ips_mchdev;
diff --git a/drivers/gpu/drm/i915/gt/intel_rps.h b/drivers/gpu/drm/i915/gt/intel_rps.h
index 1d2cfc98b510..9a09ff5ebf64 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.h
+++ b/drivers/gpu/drm/i915/gt/intel_rps.h
@@ -31,6 +31,11 @@ int intel_gpu_freq(struct intel_rps *rps, int val);
 int intel_freq_opcode(struct intel_rps *rps, int val);
 u32 intel_rps_get_cagf(struct intel_rps *rps, u32 rpstat1);
 u32 intel_rps_read_actual_frequency(struct intel_rps *rps);
+u32 intel_rps_get_requested_frequency(struct intel_rps *rps);
+u32 intel_rps_get_min_frequency(struct intel_rps *rps);
+int intel_rps_set_min_frequency(struct intel_rps *rps, u32 val);
+u32 intel_rps_get_max_frequency(struct intel_rps *rps);
+int intel_rps_set_max_frequency(struct intel_rps *rps, u32 val);
 
 void gen5_rps_irq_handler(struct intel_rps *rps);
 void gen6_rps_irq_handler(struct intel_rps *rps, u32 pm_iir);
diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index 34d37d46a126..a896bec18255 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -407,7 +407,7 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns)
 
 	if (pmu->enable & config_mask(I915_PMU_REQUESTED_FREQUENCY)) {
 		add_sample_mult(&pmu->sample[__I915_SAMPLE_FREQ_REQ],
-				intel_gpu_freq(rps, rps->cur_freq),
+				intel_rps_get_requested_frequency(rps),
 				period_ns / 1000);
 	}
 
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 7d9e90aa3ec0..8ab3c2f8f8e4 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -9195,6 +9195,8 @@ enum {
 #define   GEN9_FREQUENCY(x)			((x) << 23)
 #define   GEN6_OFFSET(x)			((x) << 19)
 #define   GEN6_AGGRESSIVE_TURBO			(0 << 15)
+#define   GEN9_SW_REQ_UNSLICE_RATIO_SHIFT	23
+
 #define GEN6_RC_VIDEO_FREQ			_MMIO(0xA00C)
 #define GEN6_RC_CONTROL				_MMIO(0xA090)
 #define   GEN6_RC_CTL_RC6pp_ENABLE		(1 << 16)
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index 873bf996ceb5..f2eee8491b19 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -272,7 +272,7 @@ static ssize_t gt_cur_freq_mhz_show(struct device *kdev,
 	struct drm_i915_private *i915 = kdev_minor_to_i915(kdev);
 	struct intel_rps *rps = &i915->gt.rps;
 
-	return sysfs_emit(buf, "%d\n", intel_gpu_freq(rps, rps->cur_freq));
+	return sysfs_emit(buf, "%d\n", intel_rps_get_requested_frequency(rps));
 }
 
 static ssize_t gt_boost_freq_mhz_show(struct device *kdev, struct device_attribute *attr, char *buf)
@@ -326,9 +326,10 @@ static ssize_t vlv_rpe_freq_mhz_show(struct device *kdev,
 static ssize_t gt_max_freq_mhz_show(struct device *kdev, struct device_attribute *attr, char *buf)
 {
 	struct drm_i915_private *dev_priv = kdev_minor_to_i915(kdev);
-	struct intel_rps *rps = &dev_priv->gt.rps;
+	struct intel_gt *gt = &dev_priv->gt;
+	struct intel_rps *rps = &gt->rps;
 
-	return sysfs_emit(buf, "%d\n", intel_gpu_freq(rps, rps->max_freq_softlimit));
+	return sysfs_emit(buf, "%d\n", intel_rps_get_max_frequency(rps));
 }
 
 static ssize_t gt_max_freq_mhz_store(struct device *kdev,
@@ -336,7 +337,8 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
 				     const char *buf, size_t count)
 {
 	struct drm_i915_private *dev_priv = kdev_minor_to_i915(kdev);
-	struct intel_rps *rps = &dev_priv->gt.rps;
+	struct intel_gt *gt = &dev_priv->gt;
+	struct intel_rps *rps = &gt->rps;
 	ssize_t ret;
 	u32 val;
 
@@ -344,35 +346,7 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
 	if (ret)
 		return ret;
 
-	mutex_lock(&rps->lock);
-
-	val = intel_freq_opcode(rps, val);
-	if (val < rps->min_freq ||
-	    val > rps->max_freq ||
-	    val < rps->min_freq_softlimit) {
-		ret = -EINVAL;
-		goto unlock;
-	}
-
-	if (val > rps->rp0_freq)
-		DRM_DEBUG("User requested overclocking to %d\n",
-			  intel_gpu_freq(rps, val));
-
-	rps->max_freq_softlimit = val;
-
-	val = clamp_t(int, rps->cur_freq,
-		      rps->min_freq_softlimit,
-		      rps->max_freq_softlimit);
-
-	/*
-	 * We still need *_set_rps to process the new max_delay and
-	 * update the interrupt limits and PMINTRMSK even though
-	 * frequency request may be unchanged.
-	 */
-	intel_rps_set(rps, val);
-
-unlock:
-	mutex_unlock(&rps->lock);
+	ret = intel_rps_set_max_frequency(rps, val);
 
 	return ret ?: count;
 }
@@ -380,9 +354,10 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
 static ssize_t gt_min_freq_mhz_show(struct device *kdev, struct device_attribute *attr, char *buf)
 {
 	struct drm_i915_private *dev_priv = kdev_minor_to_i915(kdev);
-	struct intel_rps *rps = &dev_priv->gt.rps;
+	struct intel_gt *gt = &dev_priv->gt;
+	struct intel_rps *rps = &gt->rps;
 
-	return sysfs_emit(buf, "%d\n", intel_gpu_freq(rps, rps->min_freq_softlimit));
+	return sysfs_emit(buf, "%d\n", intel_rps_get_min_frequency(rps));
 }
 
 static ssize_t gt_min_freq_mhz_store(struct device *kdev,
@@ -398,31 +373,7 @@ static ssize_t gt_min_freq_mhz_store(struct device *kdev,
 	if (ret)
 		return ret;
 
-	mutex_lock(&rps->lock);
-
-	val = intel_freq_opcode(rps, val);
-	if (val < rps->min_freq ||
-	    val > rps->max_freq ||
-	    val > rps->max_freq_softlimit) {
-		ret = -EINVAL;
-		goto unlock;
-	}
-
-	rps->min_freq_softlimit = val;
-
-	val = clamp_t(int, rps->cur_freq,
-		      rps->min_freq_softlimit,
-		      rps->max_freq_softlimit);
-
-	/*
-	 * We still need *_set_rps to process the new min_delay and
-	 * update the interrupt limits and PMINTRMSK even though
-	 * frequency request may be unchanged.
-	 */
-	intel_rps_set(rps, val);
-
-unlock:
-	mutex_unlock(&rps->lock);
+	ret = intel_rps_set_min_frequency(rps, val);
 
 	return ret ?: count;
 }
-- 
2.25.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 15/16] drm/i915/guc/slpc: slpc selftest
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (13 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10 18:29   ` Michal Wajdeczko
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 16/16] drm/i915/guc/rc: Setup and enable GUCRC feature Vinay Belgaumkar
                   ` (3 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

Add tests that exercise the SLPC get/set frequency interfaces.

Clamp_max sets the max frequency to several levels and checks that
SLPC requests a frequency lower than or equal to each level.

Clamp_min sets the min frequency to several levels and checks that
SLPC requests a frequency higher than or equal to each level.
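
In outline, the clamp-min case sweeps the SLPC min frequency upward while
a spinner keeps an engine busy, then compares the PUnit-requested frequency
against that floor. A condensed sketch of the loop, taken from
live_slpc_clamp_min() below with error handling trimmed:

	step = (slpc_max_freq - slpc_min_freq) / NUM_STEPS;
	for (min_freq = slpc_min_freq; min_freq < slpc_max_freq; min_freq += step) {
		intel_guc_slpc_set_min_freq(slpc, min_freq);
		delay_for_h2g();	/* let the H2G request land */

		/* keep the engine busy so GuC has a reason to ramp up */
		rq = igt_spinner_create_request(&spin, engine->kernel_context, MI_NOOP);
		i915_request_add(rq);
		igt_wait_for_spinner(&spin, rq);
		delay_for_h2g();

		/* GuC requests in multiples of 50/3 MHz, so allow that much slack */
		req_freq = intel_rps_read_punit_req_frequency(rps);
		if (req_freq < min_freq - 50 / 3)
			err = -EINVAL;

		igt_spinner_end(&spin);
	}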

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_rps.c           |   1 +
 drivers/gpu/drm/i915/gt/selftest_slpc.c       | 333 ++++++++++++++++++
 drivers/gpu/drm/i915/gt/selftest_slpc.h       |  12 +
 .../drm/i915/selftests/i915_live_selftests.h  |   1 +
 4 files changed, 347 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/gt/selftest_slpc.c
 create mode 100644 drivers/gpu/drm/i915/gt/selftest_slpc.h

diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index 88ffc5d90730..16ac2e840881 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -2288,4 +2288,5 @@ EXPORT_SYMBOL_GPL(i915_gpu_turbo_disable);
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_rps.c"
+#include "selftest_slpc.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gt/selftest_slpc.c b/drivers/gpu/drm/i915/gt/selftest_slpc.c
new file mode 100644
index 000000000000..f440c1cb2afa
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/selftest_slpc.c
@@ -0,0 +1,333 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+#include "selftest_slpc.h"
+#include "selftest_rps.h"
+
+#include <linux/pm_qos.h>
+#include <linux/sort.h>
+
+#include "intel_engine_heartbeat.h"
+#include "intel_engine_pm.h"
+#include "intel_gpu_commands.h"
+#include "intel_gt_clock_utils.h"
+#include "intel_gt_pm.h"
+#include "intel_rc6.h"
+#include "selftest_engine_heartbeat.h"
+#include "intel_rps.h"
+#include "selftests/igt_flush_test.h"
+#include "selftests/igt_spinner.h"
+
+#define NUM_STEPS 5
+#define H2G_DELAY 50000
+#define delay_for_h2g() usleep_range(H2G_DELAY, H2G_DELAY + 10000)
+
+static int set_min_freq(struct intel_guc_slpc *slpc, int freq)
+{
+	int ret;
+	ret = intel_guc_slpc_set_min_freq(slpc, freq);
+	if (ret) {
+		pr_err("Could not set min frequency to [%d]\n", freq);
+		return ret;
+	} else {
+		/* Delay to ensure h2g completes */
+		delay_for_h2g();
+	}
+
+	return ret;
+}
+
+static int set_max_freq(struct intel_guc_slpc *slpc, int freq)
+{
+	int ret;
+	ret = intel_guc_slpc_set_max_freq(slpc, freq);
+	if (ret) {
+		pr_err("Could not set maximum frequency [%d]\n",
+			freq);
+		return ret;
+	} else {
+		/* Delay to ensure h2g completes */
+		delay_for_h2g();
+	}
+
+	return ret;
+}
+
+int live_slpc_clamp_min(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct intel_gt *gt = &i915->gt;
+	struct intel_guc_slpc *slpc;
+	struct intel_rps *rps;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	struct igt_spinner spin;
+	int err = 0;
+	u32 slpc_min_freq, slpc_max_freq;
+
+
+	slpc = &gt->uc.guc.slpc;
+	rps = &gt->rps;
+
+	if (!intel_uc_uses_guc_slpc(&gt->uc))
+		return 0;
+
+	if (igt_spinner_init(&spin, gt))
+		return -ENOMEM;
+
+	if (intel_guc_slpc_get_max_freq(slpc, &slpc_max_freq)) {
+		pr_err("Could not get SLPC max freq");
+		return -EIO;
+	}
+
+	if (intel_guc_slpc_get_min_freq(slpc, &slpc_min_freq)) {
+		pr_err("Could not get SLPC min freq");
+		return -EIO;
+	}
+
+	if (slpc_min_freq == slpc_max_freq) {
+		pr_err("Min/Max are fused to the same value");
+		return -EINVAL;
+	}
+
+	intel_gt_pm_wait_for_idle(gt);
+	intel_gt_pm_get(gt);
+	for_each_engine(engine, gt, id) {
+		struct i915_request *rq;
+		u32 step, min_freq, req_freq;
+		u32 act_freq, max_act_freq;
+
+		if (!intel_engine_can_store_dword(engine))
+			continue;
+
+		/* Go from min to max in 5 steps */
+		step = (slpc_max_freq - slpc_min_freq)/NUM_STEPS;
+		max_act_freq = slpc_min_freq;
+		for (min_freq = slpc_min_freq; min_freq < slpc_max_freq; min_freq+=step)
+		{
+			err = set_min_freq(slpc, min_freq);
+			if (err)
+				break;
+
+			st_engine_heartbeat_disable(engine);
+
+
+			rq = igt_spinner_create_request(&spin,
+					engine->kernel_context,
+					MI_NOOP);
+			if (IS_ERR(rq)) {
+				err = PTR_ERR(rq);
+				st_engine_heartbeat_enable(engine);
+				break;
+			}
+
+			i915_request_add(rq);
+
+			if (!igt_wait_for_spinner(&spin, rq)) {
+				pr_err("%s: Spinner did not start\n",
+					engine->name);
+				igt_spinner_end(&spin);
+				st_engine_heartbeat_enable(engine);
+				intel_gt_set_wedged(engine->gt);
+				err = -EIO;
+				break;
+			}
+
+			/* Wait for GuC to detect busyness and raise
+			 * requested frequency if necessary */
+			delay_for_h2g();
+
+			req_freq = intel_rps_read_punit_req_frequency(rps);
+
+			/* GuC requests freq in multiples of 50/3 MHz */
+			if (req_freq < (min_freq - 50/3)) {
+				pr_err("SWReq is %d, should be at least %d", req_freq,
+					min_freq - 50/3);
+				igt_spinner_end(&spin);
+				st_engine_heartbeat_enable(engine);
+				err = -EINVAL;
+				break;
+			}
+
+			act_freq =  intel_rps_read_actual_frequency(rps);
+			if (act_freq > max_act_freq)
+				max_act_freq = act_freq;
+
+			igt_spinner_end(&spin);
+			st_engine_heartbeat_enable(engine);
+		}
+
+		pr_info("Max actual frequency for %s was %d",
+				engine->name, max_act_freq);
+
+		/* Actual frequency should rise above min */
+		if (max_act_freq == slpc_min_freq) {
+			pr_err("Actual freq did not rise above min");
+			err = -EINVAL;
+		}
+
+		if (err)
+			break;
+	}
+
+	/* Restore min/max frequencies */
+	set_max_freq(slpc, slpc_max_freq);
+	set_min_freq(slpc, slpc_min_freq);
+
+	if (igt_flush_test(gt->i915))
+		err = -EIO;
+
+	intel_gt_pm_put(gt);
+	igt_spinner_fini(&spin);
+	intel_gt_pm_wait_for_idle(gt);
+
+	return err;
+}
+
+int live_slpc_clamp_max(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct intel_gt *gt = &i915->gt;
+	struct intel_guc_slpc *slpc;
+	struct intel_rps *rps;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	struct igt_spinner spin;
+	int err = 0;
+	u32 slpc_min_freq, slpc_max_freq;
+
+	slpc = &gt->uc.guc.slpc;
+	rps = &gt->rps;
+
+	if (!intel_uc_uses_guc_slpc(&gt->uc))
+		return 0;
+
+	if (igt_spinner_init(&spin, gt))
+		return -ENOMEM;
+
+	if (intel_guc_slpc_get_max_freq(slpc, &slpc_max_freq)) {
+		pr_err("Could not get SLPC max freq");
+		return -EIO;
+	}
+
+	if (intel_guc_slpc_get_min_freq(slpc, &slpc_min_freq)) {
+		pr_err("Could not get SLPC min freq");
+		return -EIO;
+	}
+
+	if (slpc_min_freq == slpc_max_freq) {
+		pr_err("Min/Max are fused to the same value");
+		return -EINVAL;
+	}
+
+	intel_gt_pm_wait_for_idle(gt);
+	intel_gt_pm_get(gt);
+	for_each_engine(engine, gt, id) {
+		struct i915_request *rq;
+		u32 max_freq, req_freq;
+		u32 act_freq, max_act_freq;
+		u32 step;
+
+		if (!intel_engine_can_store_dword(engine))
+			continue;
+
+		/* Go from max to min in 5 steps */
+		step = (slpc_max_freq - slpc_min_freq)/NUM_STEPS;
+		max_act_freq = slpc_min_freq;
+		for (max_freq = slpc_max_freq; max_freq > slpc_min_freq; max_freq-=step)
+		{
+			err = set_max_freq(slpc, max_freq);
+			if (err)
+				break;
+
+			st_engine_heartbeat_disable(engine);
+
+			rq = igt_spinner_create_request(&spin,
+						engine->kernel_context,
+						MI_NOOP);
+			if (IS_ERR(rq)) {
+				st_engine_heartbeat_enable(engine);
+				err = PTR_ERR(rq);
+				break;
+			}
+
+			i915_request_add(rq);
+
+			if (!igt_wait_for_spinner(&spin, rq)) {
+				pr_err("%s: SLPC spinner did not start\n",
+				       engine->name);
+				igt_spinner_end(&spin);
+				st_engine_heartbeat_enable(engine);
+				intel_gt_set_wedged(engine->gt);
+				err = -EIO;
+				break;
+			}
+
+			delay_for_h2g();
+
+			/* Verify that SWREQ indeed was set to specific value */
+			req_freq = intel_rps_read_punit_req_frequency(rps);
+
+			/* GuC requests freq in multiples of 50/3 MHz */
+			if (req_freq > (max_freq + 50/3)) {
+				pr_err("SWReq is %d, should be at most %d", req_freq,
+					max_freq + 50/3);
+				igt_spinner_end(&spin);
+				st_engine_heartbeat_enable(engine);
+				err = -EINVAL;
+				break;
+			}
+
+			act_freq =  intel_rps_read_actual_frequency(rps);
+			if (act_freq > max_act_freq)
+				max_act_freq = act_freq;
+
+			st_engine_heartbeat_enable(engine);
+			igt_spinner_end(&spin);
+
+			if (err)
+				break;
+		}
+
+		pr_info("Max actual frequency for %s was %d",
+				engine->name, max_act_freq);
+
+		/* Actual frequency should rise above min */
+		if (max_act_freq == slpc_min_freq) {
+			pr_err("Actual freq did not rise above min");
+			err = -EINVAL;
+		}
+
+		if (igt_flush_test(gt->i915)) {
+			err = -EIO;
+			break;
+		}
+
+		if (err)
+			break;
+	}
+
+	/* Restore min/max freq */
+	set_max_freq(slpc, slpc_max_freq);
+	set_min_freq(slpc, slpc_min_freq);
+
+	intel_gt_pm_put(gt);
+	igt_spinner_fini(&spin);
+	intel_gt_pm_wait_for_idle(gt);
+
+	return err;
+}
+
+int intel_slpc_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(live_slpc_clamp_max),
+		SUBTEST(live_slpc_clamp_min),
+	};
+
+	if (intel_gt_is_wedged(&i915->gt))
+		return 0;
+
+	return i915_live_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/gt/selftest_slpc.h b/drivers/gpu/drm/i915/gt/selftest_slpc.h
new file mode 100644
index 000000000000..8dfb40916a8c
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/selftest_slpc.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#ifndef SELFTEST_SLPC_H
+#define SELFTEST_SLPC_H
+
+int live_slpc_clamp_max(void *arg);
+int live_slpc_clamp_min(void *arg);
+
+#endif /* SELFTEST_SLPC_H */
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index e2fd1b61af71..1746a56dda06 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -47,5 +47,6 @@ selftest(hangcheck, intel_hangcheck_live_selftests)
 selftest(execlists, intel_execlists_live_selftests)
 selftest(ring_submission, intel_ring_submission_live_selftests)
 selftest(perf, i915_perf_live_selftests)
+selftest(slpc, intel_slpc_live_selftests)
 /* Here be dragons: keep last to run last! */
 selftest(late_gt_pm, intel_gt_pm_late_selftests)
-- 
2.25.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] [PATCH 16/16] drm/i915/guc/rc: Setup and enable GUCRC feature
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (14 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 15/16] drm/i915/guc/slpc: slpc selftest Vinay Belgaumkar
@ 2021-07-10  1:20 ` Vinay Belgaumkar
  2021-07-10 18:41   ` Michal Wajdeczko
  2021-07-10  1:40 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Enable GuC based power management features Patchwork
                   ` (2 subsequent siblings)
  18 siblings, 1 reply; 53+ messages in thread
From: Vinay Belgaumkar @ 2021-07-10  1:20 UTC (permalink / raw)
  To: intel-gfx, dri-devel

This feature hands over control of HW RC6 to the GuC, which
decides when to put the HW into RC6 based on its internal
busyness algorithms.

GUCRC requires GuC submission and is only supported on
Gen12+ for now.

When GUCRC is enabled, the host does not set the HW RC6 control
bits. Instead, an H2G message tells GuC to take over RC6. When
disabling RC6, another H2G message tells GuC to revert RC6
control back to the KMD.
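
The handoff itself is a single H2G action; roughly, mirroring
guc_action_control_gucrc() in the patch below:

	u32 action[] = {
		INTEL_GUC_ACTION_SETUP_PC_GUCRC,
		enable ? INTEL_GUCRC_FIRMWARE_CONTROL : INTEL_GUCRC_HOST_CONTROL,
	};

	ret = intel_guc_send(guc, action, ARRAY_SIZE(action));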

Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
---
 drivers/gpu/drm/i915/Makefile                 |  1 +
 drivers/gpu/drm/i915/gt/intel_rc6.c           | 22 ++++--
 .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h  |  6 ++
 drivers/gpu/drm/i915/gt/uc/intel_guc.c        |  1 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h        |  2 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c     | 79 +++++++++++++++++++
 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h     | 32 ++++++++
 drivers/gpu/drm/i915/gt/uc/intel_uc.h         |  2 +
 8 files changed, 140 insertions(+), 5 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index d8eac4468df9..3fc17f20d88e 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -186,6 +186,7 @@ i915-y += gt/uc/intel_uc.o \
 	  gt/uc/intel_guc_fw.o \
 	  gt/uc/intel_guc_log.o \
 	  gt/uc/intel_guc_log_debugfs.o \
+	  gt/uc/intel_guc_rc.o \
 	  gt/uc/intel_guc_slpc.o \
 	  gt/uc/intel_guc_submission.o \
 	  gt/uc/intel_huc.o \
diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c
index 259d7eb4e165..299fcf10b04b 100644
--- a/drivers/gpu/drm/i915/gt/intel_rc6.c
+++ b/drivers/gpu/drm/i915/gt/intel_rc6.c
@@ -98,11 +98,19 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
 	set(uncore, GEN9_MEDIA_PG_IDLE_HYSTERESIS, 60);
 	set(uncore, GEN9_RENDER_PG_IDLE_HYSTERESIS, 60);
 
-	/* 3a: Enable RC6 */
-	rc6->ctl_enable =
-		GEN6_RC_CTL_HW_ENABLE |
-		GEN6_RC_CTL_RC6_ENABLE |
-		GEN6_RC_CTL_EI_MODE(1);
+	/* 3a: Enable RC6
+	 *
+	 * With GUCRC, we do not enable bit 31 of RC_CTL,
+	 * thus allowing GuC to control RC6 entry/exit fully instead.
+	 * We will not set the HW ENABLE and EI bits
+	 */
+	if (!intel_guc_rc_enable(&gt->uc.guc))
+		rc6->ctl_enable = GEN6_RC_CTL_RC6_ENABLE;
+	else
+		rc6->ctl_enable =
+			GEN6_RC_CTL_HW_ENABLE |
+			GEN6_RC_CTL_RC6_ENABLE |
+			GEN6_RC_CTL_EI_MODE(1);
 
 	pg_enable =
 		GEN9_RENDER_PG_ENABLE |
@@ -513,6 +521,10 @@ static void __intel_rc6_disable(struct intel_rc6 *rc6)
 {
 	struct drm_i915_private *i915 = rc6_to_i915(rc6);
 	struct intel_uncore *uncore = rc6_to_uncore(rc6);
+	struct intel_gt *gt = rc6_to_gt(rc6);
+
+	/* Take control of RC6 back from GuC */
+	intel_guc_rc_disable(&gt->uc.guc);
 
 	intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
 	if (GRAPHICS_VER(i915) >= 9)
diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
index 596cf4b818e5..2ddb9cdc0a59 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
@@ -136,6 +136,7 @@ enum intel_guc_action {
 	INTEL_GUC_ACTION_CONTEXT_RESET_NOTIFICATION = 0x1008,
 	INTEL_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION = 0x1009,
 	INTEL_GUC_ACTION_SLPC_REQUEST = 0x3003,
+	INTEL_GUC_ACTION_SETUP_PC_GUCRC = 0x3004,
 	INTEL_GUC_ACTION_AUTHENTICATE_HUC = 0x4000,
 	INTEL_GUC_ACTION_REGISTER_CONTEXT = 0x4502,
 	INTEL_GUC_ACTION_DEREGISTER_CONTEXT = 0x4503,
@@ -146,6 +147,11 @@ enum intel_guc_action {
 	INTEL_GUC_ACTION_LIMIT
 };
 
+enum intel_guc_rc_options {
+	INTEL_GUCRC_HOST_CONTROL,
+	INTEL_GUCRC_FIRMWARE_CONTROL,
+};
+
 enum intel_guc_preempt_options {
 	INTEL_GUC_PREEMPT_OPTION_DROP_WORK_Q = 0x4,
 	INTEL_GUC_PREEMPT_OPTION_DROP_SUBMIT_Q = 0x8,
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index 82863a9bc8e8..0d55b24f7c67 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -158,6 +158,7 @@ void intel_guc_init_early(struct intel_guc *guc)
 	intel_guc_log_init_early(&guc->log);
 	intel_guc_submission_init_early(guc);
 	intel_guc_slpc_init_early(guc);
+	intel_guc_rc_init_early(guc);
 
 	mutex_init(&guc->send_mutex);
 	spin_lock_init(&guc->irq_lock);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 0dbbd9cf553f..592d52e5e93c 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -59,6 +59,8 @@ struct intel_guc {
 
 	bool submission_supported;
 	bool submission_selected;
+	bool rc_supported;
+	bool rc_selected;
 	bool slpc_supported;
 	bool slpc_selected;
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
new file mode 100644
index 000000000000..45b61432c56d
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
@@ -0,0 +1,79 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#include "intel_guc_rc.h"
+#include "gt/intel_gt.h"
+#include "i915_drv.h"
+
+static bool __guc_rc_supported(struct intel_guc *guc)
+{
+	/* GuC RC is unavailable for pre-Gen12 */
+	return guc->submission_supported &&
+		GRAPHICS_VER(guc_to_gt(guc)->i915) >= 12;
+}
+
+static bool __guc_rc_selected(struct intel_guc *guc)
+{
+	if (!intel_guc_rc_is_supported(guc))
+		return false;
+
+	return guc->submission_selected;
+}
+
+void intel_guc_rc_init_early(struct intel_guc *guc)
+{
+	guc->rc_supported = __guc_rc_supported(guc);
+	guc->rc_selected = __guc_rc_selected(guc);
+}
+
+static int guc_action_control_gucrc(struct intel_guc *guc, bool enable)
+{
+	struct drm_device *drm = &guc_to_gt(guc)->i915->drm;
+	u32 rc_mode = enable ? INTEL_GUCRC_FIRMWARE_CONTROL :
+				INTEL_GUCRC_HOST_CONTROL;
+	u32 action[] = {
+		INTEL_GUC_ACTION_SETUP_PC_GUCRC,
+		rc_mode
+	};
+	int ret;
+
+	ret = intel_guc_send(guc, action, ARRAY_SIZE(action));
+	if (ret)
+		drm_err(drm, "Failed to set GUCRC mode(%d), err=%d\n",
+			rc_mode, ret);
+
+	return ret;
+}
+
+static int __guc_rc_control(struct intel_guc *guc, bool enable)
+{
+	struct intel_gt *gt = guc_to_gt(guc);
+	int ret;
+
+	if (!intel_uc_uses_guc_rc(&gt->uc))
+		return -ENOTSUPP;
+
+	if (!intel_guc_is_ready(guc))
+		return -EINVAL;
+
+	ret = guc_action_control_gucrc(guc, enable);
+	if (unlikely(ret))
+		return ret;
+
+	drm_info(&gt->i915->drm, "GuC RC %s\n",
+	         enableddisabled(enable));
+
+	return 0;
+}
+
+int intel_guc_rc_enable(struct intel_guc *guc)
+{
+	return __guc_rc_control(guc, true);
+}
+
+int intel_guc_rc_disable(struct intel_guc *guc)
+{
+	return __guc_rc_control(guc, false);
+}
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h
new file mode 100644
index 000000000000..169e60726e5b
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#ifndef _INTEL_GUC_RC_H_
+#define _INTEL_GUC_RC_H_
+
+#include <linux/types.h>
+#include "intel_guc_submission.h"
+
+void intel_guc_rc_init_early(struct intel_guc *guc);
+
+static inline bool intel_guc_rc_is_supported(struct intel_guc *guc)
+{
+	return guc->rc_supported;
+}
+
+static inline bool intel_guc_rc_is_wanted(struct intel_guc *guc)
+{
+	return guc->submission_selected && intel_guc_rc_is_supported(guc);
+}
+
+static inline bool intel_guc_rc_is_used(struct intel_guc *guc)
+{
+	return intel_guc_submission_is_used(guc) && intel_guc_rc_is_wanted(guc);
+}
+
+int intel_guc_rc_enable(struct intel_guc *guc);
+int intel_guc_rc_disable(struct intel_guc *guc);
+
+#endif
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
index 38e465fd8a0c..29d8ad6d9087 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
@@ -7,6 +7,7 @@
 #define _INTEL_UC_H_
 
 #include "intel_guc.h"
+#include "intel_guc_rc.h"
 #include "intel_guc_submission.h"
 #include "intel_huc.h"
 #include "i915_params.h"
@@ -84,6 +85,7 @@ uc_state_checkers(guc, guc);
 uc_state_checkers(huc, huc);
 uc_state_checkers(guc, guc_submission);
 uc_state_checkers(guc, guc_slpc);
+uc_state_checkers(guc, guc_rc);
 
 #undef uc_state_checkers
 #undef __uc_state_checker
-- 
2.25.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Enable GuC based power management features
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (15 preceding siblings ...)
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 16/16] drm/i915/guc/rc: Setup and enable GUCRC feature Vinay Belgaumkar
@ 2021-07-10  1:40 ` Patchwork
  2021-07-10  1:41 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
  2021-07-10  2:09 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  18 siblings, 0 replies; 53+ messages in thread
From: Patchwork @ 2021-07-10  1:40 UTC (permalink / raw)
  To: Vinay Belgaumkar; +Cc: intel-gfx

== Series Details ==

Series: Enable GuC based power management features
URL   : https://patchwork.freedesktop.org/series/92391/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
d9063ce26607 drm/i915/guc: Squashed patch - DO NOT REVIEW
-:21: WARNING:BAD_SIGN_OFF: Duplicate signature
#21: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:32: WARNING:BAD_SIGN_OFF: Duplicate signature
#32: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:33: WARNING:BAD_SIGN_OFF: Duplicate signature
#33: 
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

-:63: WARNING:BAD_SIGN_OFF: Duplicate signature
#63: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:82: WARNING:BAD_SIGN_OFF: Duplicate signature
#82: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:84: WARNING:BAD_SIGN_OFF: Duplicate signature
#84: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:85: WARNING:BAD_SIGN_OFF: Duplicate signature
#85: 
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

-:113: WARNING:BAD_SIGN_OFF: Duplicate signature
#113: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:114: WARNING:BAD_SIGN_OFF: Duplicate signature
#114: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:115: WARNING:BAD_SIGN_OFF: Duplicate signature
#115: 
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

-:122: WARNING:BAD_SIGN_OFF: Duplicate signature
#122: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:123: WARNING:BAD_SIGN_OFF: Duplicate signature
#123: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:124: WARNING:BAD_SIGN_OFF: Duplicate signature
#124: 
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

-:131: WARNING:BAD_SIGN_OFF: Duplicate signature
#131: 
Cc: John Harrison <john.c.harrison@intel.com>

-:132: WARNING:BAD_SIGN_OFF: Duplicate signature
#132: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:133: WARNING:BAD_SIGN_OFF: Duplicate signature
#133: 
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

-:144: WARNING:BAD_SIGN_OFF: Duplicate signature
#144: 
Cc: John Harrison <john.c.harrison@intel.com>

-:145: WARNING:BAD_SIGN_OFF: Duplicate signature
#145: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:146: WARNING:BAD_SIGN_OFF: Duplicate signature
#146: 
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

-:161: WARNING:BAD_SIGN_OFF: Duplicate signature
#161: 
Cc: John Harrison <john.c.harrison@intel.com>

-:162: WARNING:BAD_SIGN_OFF: Duplicate signature
#162: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:163: WARNING:BAD_SIGN_OFF: Duplicate signature
#163: 
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

-:182: WARNING:BAD_SIGN_OFF: Duplicate signature
#182: 
Cc: John Harrison <john.c.harrison@intel.com>

-:183: WARNING:BAD_SIGN_OFF: Duplicate signature
#183: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:190: WARNING:BAD_SIGN_OFF: Duplicate signature
#190: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:191: WARNING:BAD_SIGN_OFF: Duplicate signature
#191: 
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>

-:204: WARNING:BAD_SIGN_OFF: Duplicate signature
#204: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:205: WARNING:BAD_SIGN_OFF: Duplicate signature
#205: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:216: WARNING:BAD_SIGN_OFF: Duplicate signature
#216: 
Cc: John Harrison <john.c.harrison@intel.com>

-:217: WARNING:BAD_SIGN_OFF: Duplicate signature
#217: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:231: WARNING:BAD_SIGN_OFF: Duplicate signature
#231: 
Cc: John Harrison <john.c.harrison@intel.com>

-:232: WARNING:BAD_SIGN_OFF: Duplicate signature
#232: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:242: WARNING:BAD_SIGN_OFF: Duplicate signature
#242: 
Cc: John Harrison <john.c.harrison@intel.com>

-:243: WARNING:BAD_SIGN_OFF: Duplicate signature
#243: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:244: WARNING:BAD_SIGN_OFF: Duplicate signature
#244: 
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

-:251: WARNING:BAD_SIGN_OFF: Duplicate signature
#251: 
Cc: John Harrison <john.c.harrison@intel.com>

-:252: WARNING:BAD_SIGN_OFF: Duplicate signature
#252: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:259: WARNING:BAD_SIGN_OFF: Duplicate signature
#259: 
Cc: John Harrison <john.c.harrison@intel.com>

-:260: WARNING:BAD_SIGN_OFF: Duplicate signature
#260: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:271: WARNING:BAD_SIGN_OFF: Duplicate signature
#271: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:272: WARNING:BAD_SIGN_OFF: Duplicate signature
#272: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:284: WARNING:BAD_SIGN_OFF: Duplicate signature
#284: 
Cc: John Harrison <john.c.harrison@intel.com>

-:285: WARNING:BAD_SIGN_OFF: Duplicate signature
#285: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:300: WARNING:BAD_SIGN_OFF: Duplicate signature
#300: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:301: WARNING:BAD_SIGN_OFF: Duplicate signature
#301: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:312: WARNING:BAD_SIGN_OFF: Duplicate signature
#312: 
Cc: John Harrison <john.c.harrison@intel.com>

-:313: WARNING:BAD_SIGN_OFF: Duplicate signature
#313: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:319: WARNING:BAD_SIGN_OFF: Duplicate signature
#319: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:320: WARNING:BAD_SIGN_OFF: Duplicate signature
#320: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:328: WARNING:BAD_SIGN_OFF: Duplicate signature
#328: 
Cc: John Harrison <john.c.harrison@intel.com>

-:329: WARNING:BAD_SIGN_OFF: Duplicate signature
#329: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:337: WARNING:BAD_SIGN_OFF: Duplicate signature
#337: 
Cc: John Harrison <john.c.harrison@intel.com>

-:338: WARNING:BAD_SIGN_OFF: Duplicate signature
#338: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:350: WARNING:BAD_SIGN_OFF: Duplicate signature
#350: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:369: WARNING:BAD_SIGN_OFF: Duplicate signature
#369: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:370: WARNING:BAD_SIGN_OFF: Duplicate signature
#370: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:384: WARNING:BAD_SIGN_OFF: Duplicate signature
#384: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:392: WARNING:BAD_SIGN_OFF: Duplicate signature
#392: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:402: WARNING:BAD_SIGN_OFF: Duplicate signature
#402: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:403: WARNING:BAD_SIGN_OFF: Duplicate signature
#403: 
CC: John Harrison <John.C.Harrison@Intel.com>

-:410: WARNING:BAD_SIGN_OFF: Duplicate signature
#410: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:428: WARNING:BAD_SIGN_OFF: Duplicate signature
#428: 
Cc: John Harrison <john.c.harrison@intel.com>

-:429: WARNING:BAD_SIGN_OFF: Duplicate signature
#429: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:437: WARNING:BAD_SIGN_OFF: Duplicate signature
#437: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:445: WARNING:BAD_SIGN_OFF: Duplicate signature
#445: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:446: ERROR:BAD_SIGN_OFF: Unrecognized email address: 'Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com'
#446: 
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com

-:463: WARNING:BAD_SIGN_OFF: Duplicate signature
#463: 
Cc: John Harrison <john.c.harrison@intel.com>

-:464: WARNING:BAD_SIGN_OFF: Duplicate signature
#464: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:475: WARNING:BAD_SIGN_OFF: Duplicate signature
#475: 
Cc: John Harrison <John.C.Harrison@Intel.com>

-:476: WARNING:BAD_SIGN_OFF: Duplicate signature
#476: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:484: WARNING:BAD_SIGN_OFF: Duplicate signature
#484: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:492: WARNING:BAD_SIGN_OFF: Duplicate signature
#492: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:493: WARNING:BAD_SIGN_OFF: Duplicate signature
#493: 
CC: John Harrison <John.C.Harrison@Intel.com>

-:507: WARNING:BAD_SIGN_OFF: Duplicate signature
#507: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:508: WARNING:BAD_SIGN_OFF: Duplicate signature
#508: 
Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com>

-:509: WARNING:BAD_SIGN_OFF: Duplicate signature
#509: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:510: WARNING:BAD_SIGN_OFF: Duplicate signature
#510: 
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

-:521: WARNING:BAD_SIGN_OFF: Duplicate signature
#521: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:522: WARNING:BAD_SIGN_OFF: Duplicate signature
#522: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:530: WARNING:BAD_SIGN_OFF: Duplicate signature
#530: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:531: WARNING:BAD_SIGN_OFF: Duplicate signature
#531: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:532: WARNING:BAD_SIGN_OFF: Duplicate signature
#532: 
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

-:546: WARNING:BAD_SIGN_OFF: Duplicate signature
#546: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:563: WARNING:BAD_SIGN_OFF: Duplicate signature
#563: 
Signed-off-by: John Harrison <john.c.harrison@intel.com>

-:564: WARNING:BAD_SIGN_OFF: Duplicate signature
#564: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:571: WARNING:BAD_SIGN_OFF: Duplicate signature
#571: 
Signed-off-by: John Harrison <john.c.harrison@intel.com>

-:572: WARNING:BAD_SIGN_OFF: Duplicate signature
#572: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:590: WARNING:BAD_SIGN_OFF: Duplicate signature
#590: 
Signed-off-by: John Harrison <john.c.harrison@intel.com>

-:591: WARNING:BAD_SIGN_OFF: Duplicate signature
#591: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:598: WARNING:BAD_SIGN_OFF: Duplicate signature
#598: 
Signed-off-by: John Harrison <john.c.harrison@intel.com>

-:599: WARNING:BAD_SIGN_OFF: Duplicate signature
#599: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:600: WARNING:BAD_SIGN_OFF: Duplicate signature
#600: 
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

-:608: WARNING:BAD_SIGN_OFF: Duplicate signature
#608: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:609: WARNING:BAD_SIGN_OFF: Duplicate signature
#609: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:616: WARNING:BAD_SIGN_OFF: Duplicate signature
#616: 
Cc: John Harrison <John.C.Harrison@Intel.com>

-:617: WARNING:BAD_SIGN_OFF: Duplicate signature
#617: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:624: WARNING:BAD_SIGN_OFF: Duplicate signature
#624: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:625: WARNING:BAD_SIGN_OFF: Duplicate signature
#625: 
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

-:633: WARNING:BAD_SIGN_OFF: Duplicate signature
#633: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:634: WARNING:BAD_SIGN_OFF: Duplicate signature
#634: 
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

-:643: WARNING:BAD_SIGN_OFF: Duplicate signature
#643: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:644: WARNING:BAD_SIGN_OFF: Duplicate signature
#644: 
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

-:645: WARNING:BAD_SIGN_OFF: Duplicate signature
#645: 
Cc: Matthew Brost <matthew.brost@intel.com>

-:653: WARNING:BAD_SIGN_OFF: Duplicate signature
#653: 
Signed-off-by: Rahul Kumar Singh <rahul.kumar.singh@intel.com>

-:654: WARNING:BAD_SIGN_OFF: Duplicate signature
#654: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:655: WARNING:BAD_SIGN_OFF: Duplicate signature
#655: 
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

-:656: WARNING:BAD_SIGN_OFF: Duplicate signature
#656: 
Cc: Matthew Brost <matthew.brost@intel.com>

-:663: WARNING:BAD_SIGN_OFF: Duplicate signature
#663: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:688: WARNING:BAD_SIGN_OFF: Duplicate signature
#688: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:689: WARNING:BAD_SIGN_OFF: Duplicate signature
#689: 
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

-:690: WARNING:BAD_SIGN_OFF: Duplicate signature
#690: 
Cc: Matthew Brost <matthew.brost@intel.com>

-:710: WARNING:BAD_SIGN_OFF: Duplicate signature
#710: 
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>

-:711: WARNING:BAD_SIGN_OFF: Duplicate signature
#711: 
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

-:717: WARNING:BAD_SIGN_OFF: Duplicate signature
#717: 
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

-:718: WARNING:BAD_SIGN_OFF: Duplicate signature
#718: 
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>

-:719: WARNING:BAD_SIGN_OFF: Duplicate signature
#719: 
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

-:1381: WARNING:UNNECESSARY_ELSE: else is not generally useful after a break or return
#1381: FILE: drivers/gpu/drm/i915/gt/intel_engine.h:291:
+		return intel_guc_virtual_engine_has_heartbeat(engine);
+	else

-:1558: CHECK:BRACES: braces {} should be used on all arms of this statement
#1558: FILE: drivers/gpu/drm/i915/gt/intel_engine_cs.c:1702:
+	if (guc) {
[...]
+	} else
[...]

-:1562: CHECK:BRACES: Unbalanced braces around else statement
#1562: FILE: drivers/gpu/drm/i915/gt/intel_engine_cs.c:1706:
+	} else

-:1761: CHECK:BRACES: Blank lines aren't necessary before a close brace '}'
#1761: FILE: drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c:231:
+
+}

-:2671: CHECK:LINE_SPACING: Please don't use multiple blank lines
#2671: FILE: drivers/gpu/drm/i915/gt/mock_engine.c:266:
+
+

-:2874: WARNING:LONG_LINE: line length of 103 exceeds 100 columns
#2874: FILE: drivers/gpu/drm/i915/gt/selftest_hangcheck.c:393:
+					pr_err("[%s] Create request failed: %d!\n", engine->name, err);

-:2986: WARNING:LONG_LINE: line length of 108 exceeds 100 columns
#2986: FILE: drivers/gpu/drm/i915/gt/selftest_hangcheck.c:754:
+					pr_err("[%s] Create hang request failed: %d!\n", engine->name, err);

-:3020: WARNING:LONG_LINE: line length of 123 exceeds 100 columns
#3020: FILE: drivers/gpu/drm/i915/gt/selftest_hangcheck.c:789:
+					       engine->name, rq->fence.context, rq->fence.seqno, rq->context->guc_id, err);

-:3167: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#3167: FILE: drivers/gpu/drm/i915/gt/selftest_hangcheck.c:1049:
+			err = intel_selftest_modify_policy(engine, &saved,
+							  SELFTEST_SCHEDULER_MODIFY_FAST_RESET);

-:3178: WARNING:LONG_LINE: line length of 108 exceeds 100 columns
#3178: FILE: drivers/gpu/drm/i915/gt/selftest_hangcheck.c:1059:
+					pr_err("[%s] Create hang request failed: %d!\n", engine->name, err);

-:3213: WARNING:LONG_LINE: line length of 123 exceeds 100 columns
#3213: FILE: drivers/gpu/drm/i915/gt/selftest_hangcheck.c:1096:
+					       engine->name, rq->fence.context, rq->fence.seqno, rq->context->guc_id, err);

-:3434: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#3434: FILE: drivers/gpu/drm/i915/gt/selftest_hangcheck.c:1660:
+			err = intel_selftest_modify_policy(engine, &saved,
+							  SELFTEST_SCHEDULER_MODIFY_NO_HANGCHECK);

-:3513: WARNING:LINE_SPACING: Missing a blank line after declarations
#3513: FILE: drivers/gpu/drm/i915/gt/selftest_hangcheck.c:1775:
+			int err2 = intel_selftest_restore_policy(engine, &saved);
+			if (err2)

-:3514: WARNING:LONG_LINE: line length of 123 exceeds 100 columns
#3514: FILE: drivers/gpu/drm/i915/gt/selftest_hangcheck.c:1776:
+				pr_err("%s:%d> [%s] Restore policy failed: %d!\n", __func__, __LINE__, engine->name, err2);

-:3769: WARNING:LONG_LINE: line length of 105 exceeds 100 columns
#3769: FILE: drivers/gpu/drm/i915/gt/selftest_workarounds.c:814:
+								   SELFTEST_SCHEDULER_MODIFY_FAST_RESET);

-:3770: ERROR:SPACING: space required before the open parenthesis '('
#3770: FILE: drivers/gpu/drm/i915/gt/selftest_workarounds.c:815:
+				if(err)

-:3780: CHECK:BRACES: Unbalanced braces around else statement
#3780: FILE: drivers/gpu/drm/i915/gt/selftest_workarounds.c:825:
+			} else

-:3803: CHECK:LINE_SPACING: Please don't use multiple blank lines
#3803: FILE: drivers/gpu/drm/i915/gt/selftest_workarounds.c:1277:
+
+

-:4142: ERROR:POINTER_LOCATION: "foo* bar" should be "foo *bar"
#4142: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc.h:119:
+static inline int intel_guc_send_busy_loop(struct intel_guc* guc,

-:4150: ERROR:IN_ATOMIC: do not use in_atomic in drivers
#4150: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc.h:127:
+	bool not_atomic = !in_atomic() && !irqs_disabled();

-:4372: WARNING:ENOTSUPP: ENOTSUPP is not a SUSV4 error code, prefer EOPNOTSUPP
#4372: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c:147:
+		return -ENOTSUPP;

-:4500: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#4500: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c:285:
+	temp_set.registers = kmalloc_array(temp_set.size,
+					  sizeof(*temp_set.registers),

-:4529: CHECK:SPACING: No space is necessary after a cast
#4529: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c:314:
+	temp_set.registers = (struct guc_mmio_reg *) (((u8 *) blob) + offset);

-:4678: CHECK:SPACING: No space is necessary after a cast
#4678: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c:453:
+	ptr = ((u8 *) blob) + offset;

-:4704: WARNING:LONG_LINE: line length of 106 exceeds 100 columns
#4704: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c:470:
+			drm_err(&gt->i915->drm, "No engine state recorded for class %d!\n", engine_class);

-:5165: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#5165: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c:620:
+#define G2H_LEN_DW(f) \
+	FIELD_GET(INTEL_GUC_CT_SEND_G2H_DW_MASK, f) ? \
+	FIELD_GET(INTEL_GUC_CT_SEND_G2H_DW_MASK, f) + GUC_CTB_HXG_MSG_MIN_LEN : 0

-:5165: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'f' - possible side-effects?
#5165: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c:620:
+#define G2H_LEN_DW(f) \
+	FIELD_GET(INTEL_GUC_CT_SEND_G2H_DW_MASK, f) ? \
+	FIELD_GET(INTEL_GUC_CT_SEND_G2H_DW_MASK, f) + GUC_CTB_HXG_MSG_MIN_LEN : 0

-:5511: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'len' - possible side-effects?
#5511: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h:109:
+#define MAKE_SEND_FLAGS(len) \
+	({GEM_BUG_ON(!FIELD_FIT(INTEL_GUC_CT_SEND_G2H_DW_MASK, len)); \
+	(FIELD_PREP(INTEL_GUC_CT_SEND_G2H_DW_MASK, len) | INTEL_GUC_CT_SEND_NB);})

-:5513: ERROR:SPACING: space required after that ';' (ctx:VxV)
#5513: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h:111:
+	(FIELD_PREP(INTEL_GUC_CT_SEND_G2H_DW_MASK, len) | INTEL_GUC_CT_SEND_NB);})
 	                                                                       ^

-:5687: WARNING:BLOCK_COMMENT_STYLE: Block comments use a trailing */ on a separate line
#5687: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h:214:
+	 * reset. (in micro seconds). */

-:6002: CHECK:COMPARISON_TO_NULL: Comparison to NULL could be written "guc->lrc_desc_pool_vaddr"
#6002: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:295:
+	return guc->lrc_desc_pool_vaddr != NULL;

-:6048: ERROR:POINTER_LOCATION: "foo* bar" should be "foo *bar"
#6048: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:335:
+static int guc_submission_busy_loop(struct intel_guc* guc,

-:6273: CHECK:BRACES: braces {} should be used on all arms of this statement
#6273: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:522:
+		if (unlikely(ret == -EPIPE))
[...]
+		else if (ret == -EBUSY) {
[...]

-:6368: ERROR:SPACING: spaces required around that '||' (ctx:VxW)
#6368: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:603:
+			if (pending_enable|| deregister)
 			                  ^

-:6424: WARNING:MEMORY_BARRIER: memory barrier without comment
#6424: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:652:
+	wmb();

-:6478: ERROR:CODE_INDENT: code indent should use tabs where possible
#6478: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:695:
+ ^I */$

-:6478: WARNING:SPACE_BEFORE_TAB: please, no space before tabs
#6478: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:695:
+ ^I */$

-:7079: WARNING:REPEATED_WORD: Possible repeated word: 'from'
#7079: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:1287:
+	 * could be regisgered either the guc_id has been stole from from

-:7113: CHECK:BRACES: braces {} should be used on all arms of this statement
#7113: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:1321:
+		if (unlikely(ret == -EBUSY)) {
[...]
+		} else if (unlikely(ret == -ENODEV))
[...]

-:7336: ERROR:BRACKET_SPACE: space prohibited before open square bracket '['
#7336: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:1544:
+	u32 action [] = {

-:7359: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#7359: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:1567:
+	if (submission_disabled(guc) || (!context_enabled(ce) &&
+	    !context_pending_disable(ce))) {

-:7708: WARNING:ONE_SEMICOLON: Statements terminations use 1 semicolon
#7708: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:1893:
+		return ret;;

-:7864: ERROR:CODE_INDENT: code indent should use tabs where possible
#7864: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:2032:
+        * In GuC submission mode we do not know which physical engine a request$

-:7865: ERROR:CODE_INDENT: code indent should use tabs where possible
#7865: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:2033:
+        * will be scheduled on, this creates a problem because the breadcrumb$

-:7866: ERROR:CODE_INDENT: code indent should use tabs where possible
#7866: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:2034:
+        * interrupt is per physical engine. To work around this we attach$

-:7867: ERROR:CODE_INDENT: code indent should use tabs where possible
#7867: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:2035:
+        * requests and direct all breadcrumb interrupts to the first instance$

-:7868: ERROR:CODE_INDENT: code indent should use tabs where possible
#7868: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:2036:
+        * of an engine per class. In addition all breadcrumb interrupts are$

-:7870: ERROR:CODE_INDENT: code indent should use tabs where possible
#7870: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:2038:
+        */$

-:8532: CHECK:LINE_SPACING: Please don't use multiple blank lines
#8532: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:2796:
+
+

-:8837: WARNING:LONG_LINE: line length of 107 exceeds 100 columns
#8837: FILE: drivers/gpu/drm/i915/i915_debugfs_params.c:14:
+#define MATCH_DEBUGFS_NODE_NAME(_file, _name)	(strcmp((_file)->f_path.dentry->d_name.name, (_name)) == 0)

-:8839: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'i915' - possible side-effects?
#8839: FILE: drivers/gpu/drm/i915/i915_debugfs_params.c:16:
+#define GET_I915(i915, name, ptr)	\
+	do {	\
+		struct i915_params *params;	\
+		params = container_of(((void *) (ptr)), typeof(*params), name);	\
+		(i915) = container_of(params, typeof(*(i915)), params);	\
+	} while(0)

-:8842: CHECK:SPACING: No space is necessary after a cast
#8842: FILE: drivers/gpu/drm/i915/i915_debugfs_params.c:19:
+		params = container_of(((void *) (ptr)), typeof(*params), name);	\

-:8844: ERROR:SPACING: space required before the open parenthesis '('
#8844: FILE: drivers/gpu/drm/i915/i915_debugfs_params.c:21:
+	} while(0)

-:9213: WARNING:LONG_LINE: line length of 101 exceeds 100 columns
#9213: FILE: drivers/gpu/drm/i915/i915_request.c:1596:
+		if ((!uses_guc && is_power_of_2(READ_ONCE(prev->engine)->mask | rq->engine->mask)) ||

-:9292: ERROR:OPEN_BRACE: open brace '{' following enum go on the same line
#9292: FILE: drivers/gpu/drm/i915/i915_request.h:653:
+enum i915_request_state
+{

-:9455: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#9455: FILE: drivers/gpu/drm/i915/i915_trace.h:909:
+DECLARE_EVENT_CLASS(intel_context,
+	    TP_PROTO(struct intel_context *ce),

-:9458: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#9458: FILE: drivers/gpu/drm/i915/i915_trace.h:912:
+	    TP_STRUCT__entry(

-:9465: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#9465: FILE: drivers/gpu/drm/i915/i915_trace.h:919:
+	    TP_fast_assign(

-:9694: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#9694: 
new file mode 100644

-:9699: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#9699: FILE: drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c:1:
+/*

-:9700: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#9700: FILE: drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c:2:
+ * SPDX-License-Identifier: MIT

-:9808: ERROR:OPEN_BRACE: open brace '{' following struct go on the same line
#9808: FILE: drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h:15:
+struct intel_selftest_saved_policy
+{

-:9816: ERROR:OPEN_BRACE: open brace '{' following enum go on the same line
#9816: FILE: drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h:23:
+enum selftest_scheduler_modify
+{

-:9826: ERROR:SPACING: space prohibited after that open parenthesis '('
#9826: FILE: drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h:33:
+int intel_selftest_wait_for_rq( struct i915_request *rq);

total: 21 errors, 136 warnings, 23 checks, 8531 lines checked
25589f2d8bfd drm/i915/guc/slpc: Initial definitions for slpc
2790990fc2e6 drm/i915/guc/slpc: Gate Host RPS when slpc is enabled
980d855c5c5a drm/i915/guc/slpc: Lay out slpc init/enable/disable/fini
-:45: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#45: 
new file mode 100644

-:50: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#50: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:1:
+/*

-:51: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#51: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:2:
+ * SPDX-License-Identifier: MIT

-:90: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#90: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h:1:
+/*

-:91: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#91: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h:2:
+ * SPDX-License-Identifier: MIT

total: 0 errors, 5 warnings, 0 checks, 71 lines checked
b076b1d0bbf4 drm/i915/guc/slpc: Adding slpc communication interfaces
-:60: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#60: 
new file mode 100644

-:65: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#65: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h:1:
+/*

-:66: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#66: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h:2:
+ * SPDX-License-Identifier: MIT

-:106: WARNING:LONG_LINE: line length of 104 exceeds 100 columns
#106: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h:42:
+		+ (SLPC_CACHELINE_SIZE_BYTES-1)) / SLPC_CACHELINE_SIZE_BYTES)*SLPC_CACHELINE_SIZE_BYTES)

-:106: CHECK:SPACING: spaces preferred around that '-' (ctx:VxV)
#106: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h:42:
+		+ (SLPC_CACHELINE_SIZE_BYTES-1)) / SLPC_CACHELINE_SIZE_BYTES)*SLPC_CACHELINE_SIZE_BYTES)
 		                            ^

-:106: CHECK:SPACING: spaces preferred around that '*' (ctx:VxV)
#106: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h:42:
+		+ (SLPC_CACHELINE_SIZE_BYTES-1)) / SLPC_CACHELINE_SIZE_BYTES)*SLPC_CACHELINE_SIZE_BYTES)
 		                                                             ^

-:221: ERROR:CODE_INDENT: code indent should use tabs where possible
#221: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h:157:
+        union$

-:221: WARNING:LEADING_SPACE: please, no spaces at the start of a line
#221: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h:157:
+        union$

-:222: ERROR:OPEN_BRACE: open brace '{' following union go on the same line
#222: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h:158:
+        union
+	{

total: 2 errors, 5 warnings, 2 checks, 286 lines checked
f3e07ff23e21 drm/i915/guc/slpc: Allocate, initialize and release slpc
675158b5dd9c drm/i915/guc/slpc: Enable slpc and add related H2G events
-:35: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#35: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:32:
+static void slpc_mem_set_param(struct slpc_shared_data *data,
+				u32 id, u32 value)

-:57: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#57: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:54:
+static void slpc_mem_task_control(struct slpc_shared_data *data,
+				 u64 val, u32 enable_id, u32 disable_id)

-:91: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#91: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:97:
+static int slpc_send(struct intel_guc_slpc *slpc,
+			struct slpc_event_input *input,

-:228: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#228: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:249:
+	slpc_mem_task_control(data, SLPC_PARAM_TASK_ENABLED,
+				SLPC_PARAM_TASK_ENABLE_GTPERF,

-:232: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#232: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:253:
+	slpc_mem_task_control(data, SLPC_PARAM_TASK_DISABLED,
+				SLPC_PARAM_TASK_ENABLE_BALANCER,

-:236: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#236: FILE: drivers/gpu/drm/i915/


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Enable GuC based power management features
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (16 preceding siblings ...)
  2021-07-10  1:40 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Enable GuC based power management features Patchwork
@ 2021-07-10  1:41 ` Patchwork
  2021-07-10  2:09 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  18 siblings, 0 replies; 53+ messages in thread
From: Patchwork @ 2021-07-10  1:41 UTC (permalink / raw)
  To: Vinay Belgaumkar; +Cc: intel-gfx

== Series Details ==

Series: Enable GuC based power management features
URL   : https://patchwork.freedesktop.org/series/92391/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
+drivers/gpu/drm/i915/gt/intel_rps.c:1969:5: warning: symbol 'intel_rps_read_punit_req' was not declared. Should it be static?
+drivers/gpu/drm/i915/gt/intel_rps.c:1978:5: warning: symbol 'intel_rps_get_req' was not declared. Should it be static?
+drivers/gpu/drm/i915/gt/intel_rps.c:1985:5: warning: symbol 'intel_rps_read_punit_req_frequency' was not declared. Should it be static?
+drivers/gpu/drm/i915/selftests/i915_syncmap.c:80:54: warning: dubious: x | !y
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+./include/linux/stddef.h:17:9: this was the original definition
+/usr/lib/gcc/x86_64-linux-gnu/8/include/stddef.h:417:9: warning: preprocessor token offsetof redefined
  (the line above is repeated 22 times in the original report)


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [Intel-gfx] ✗ Fi.CI.BAT: failure for Enable GuC based power management features
  2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
                   ` (17 preceding siblings ...)
  2021-07-10  1:41 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2021-07-10  2:09 ` Patchwork
  18 siblings, 0 replies; 53+ messages in thread
From: Patchwork @ 2021-07-10  2:09 UTC (permalink / raw)
  To: Vinay Belgaumkar; +Cc: intel-gfx


[-- Attachment #1.1: Type: text/plain, Size: 11469 bytes --]

== Series Details ==

Series: Enable GuC based power management features
URL   : https://patchwork.freedesktop.org/series/92391/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_10329 -> Patchwork_20567
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_20567 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_20567, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_20567:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_pm_rps@basic-api:
    - fi-kbl-guc:         [PASS][1] -> [FAIL][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-kbl-guc/igt@i915_pm_rps@basic-api.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-kbl-guc/igt@i915_pm_rps@basic-api.html
    - fi-cfl-8109u:       [PASS][3] -> [FAIL][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-cfl-8109u/igt@i915_pm_rps@basic-api.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-cfl-8109u/igt@i915_pm_rps@basic-api.html
    - fi-bsw-nick:        [PASS][5] -> [FAIL][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-bsw-nick/igt@i915_pm_rps@basic-api.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-bsw-nick/igt@i915_pm_rps@basic-api.html
    - fi-kbl-7500u:       [PASS][7] -> [FAIL][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-kbl-7500u/igt@i915_pm_rps@basic-api.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-kbl-7500u/igt@i915_pm_rps@basic-api.html
    - fi-kbl-8809g:       [PASS][9] -> [FAIL][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-kbl-8809g/igt@i915_pm_rps@basic-api.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-kbl-8809g/igt@i915_pm_rps@basic-api.html
    - fi-kbl-r:           [PASS][11] -> [FAIL][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-kbl-r/igt@i915_pm_rps@basic-api.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-kbl-r/igt@i915_pm_rps@basic-api.html
    - fi-bsw-kefka:       [PASS][13] -> [FAIL][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-bsw-kefka/igt@i915_pm_rps@basic-api.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-bsw-kefka/igt@i915_pm_rps@basic-api.html
    - fi-glk-dsi:         [PASS][15] -> [FAIL][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-glk-dsi/igt@i915_pm_rps@basic-api.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-glk-dsi/igt@i915_pm_rps@basic-api.html
    - fi-kbl-soraka:      [PASS][17] -> [FAIL][18]
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-kbl-soraka/igt@i915_pm_rps@basic-api.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-kbl-soraka/igt@i915_pm_rps@basic-api.html
    - fi-kbl-x1275:       [PASS][19] -> [FAIL][20]
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-kbl-x1275/igt@i915_pm_rps@basic-api.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-kbl-x1275/igt@i915_pm_rps@basic-api.html
    - fi-cml-s:           [PASS][21] -> [FAIL][22]
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-cml-s/igt@i915_pm_rps@basic-api.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-cml-s/igt@i915_pm_rps@basic-api.html
    - fi-tgl-y:           [PASS][23] -> [FAIL][24]
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-tgl-y/igt@i915_pm_rps@basic-api.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-tgl-y/igt@i915_pm_rps@basic-api.html
    - fi-cfl-guc:         [PASS][25] -> [FAIL][26]
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-cfl-guc/igt@i915_pm_rps@basic-api.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-cfl-guc/igt@i915_pm_rps@basic-api.html
    - fi-ivb-3770:        [PASS][27] -> [FAIL][28]
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-ivb-3770/igt@i915_pm_rps@basic-api.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-ivb-3770/igt@i915_pm_rps@basic-api.html
    - fi-cml-u2:          [PASS][29] -> [FAIL][30]
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-cml-u2/igt@i915_pm_rps@basic-api.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-cml-u2/igt@i915_pm_rps@basic-api.html
    - fi-skl-6700k2:      [PASS][31] -> [FAIL][32]
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-skl-6700k2/igt@i915_pm_rps@basic-api.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-skl-6700k2/igt@i915_pm_rps@basic-api.html
    - fi-bxt-dsi:         [PASS][33] -> [FAIL][34]
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-bxt-dsi/igt@i915_pm_rps@basic-api.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-bxt-dsi/igt@i915_pm_rps@basic-api.html
    - fi-cfl-8700k:       [PASS][35] -> [FAIL][36]
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-cfl-8700k/igt@i915_pm_rps@basic-api.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-cfl-8700k/igt@i915_pm_rps@basic-api.html
    - fi-hsw-4770:        [PASS][37] -> [FAIL][38]
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-hsw-4770/igt@i915_pm_rps@basic-api.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-hsw-4770/igt@i915_pm_rps@basic-api.html
    - fi-snb-2520m:       [PASS][39] -> [FAIL][40]
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-snb-2520m/igt@i915_pm_rps@basic-api.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-snb-2520m/igt@i915_pm_rps@basic-api.html
    - fi-kbl-7567u:       [PASS][41] -> [FAIL][42]
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-kbl-7567u/igt@i915_pm_rps@basic-api.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-kbl-7567u/igt@i915_pm_rps@basic-api.html
    - fi-bdw-5557u:       [PASS][43] -> [FAIL][44]
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-bdw-5557u/igt@i915_pm_rps@basic-api.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-bdw-5557u/igt@i915_pm_rps@basic-api.html
    - fi-skl-6600u:       [PASS][45] -> [FAIL][46]
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-skl-6600u/igt@i915_pm_rps@basic-api.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-skl-6600u/igt@i915_pm_rps@basic-api.html
    - fi-skl-guc:         [PASS][47] -> [FAIL][48]
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-skl-guc/igt@i915_pm_rps@basic-api.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-skl-guc/igt@i915_pm_rps@basic-api.html
    - fi-icl-y:           [PASS][49] -> [FAIL][50]
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-icl-y/igt@i915_pm_rps@basic-api.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-icl-y/igt@i915_pm_rps@basic-api.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@i915_pm_rps@basic-api:
    - {fi-ehl-2}:         [PASS][51] -> [FAIL][52]
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-ehl-2/igt@i915_pm_rps@basic-api.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-ehl-2/igt@i915_pm_rps@basic-api.html
    - {fi-jsl-1}:         [PASS][53] -> [FAIL][54]
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-jsl-1/igt@i915_pm_rps@basic-api.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-jsl-1/igt@i915_pm_rps@basic-api.html
    - {fi-tgl-dsi}:       [PASS][55] -> [FAIL][56]
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-tgl-dsi/igt@i915_pm_rps@basic-api.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-tgl-dsi/igt@i915_pm_rps@basic-api.html
    - {fi-tgl-1115g4}:    [PASS][57] -> [FAIL][58]
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-tgl-1115g4/igt@i915_pm_rps@basic-api.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-tgl-1115g4/igt@i915_pm_rps@basic-api.html
    - {fi-hsw-gt1}:       [PASS][59] -> [FAIL][60]
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-hsw-gt1/igt@i915_pm_rps@basic-api.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-hsw-gt1/igt@i915_pm_rps@basic-api.html

  
Known issues
------------

  Here are the changes found in Patchwork_20567 that come from known issues:

### IGT changes ###

#### Possible fixes ####

  * igt@kms_chamelium@dp-crc-fast:
    - fi-kbl-7500u:       [FAIL][61] ([i915#1372]) -> [PASS][62]
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10329/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#1372]: https://gitlab.freedesktop.org/drm/intel/issues/1372
  [i915#3717]: https://gitlab.freedesktop.org/drm/intel/issues/3717


Participating hosts (41 -> 39)
------------------------------

  Missing    (2): fi-bsw-cyan fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_10329 -> Patchwork_20567

  CI-20190529: 20190529
  CI_DRM_10329: 2c76b98f510f1e4284285813024bc4cbba6a776e @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6134: cd63c83e23789eb194d38b8d272247a88122f2f6 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_20567: 8feb96ac300e385894f02a4da095bd03f3e84b7b @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

8feb96ac300e drm/i915/guc/rc: Setup and enable GUCRC feature
83e32fc3791c drm/i915/guc/slpc: slpc selftest
7ec83710bc2a drm/i915/guc/slpc: Sysfs hooks for slpc
3d77883e3ca5 drm/i915/guc/slpc: Update slpc to use platform min/max
3d4115c88dc4 drm/i915/guc/slpc: Cache platform frequency limits for slpc
de179acc62c1 drm/i915/guc/slpc: Enable ARAT timer interrupt
439492c0d769 drm/i915/guc/slpc: Add debugfs for slpc info
62ae9d2a357a drm/i915/guc/slpc: Add get max/min freq hooks
08a6776bb7fb drm/i915/guc/slpc: Add methods to set min/max frequency
675158b5dd9c drm/i915/guc/slpc: Enable slpc and add related H2G events
f3e07ff23e21 drm/i915/guc/slpc: Allocate, initialize and release slpc
b076b1d0bbf4 drm/i915/guc/slpc: Adding slpc communication interfaces
980d855c5c5a drm/i915/guc/slpc: Lay out slpc init/enable/disable/fini
2790990fc2e6 drm/i915/guc/slpc: Gate Host RPS when slpc is enabled
25589f2d8bfd drm/i915/guc/slpc: Initial definitions for slpc
d9063ce26607 drm/i915/guc: Squashed patch - DO NOT REVIEW

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20567/index.html

[-- Attachment #1.2: Type: text/html, Size: 12579 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency Vinay Belgaumkar
@ 2021-07-10  3:07   ` kernel test robot
  2021-07-10  5:17   ` kernel test robot
  2021-07-10 17:47   ` Michal Wajdeczko
  2 siblings, 0 replies; 53+ messages in thread
From: kernel test robot @ 2021-07-10  3:07 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel; +Cc: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 4028 bytes --]

Hi Vinay,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on drm-tip/drm-tip]
[cannot apply to drm-intel/for-linux-next drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next drm/drm-next v5.13 next-20210709]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Vinay-Belgaumkar/Enable-GuC-based-power-management-features/20210710-092520
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/ce93ba218ad070e0b1ae6f9823820fb4d2e14a8b
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Vinay-Belgaumkar/Enable-GuC-based-power-management-features/20210710-092520
        git checkout ce93ba218ad070e0b1ae6f9823820fb4d2e14a8b
        # save the attached .config to linux build tree
        mkdir build_dir
        make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash drivers/gpu/drm/i915/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:258: warning: expecting prototype for intel_guc_slpc_max_freq_set(). Prototype was for intel_guc_slpc_set_max_freq() instead
>> drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:292: warning: expecting prototype for intel_guc_slpc_min_freq_set(). Prototype was for intel_guc_slpc_set_min_freq() instead


vim +258 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c

   246	
   247	/**
   248	 * intel_guc_slpc_max_freq_set() - Set max frequency limit for SLPC.
   249	 * @slpc: pointer to intel_guc_slpc.
   250	 * @val: encoded frequency
   251	 *
   252	 * This function will invoke GuC SLPC action to update the max frequency
   253	 * limit for slice and unslice.
   254	 *
   255	 * Return: 0 on success, non-zero error code on failure.
   256	 */
   257	int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val)
 > 258	{
   259		int ret;
   260		struct drm_i915_private *i915 = slpc_to_i915(slpc);
   261		intel_wakeref_t wakeref;
   262	
   263		wakeref = intel_runtime_pm_get(&i915->runtime_pm);
   264	
   265		ret = slpc_set_param(slpc,
   266			       SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ,
   267			       val);
   268	
   269		if (ret) {
   270			drm_err(&i915->drm,
   271				"Set max frequency unslice returned %d", ret);
   272			ret = -EIO;
   273			goto done;
   274		}
   275	
   276	done:
   277		intel_runtime_pm_put(&i915->runtime_pm, wakeref);
   278		return ret;
   279	}
   280	
   281	/**
   282	 * intel_guc_slpc_min_freq_set() - Set min frequency limit for SLPC.
   283	 * @slpc: pointer to intel_guc_slpc.
   284	 * @val: encoded frequency
   285	 *
   286	 * This function will invoke GuC SLPC action to update the min frequency
   287	 * limit.
   288	 *
   289	 * Return: 0 on success, non-zero error code on failure.
   290	 */
   291	int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
 > 292	{
   293		int ret;
   294		struct intel_guc *guc = slpc_to_guc(slpc);
   295		struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
   296		intel_wakeref_t wakeref;
   297	
   298		wakeref = intel_runtime_pm_get(&i915->runtime_pm);
   299	
   300		ret = slpc_set_param(slpc,
   301			       SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
   302			       val);
   303		if (ret) {
   304			drm_err(&i915->drm,
   305				"Set min frequency for unslice returned %d", ret);
   306			ret = -EIO;
   307			goto done;
   308		}
   309	
   310	done:
   311		intel_runtime_pm_put(&i915->runtime_pm, wakeref);
   312		return ret;
   313	}
   314	
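
For reference, the two warnings only mean that the kernel-doc headers do not
match the functions they document; a minimal sketch of the fix, keeping the
function names from the listing above and touching only the comment lines,
would be:

	/**
	 * intel_guc_slpc_set_max_freq() - Set max frequency limit for SLPC.
	 * ...
	 */
	int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val)

and likewise intel_guc_slpc_set_min_freq() for the min-frequency helper.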

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 66092 bytes --]

[-- Attachment #3: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency Vinay Belgaumkar
  2021-07-10  3:07   ` kernel test robot
@ 2021-07-10  5:17   ` kernel test robot
  2021-07-10 17:47   ` Michal Wajdeczko
  2 siblings, 0 replies; 53+ messages in thread
From: kernel test robot @ 2021-07-10  5:17 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel; +Cc: clang-built-linux, kbuild-all

[-- Attachment #1: Type: text/plain, Size: 4415 bytes --]

Hi Vinay,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on drm-tip/drm-tip]
[cannot apply to drm-intel/for-linux-next drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next drm/drm-next v5.13 next-20210709]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Vinay-Belgaumkar/Enable-GuC-based-power-management-features/20210710-092520
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: x86_64-randconfig-a014-20210709 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 8d69635ed9ecf36fd0ca85906bfde17949671cbe)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # https://github.com/0day-ci/linux/commit/ce93ba218ad070e0b1ae6f9823820fb4d2e14a8b
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Vinay-Belgaumkar/Enable-GuC-based-power-management-features/20210710-092520
        git checkout ce93ba218ad070e0b1ae6f9823820fb4d2e14a8b
        # save the attached .config to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross O=build_dir ARCH=x86_64 SHELL=/bin/bash drivers/gpu/drm/i915/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:258: warning: expecting prototype for intel_guc_slpc_max_freq_set(). Prototype was for intel_guc_slpc_set_max_freq() instead
>> drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c:292: warning: expecting prototype for intel_guc_slpc_min_freq_set(). Prototype was for intel_guc_slpc_set_min_freq() instead


vim +258 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c

   246	
   247	/**
   248	 * intel_guc_slpc_max_freq_set() - Set max frequency limit for SLPC.
   249	 * @slpc: pointer to intel_guc_slpc.
   250	 * @val: encoded frequency
   251	 *
   252	 * This function will invoke GuC SLPC action to update the max frequency
   253	 * limit for slice and unslice.
   254	 *
   255	 * Return: 0 on success, non-zero error code on failure.
   256	 */
   257	int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val)
 > 258	{
   259		int ret;
   260		struct drm_i915_private *i915 = slpc_to_i915(slpc);
   261		intel_wakeref_t wakeref;
   262	
   263		wakeref = intel_runtime_pm_get(&i915->runtime_pm);
   264	
   265		ret = slpc_set_param(slpc,
   266			       SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ,
   267			       val);
   268	
   269		if (ret) {
   270			drm_err(&i915->drm,
   271				"Set max frequency unslice returned %d", ret);
   272			ret = -EIO;
   273			goto done;
   274		}
   275	
   276	done:
   277		intel_runtime_pm_put(&i915->runtime_pm, wakeref);
   278		return ret;
   279	}
   280	
   281	/**
   282	 * intel_guc_slpc_min_freq_set() - Set min frequency limit for SLPC.
   283	 * @slpc: pointer to intel_guc_slpc.
   284	 * @val: encoded frequency
   285	 *
   286	 * This function will invoke GuC SLPC action to update the min frequency
   287	 * limit.
   288	 *
   289	 * Return: 0 on success, non-zero error code on failure.
   290	 */
   291	int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
 > 292	{
   293		int ret;
   294		struct intel_guc *guc = slpc_to_guc(slpc);
   295		struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
   296		intel_wakeref_t wakeref;
   297	
   298		wakeref = intel_runtime_pm_get(&i915->runtime_pm);
   299	
   300		ret = slpc_set_param(slpc,
   301			       SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
   302			       val);
   303		if (ret) {
   304			drm_err(&i915->drm,
   305				"Set min frequency for unslice returned %d", ret);
   306			ret = -EIO;
   307			goto done;
   308		}
   309	
   310	done:
   311		intel_runtime_pm_put(&i915->runtime_pm, wakeref);
   312		return ret;
   313	}
   314	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 52225 bytes --]

[-- Attachment #3: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc Vinay Belgaumkar
@ 2021-07-10  6:18   ` kernel test robot
  2021-07-10  7:30   ` kernel test robot
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 53+ messages in thread
From: kernel test robot @ 2021-07-10  6:18 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel; +Cc: clang-built-linux, kbuild-all

[-- Attachment #1: Type: text/plain, Size: 3860 bytes --]

Hi Vinay,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on drm-tip/drm-tip]
[cannot apply to drm-intel/for-linux-next drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next drm/drm-next v5.13 next-20210709]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Vinay-Belgaumkar/Enable-GuC-based-power-management-features/20210710-092520
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: x86_64-randconfig-a014-20210709 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 8d69635ed9ecf36fd0ca85906bfde17949671cbe)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # https://github.com/0day-ci/linux/commit/8388422991b4e0e4da460328634a7ec1d278de6a
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Vinay-Belgaumkar/Enable-GuC-based-power-management-features/20210710-092520
        git checkout 8388422991b4e0e4da460328634a7ec1d278de6a
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/i915/gt/intel_rps.c:1969:5: warning: no previous prototype for function 'intel_rps_read_punit_req' [-Wmissing-prototypes]
   u32 intel_rps_read_punit_req(struct intel_rps *rps)
       ^
   drivers/gpu/drm/i915/gt/intel_rps.c:1969:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
   u32 intel_rps_read_punit_req(struct intel_rps *rps)
   ^
   static 
>> drivers/gpu/drm/i915/gt/intel_rps.c:1978:5: warning: no previous prototype for function 'intel_rps_get_req' [-Wmissing-prototypes]
   u32 intel_rps_get_req(struct intel_rps *rps, u32 pureq)
       ^
   drivers/gpu/drm/i915/gt/intel_rps.c:1978:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
   u32 intel_rps_get_req(struct intel_rps *rps, u32 pureq)
   ^
   static 
>> drivers/gpu/drm/i915/gt/intel_rps.c:1985:5: warning: no previous prototype for function 'intel_rps_read_punit_req_frequency' [-Wmissing-prototypes]
   u32 intel_rps_read_punit_req_frequency(struct intel_rps *rps)
       ^
   drivers/gpu/drm/i915/gt/intel_rps.c:1985:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
   u32 intel_rps_read_punit_req_frequency(struct intel_rps *rps)
   ^
   static 
   3 warnings generated.


vim +/intel_rps_read_punit_req +1969 drivers/gpu/drm/i915/gt/intel_rps.c

  1968	
> 1969	u32 intel_rps_read_punit_req(struct intel_rps *rps)
  1970	{
  1971		struct intel_uncore *uncore = rps_to_uncore(rps);
  1972	
  1973		u32 pureq = intel_uncore_read(uncore, GEN6_RPNSWREQ);
  1974	
  1975		return pureq;
  1976	}
  1977	
> 1978	u32 intel_rps_get_req(struct intel_rps *rps, u32 pureq)
  1979	{
  1980		u32 req = pureq >> GEN9_SW_REQ_UNSLICE_RATIO_SHIFT;
  1981	
  1982		return req;
  1983	}
  1984	
> 1985	u32 intel_rps_read_punit_req_frequency(struct intel_rps *rps)
  1986	{
  1987		u32 freq = intel_rps_get_req(rps, intel_rps_read_punit_req(rps));
  1988	
  1989		return intel_gpu_freq(rps, freq);
  1990	}
  1991	
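
For reference, the follow-up RFC patch later in this thread makes the first two
helpers static; intel_rps_read_punit_req_frequency() is presumably meant to be
called from the new SLPC sysfs code, so the remaining warning would be silenced
by declaring it in a header. A minimal sketch, assuming intel_rps.h is the
intended home for the declaration:

	/* drivers/gpu/drm/i915/gt/intel_rps.h (sketch, not part of the posted patch) */
	u32 intel_rps_read_punit_req_frequency(struct intel_rps *rps);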

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 52225 bytes --]

[-- Attachment #3: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc Vinay Belgaumkar
  2021-07-10  6:18   ` kernel test robot
@ 2021-07-10  7:30   ` kernel test robot
  2021-07-10  7:30   ` [Intel-gfx] [RFC PATCH] drm/i915/guc/slpc: intel_rps_read_punit_req() can be static kernel test robot
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 53+ messages in thread
From: kernel test robot @ 2021-07-10  7:30 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel; +Cc: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 1786 bytes --]

Hi Vinay,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on drm-tip/drm-tip]
[cannot apply to drm-intel/for-linux-next drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next drm/drm-next v5.13 next-20210709]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Vinay-Belgaumkar/Enable-GuC-based-power-management-features/20210710-092520
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: x86_64-randconfig-s021-20210709 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.3-341-g8af24329-dirty
        # https://github.com/0day-ci/linux/commit/8388422991b4e0e4da460328634a7ec1d278de6a
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Vinay-Belgaumkar/Enable-GuC-based-power-management-features/20210710-092520
        git checkout 8388422991b4e0e4da460328634a7ec1d278de6a
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=x86_64 SHELL=/bin/bash drivers/gpu/drm/i915/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)
>> drivers/gpu/drm/i915/gt/intel_rps.c:1978:5: sparse: sparse: symbol 'intel_rps_get_req' was not declared. Should it be static?

Please review and possibly fold the followup patch.

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 46082 bytes --]

[-- Attachment #3: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [Intel-gfx] [RFC PATCH] drm/i915/guc/slpc: intel_rps_read_punit_req() can be static
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc Vinay Belgaumkar
  2021-07-10  6:18   ` kernel test robot
  2021-07-10  7:30   ` kernel test robot
@ 2021-07-10  7:30   ` kernel test robot
  2021-07-10 13:54   ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc kernel test robot
  2021-07-10 18:20   ` Michal Wajdeczko
  4 siblings, 0 replies; 53+ messages in thread
From: kernel test robot @ 2021-07-10  7:30 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel; +Cc: kbuild-all

drivers/gpu/drm/i915/gt/intel_rps.c:1969:5: warning: symbol 'intel_rps_read_punit_req' was not declared. Should it be static?
drivers/gpu/drm/i915/gt/intel_rps.c:1978:5: warning: symbol 'intel_rps_get_req' was not declared. Should it be static?

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: kernel test robot <lkp@intel.com>
---
 intel_rps.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index 88ffc5d90730a..a78af6b0babea 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -1966,7 +1966,7 @@ u32 intel_rps_read_actual_frequency(struct intel_rps *rps)
 	return freq;
 }
 
-u32 intel_rps_read_punit_req(struct intel_rps *rps)
+static u32 intel_rps_read_punit_req(struct intel_rps *rps)
 {
 	struct intel_uncore *uncore = rps_to_uncore(rps);
 
@@ -1975,7 +1975,7 @@ u32 intel_rps_read_punit_req(struct intel_rps *rps)
 	return pureq;
 }
 
-u32 intel_rps_get_req(struct intel_rps *rps, u32 pureq)
+static u32 intel_rps_get_req(struct intel_rps *rps, u32 pureq)
 {
 	u32 req = pureq >> GEN9_SW_REQ_UNSLICE_RATIO_SHIFT;
 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc Vinay Belgaumkar
                     ` (2 preceding siblings ...)
  2021-07-10  7:30   ` [Intel-gfx] [RFC PATCH] drm/i915/guc/slpc: intel_rps_read_punit_req() can be static kernel test robot
@ 2021-07-10 13:54   ` kernel test robot
  2021-07-10 18:20   ` Michal Wajdeczko
  4 siblings, 0 replies; 53+ messages in thread
From: kernel test robot @ 2021-07-10 13:54 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel; +Cc: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 2888 bytes --]

Hi Vinay,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on drm-tip/drm-tip]
[cannot apply to drm-intel/for-linux-next drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next drm/drm-next v5.13 next-20210709]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Vinay-Belgaumkar/Enable-GuC-based-power-management-features/20210710-092520
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: x86_64-randconfig-a004-20210709 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/8388422991b4e0e4da460328634a7ec1d278de6a
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Vinay-Belgaumkar/Enable-GuC-based-power-management-features/20210710-092520
        git checkout 8388422991b4e0e4da460328634a7ec1d278de6a
        # save the attached .config to linux build tree
        make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/i915/gt/intel_rps.c:1969:5: warning: no previous prototype for 'intel_rps_read_punit_req' [-Wmissing-prototypes]
    1969 | u32 intel_rps_read_punit_req(struct intel_rps *rps)
         |     ^~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/gpu/drm/i915/gt/intel_rps.c:1978:5: warning: no previous prototype for 'intel_rps_get_req' [-Wmissing-prototypes]
    1978 | u32 intel_rps_get_req(struct intel_rps *rps, u32 pureq)
         |     ^~~~~~~~~~~~~~~~~
>> drivers/gpu/drm/i915/gt/intel_rps.c:1985:5: warning: no previous prototype for 'intel_rps_read_punit_req_frequency' [-Wmissing-prototypes]
    1985 | u32 intel_rps_read_punit_req_frequency(struct intel_rps *rps)
         |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


vim +/intel_rps_read_punit_req +1969 drivers/gpu/drm/i915/gt/intel_rps.c

  1968	
> 1969	u32 intel_rps_read_punit_req(struct intel_rps *rps)
  1970	{
  1971		struct intel_uncore *uncore = rps_to_uncore(rps);
  1972	
  1973		u32 pureq = intel_uncore_read(uncore, GEN6_RPNSWREQ);
  1974	
  1975		return pureq;
  1976	}
  1977	
> 1978	u32 intel_rps_get_req(struct intel_rps *rps, u32 pureq)
  1979	{
  1980		u32 req = pureq >> GEN9_SW_REQ_UNSLICE_RATIO_SHIFT;
  1981	
  1982		return req;
  1983	}
  1984	
> 1985	u32 intel_rps_read_punit_req_frequency(struct intel_rps *rps)
  1986	{
  1987		u32 freq = intel_rps_get_req(rps, intel_rps_read_punit_req(rps));
  1988	
  1989		return intel_gpu_freq(rps, freq);
  1990	}
  1991	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 41972 bytes --]

[-- Attachment #3: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 02/16] drm/i915/guc/slpc: Initial definitions for slpc
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 02/16] drm/i915/guc/slpc: Initial definitions for slpc Vinay Belgaumkar
@ 2021-07-10 14:27   ` Michal Wajdeczko
  2021-07-12 18:40     ` Belgaumkar, Vinay
  2021-07-12 23:43     ` Belgaumkar, Vinay
  0 siblings, 2 replies; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 14:27 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel

Hi Vinay,

On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> Add macros to check for slpc support. This feature is currently supported
> for gen12+ and enabled whenever guc submission is enabled/selected.

please try to use consistent names across all patches:

s/slpc/SLPC
s/gen12/Gen12
s/guc/GuC

> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/uc/intel_guc.c        |  1 +
>  drivers/gpu/drm/i915/gt/uc/intel_guc.h        |  2 ++
>  .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 21 +++++++++++++++++++
>  .../gpu/drm/i915/gt/uc/intel_guc_submission.h | 16 ++++++++++++++
>  drivers/gpu/drm/i915/gt/uc/intel_uc.c         |  6 ++++--
>  drivers/gpu/drm/i915/gt/uc/intel_uc.h         |  1 +
>  6 files changed, 45 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> index 979128e28372..b9a809f2d221 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> @@ -157,6 +157,7 @@ void intel_guc_init_early(struct intel_guc *guc)
>  	intel_guc_ct_init_early(&guc->ct);
>  	intel_guc_log_init_early(&guc->log);
>  	intel_guc_submission_init_early(guc);
> +	intel_guc_slpc_init_early(guc);
>  
>  	mutex_init(&guc->send_mutex);
>  	spin_lock_init(&guc->irq_lock);
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> index 5d94cf482516..e5a456918b88 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> @@ -57,6 +57,8 @@ struct intel_guc {
>  
>  	bool submission_supported;
>  	bool submission_selected;
> +	bool slpc_supported;
> +	bool slpc_selected;
>  
>  	struct i915_vma *ads_vma;
>  	struct __guc_ads_blob *ads_blob;
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index 9c102bf0c8e3..e2644a05f298 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -2351,6 +2351,27 @@ void intel_guc_submission_init_early(struct intel_guc *guc)
>  	guc->submission_selected = __guc_submission_selected(guc);
>  }
>  
> +static bool __guc_slpc_supported(struct intel_guc *guc)

hmm, easy to confuse with intel_guc_slpc_is_supported, so maybe:

__detect_slpc_supported()

(yes, I know you were following code above)

> +{
> +	/* GuC slpc is unavailable for pre-Gen12 */

s/slpc/SLPC

> +	return guc->submission_supported &&
> +		GRAPHICS_VER(guc_to_gt(guc)->i915) >= 12;
> +}
> +
> +static bool __guc_slpc_selected(struct intel_guc *guc)
> +{
> +	if (!intel_guc_slpc_is_supported(guc))
> +		return false;
> +
> +	return guc->submission_selected;
> +}
> +
> +void intel_guc_slpc_init_early(struct intel_guc *guc)
> +{
> +	guc->slpc_supported = __guc_slpc_supported(guc);
> +	guc->slpc_selected = __guc_slpc_selected(guc);
> +}

in patch 4/16 you are introducing intel_guc_slpc.c|h, so to keep proper
encapsulation it would be better to define this function as

void intel_guc_slpc_init_early(struct intel_guc_slpc *slpc) { }

and move it to intel_guc_slpc.c
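
for illustration, a minimal sketch of that shape (assuming a slpc_to_guc()
helper like the one used later in the series, and that the supported/selected
flags also move into struct intel_guc_slpc; __detect_slpc_selected() is a
made-up name mirroring the suggestion above):

	void intel_guc_slpc_init_early(struct intel_guc_slpc *slpc)
	{
		struct intel_guc *guc = slpc_to_guc(slpc);

		/* note: the flag names and __detect_slpc_selected() are illustrative only */
		/* cache HW support and modparam selection once, early in driver init */
		slpc->supported = __detect_slpc_supported(guc);
		slpc->selected = __detect_slpc_selected(guc);
	}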

> +
>  static inline struct intel_context *
>  g2h_context_lookup(struct intel_guc *guc, u32 desc_idx)
>  {
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> index be767eb6ff71..7ae5fd052faf 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> @@ -13,6 +13,7 @@
>  struct drm_printer;
>  struct intel_engine_cs;
>  
> +void intel_guc_slpc_init_early(struct intel_guc *guc);

it really does not belong to this .h

>  void intel_guc_submission_init_early(struct intel_guc *guc);
>  int intel_guc_submission_init(struct intel_guc *guc);
>  void intel_guc_submission_enable(struct intel_guc *guc);
> @@ -50,4 +51,19 @@ static inline bool intel_guc_submission_is_used(struct intel_guc *guc)
>  	return intel_guc_is_used(guc) && intel_guc_submission_is_wanted(guc);
>  }
>  
> +static inline bool intel_guc_slpc_is_supported(struct intel_guc *guc)
> +{
> +	return guc->slpc_supported;
> +}
> +
> +static inline bool intel_guc_slpc_is_wanted(struct intel_guc *guc)
> +{
> +	return guc->slpc_selected;
> +}
> +
> +static inline bool intel_guc_slpc_is_used(struct intel_guc *guc)
> +{
> +	return intel_guc_submission_is_used(guc) && intel_guc_slpc_is_wanted(guc);
> +}

did you try to define them in intel_guc_slpc.h ?

note that, to avoid circular dependencies, you can define the slpc struct in
intel_guc_slpc_types.h and then:

in intel_guc.h:
	#include "intel_guc_slpc_types.h" instead of intel_guc_slpc.h

in intel_guc_slpc.h:
	#include "intel_guc.h"
	#include "intel_guc_slpc_types.h"
	#include "intel_guc_submission.h"

> +
>  #endif
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> index 61be0aa81492..dca5f6d0641b 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> @@ -76,16 +76,18 @@ static void __confirm_options(struct intel_uc *uc)
>  	struct drm_i915_private *i915 = uc_to_gt(uc)->i915;
>  
>  	drm_dbg(&i915->drm,
> -		"enable_guc=%d (guc:%s submission:%s huc:%s)\n",
> +		"enable_guc=%d (guc:%s submission:%s huc:%s slpc:%s)\n",
>  		i915->params.enable_guc,
>  		yesno(intel_uc_wants_guc(uc)),
>  		yesno(intel_uc_wants_guc_submission(uc)),
> -		yesno(intel_uc_wants_huc(uc)));
> +		yesno(intel_uc_wants_huc(uc)),
> +		yesno(intel_uc_wants_guc_slpc(uc)));
>  
>  	if (i915->params.enable_guc == 0) {
>  		GEM_BUG_ON(intel_uc_wants_guc(uc));
>  		GEM_BUG_ON(intel_uc_wants_guc_submission(uc));
>  		GEM_BUG_ON(intel_uc_wants_huc(uc));
> +		GEM_BUG_ON(intel_uc_wants_guc_slpc(uc));
>  		return;
>  	}
>  
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
> index e2da2b6e76e1..38e465fd8a0c 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
> @@ -83,6 +83,7 @@ __uc_state_checker(x, func, uses, used)
>  uc_state_checkers(guc, guc);
>  uc_state_checkers(huc, huc);
>  uc_state_checkers(guc, guc_submission);
> +uc_state_checkers(guc, guc_slpc);
>  
>  #undef uc_state_checkers
>  #undef __uc_state_checker
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 04/16] drm/i915/guc/slpc: Lay out slpc init/enable/disable/fini
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 04/16] drm/i915/guc/slpc: Lay out slpc init/enable/disable/fini Vinay Belgaumkar
@ 2021-07-10 14:35   ` Michal Wajdeczko
  2021-07-13  0:37     ` Belgaumkar, Vinay
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 14:35 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> Declare header and source files for SLPC, along with init and
> enable/disable function templates.

later you claim that "disable" is not needed

> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
> ---
>  drivers/gpu/drm/i915/Makefile               |  1 +
>  drivers/gpu/drm/i915/gt/uc/intel_guc.h      |  2 ++
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 34 +++++++++++++++++++++
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h | 16 ++++++++++
>  4 files changed, 53 insertions(+)
>  create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>  create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> 
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index ab7679957623..d8eac4468df9 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -186,6 +186,7 @@ i915-y += gt/uc/intel_uc.o \
>  	  gt/uc/intel_guc_fw.o \
>  	  gt/uc/intel_guc_log.o \
>  	  gt/uc/intel_guc_log_debugfs.o \
> +	  gt/uc/intel_guc_slpc.o \
>  	  gt/uc/intel_guc_submission.o \
>  	  gt/uc/intel_huc.o \
>  	  gt/uc/intel_huc_debugfs.o \
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> index e5a456918b88..0dbbd9cf553f 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> @@ -15,6 +15,7 @@
>  #include "intel_guc_ct.h"
>  #include "intel_guc_log.h"
>  #include "intel_guc_reg.h"
> +#include "intel_guc_slpc.h"
>  #include "intel_uc_fw.h"
>  #include "i915_utils.h"
>  #include "i915_vma.h"
> @@ -30,6 +31,7 @@ struct intel_guc {
>  	struct intel_uc_fw fw;
>  	struct intel_guc_log log;
>  	struct intel_guc_ct ct;
> +	struct intel_guc_slpc slpc;
>  
>  	/* Global engine used to submit requests to GuC */
>  	struct i915_sched_engine *sched_engine;
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> new file mode 100644
> index 000000000000..c1f569d2300d
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> @@ -0,0 +1,34 @@
> +/*
> + * SPDX-License-Identifier: MIT

SPDX tag shall be on the very first line, for .c:

// SPDX-License-Identifier: MIT

> + *
> + * Copyright © 2020 Intel Corporation

2021

> + */
> +
> +#include "intel_guc_slpc.h"
> +
> +int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
> +{
> +	return 0;
> +}
> +
> +/*
> + * intel_guc_slpc_enable() - Start SLPC
> + * @slpc: pointer to intel_guc_slpc.
> + *
> + * SLPC is enabled by setting up the shared data structure and
> + * sending reset event to GuC SLPC. Initial data is setup in
> + * intel_guc_slpc_init. Here we send the reset event. We do
> + * not currently need a slpc_disable since this is taken care
> + * of automatically when a reset/suspend occurs and the guc

s/guc/GuC

> + * channels are destroyed.

you mean CTB ?

> + *
> + * Return: 0 on success, non-zero error code on failure.
> + */
> +int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
> +{
> +	return 0;
> +}
> +
> +void intel_guc_slpc_fini(struct intel_guc_slpc *slpc)
> +{
> +}
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> new file mode 100644
> index 000000000000..74fd86769163
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> @@ -0,0 +1,16 @@
> +/*
> + * SPDX-License-Identifier: MIT

SPDX tag shall be on the very first line, for .h:

/* SPDX-License-Identifier: MIT */

> + *
> + * Copyright © 2020 Intel Corporation

2021

> + */
> +#ifndef _INTEL_GUC_SLPC_H_
> +#define _INTEL_GUC_SLPC_H_
> +
> +struct intel_guc_slpc {
> +};

move all data definitions to intel_guc_slpc_types.h and include it here

> +
> +int intel_guc_slpc_init(struct intel_guc_slpc *slpc);
> +int intel_guc_slpc_enable(struct intel_guc_slpc *slpc);
> +void intel_guc_slpc_fini(struct intel_guc_slpc *slpc);
> +
> +#endif
> 

and as suggested in the comment on 2/14, you should likely move this patch to
the front of the series

Michal

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 05/16] drm/i915/guc/slpc: Adding slpc communication interfaces
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 05/16] drm/i915/guc/slpc: Adding slpc communication interfaces Vinay Belgaumkar
@ 2021-07-10 15:52   ` Michal Wajdeczko
  2021-07-13 23:22     ` Belgaumkar, Vinay
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 15:52 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> Replicate the SLPC header file in GuC for the most part. There are

what do you mean by "replicate" here?

> some SLPC mode based parameters which haven't been included since
> we are not using them.
> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/uc/intel_guc.c        |   4 +
>  drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h   |   2 +
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h   |   2 +
>  .../gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h  | 255 ++++++++++++++++++
>  4 files changed, 263 insertions(+)
>  create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> index b9a809f2d221..9d61b2d54de4 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> @@ -202,11 +202,15 @@ static u32 guc_ctl_debug_flags(struct intel_guc *guc)
>  
>  static u32 guc_ctl_feature_flags(struct intel_guc *guc)
>  {
> +	struct intel_gt *gt = guc_to_gt(guc);
>  	u32 flags = 0;
>  
>  	if (!intel_guc_submission_is_used(guc))
>  		flags |= GUC_CTL_DISABLE_SCHEDULER;
>  
> +	if (intel_uc_uses_guc_slpc(&gt->uc))
> +		flags |= GUC_CTL_ENABLE_SLPC;
> +
>  	return flags;
>  }
>  
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
> index 94bb1ca6f889..19e2504d7a36 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
> @@ -114,6 +114,8 @@
>  #define   GUC_ADS_ADDR_SHIFT		1
>  #define   GUC_ADS_ADDR_MASK		(0xFFFFF << GUC_ADS_ADDR_SHIFT)
>  
> +#define GUC_CTL_ENABLE_SLPC            BIT(2)

this should be defined closer to GUC_CTL_FEATURE

> +
>  #define GUC_CTL_MAX_DWORDS		(SOFT_SCRATCH_COUNT - 2) /* [1..14] */
>  
>  /* Generic GT SysInfo data types */
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> index 74fd86769163..98036459a1a3 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> @@ -6,6 +6,8 @@
>  #ifndef _INTEL_GUC_SLPC_H_
>  #define _INTEL_GUC_SLPC_H_
>  
> +#include "intel_guc_slpc_fwif.h"

doesn't seem to be needed right now

> +
>  struct intel_guc_slpc {
>  };
>  
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h
> new file mode 100644
> index 000000000000..2a5e71428374
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h

I've started to move all pure ABI definitions to files in the abi/ folder,
leaving only our next-level helpers/wrappers in guc_fwif.h.

Can you move these SLPC definitions there too ? maybe as a dedicated:

	abi/guc_slpc_abi.h

> @@ -0,0 +1,255 @@
> +/*
> + * SPDX-License-Identifier: MIT

use proper format

> + *
> + * Copyright © 2020 Intel Corporation

2021

> + */
> +#ifndef _INTEL_GUC_SLPC_FWIF_H_
> +#define _INTEL_GUC_SLPC_FWIF_H_
> +
> +#include <linux/types.h>
> +
> +/* This file replicates the header in GuC code for handling SLPC related
> + * data structures and sizes
> + */

use proper format for multi-line comments:

	/*
	 * blah blah
	 * blah blah
	 */

> +
> +/* SLPC exposes certain parameters for global configuration by the host.
> + * These are referred to as override parameters, because in most cases
> + * the host will not need to modify the default values used by SLPC.
> + * SLPC remembers the default values which allows the host to easily restore
> + * them by simply unsetting the override. The host can set or unset override
> + * parameters during SLPC (re-)initialization using the SLPC Reset event.
> + * The host can also set or unset override parameters on the fly using the
> + * Parameter Set and Parameter Unset events
> + */
> +#define SLPC_MAX_OVERRIDE_PARAMETERS	256
> +#define SLPC_OVERRIDE_BITFIELD_SIZE \
> +		(SLPC_MAX_OVERRIDE_PARAMETERS / 32)
> +
> +#define SLPC_PAGE_SIZE_BYTES			4096
> +#define SLPC_CACHELINE_SIZE_BYTES		64
> +#define SLPC_SHARE_DATA_SIZE_BYTE_HEADER	SLPC_CACHELINE_SIZE_BYTES
> +#define SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO	SLPC_CACHELINE_SIZE_BYTES
> +#define SLPC_SHARE_DATA_SIZE_BYTE_TASK_STATE	SLPC_CACHELINE_SIZE_BYTES
> +#define SLPC_SHARE_DATA_MODE_DEFN_TABLE_SIZE	SLPC_PAGE_SIZE_BYTES

can you put a simple diagram that describes this layout ?
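
something along these lines perhaps, going only by the sizes above and the
order of the blocks in struct slpc_shared_data below (the offsets are mine,
not from the patch):

	0x0000  header            1 cacheline (64 B)
	0x0040  platform_info     1 cacheline (64 B)
	0x0080  task_state_data   1 cacheline (64 B)
	0x00c0  override params   bitfield + values, cacheline aligned (1088 B)
	 ...    "other"           whatever remains of the 2 pages
	 ...    mode defn table   1 page (4096 B)
	total = SLPC_SHARE_DATA_SIZE_BYTE_MAX = 2 pages (8192 B)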

> +
> +#define SLPC_SHARE_DATA_SIZE_BYTE_MAX		(2 * SLPC_PAGE_SIZE_BYTES)
> +
> +/* Cacheline size aligned (Total size needed for
> + * SLPM_KMD_MAX_OVERRIDE_PARAMETERS=256 is 1088 bytes)
> + */
> +#define SLPC_SHARE_DATA_SIZE_BYTE_PARAM		(((((SLPC_MAX_OVERRIDE_PARAMETERS * 4) \
> +						+ ((SLPC_MAX_OVERRIDE_PARAMETERS / 32) * 4)) \
> +		+ (SLPC_CACHELINE_SIZE_BYTES-1)) / SLPC_CACHELINE_SIZE_BYTES)*SLPC_CACHELINE_SIZE_BYTES)
> +
> +#define SLPC_SHARE_DATA_SIZE_BYTE_OTHER		(SLPC_SHARE_DATA_SIZE_BYTE_MAX - \
> +					(SLPC_SHARE_DATA_SIZE_BYTE_HEADER \
> +					+ SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO \
> +					+ SLPC_SHARE_DATA_SIZE_BYTE_TASK_STATE \
> +					+ SLPC_SHARE_DATA_SIZE_BYTE_PARAM \
> +					+ SLPC_SHARE_DATA_MODE_DEFN_TABLE_SIZE))
> +
> +#define SLPC_EVENT(id, argc)			((u32)(id) << 8 | (argc))
> +
> +#define SLPC_PARAM_TASK_DEFAULT			0
> +#define SLPC_PARAM_TASK_ENABLED			1
> +#define SLPC_PARAM_TASK_DISABLED		2
> +#define SLPC_PARAM_TASK_UNKNOWN			3

many values below are defined as enums, so why are these ones #defines ?

and is there any relation to the ones defined below (they look similar)?

 +	SLPC_PARAM_TASK_ENABLE_GTPERF = 0,
 +	SLPC_PARAM_TASK_DISABLE_GTPERF = 1,
 +	SLPC_PARAM_TASK_ENABLE_BALANCER = 2,
 +	SLPC_PARAM_TASK_DISABLE_BALANCER = 3,
 +	SLPC_PARAM_TASK_ENABLE_DCC = 4,
 +	SLPC_PARAM_TASK_DISABLE_DCC = 5,

> +
> +enum slpc_status {
> +	SLPC_STATUS_OK = 0,
> +	SLPC_STATUS_ERROR = 1,
> +	SLPC_STATUS_ILLEGAL_COMMAND = 2,
> +	SLPC_STATUS_INVALID_ARGS = 3,
> +	SLPC_STATUS_INVALID_PARAMS = 4,
> +	SLPC_STATUS_INVALID_DATA = 5,
> +	SLPC_STATUS_OUT_OF_RANGE = 6,
> +	SLPC_STATUS_NOT_SUPPORTED = 7,
> +	SLPC_STATUS_NOT_IMPLEMENTED = 8,
> +	SLPC_STATUS_NO_DATA = 9,
> +	SLPC_STATUS_EVENT_NOT_REGISTERED = 10,
> +	SLPC_STATUS_REGISTER_LOCKED = 11,
> +	SLPC_STATUS_TEMPORARILY_UNAVAILABLE = 12,
> +	SLPC_STATUS_VALUE_ALREADY_SET = 13,
> +	SLPC_STATUS_VALUE_ALREADY_UNSET = 14,
> +	SLPC_STATUS_VALUE_NOT_CHANGED = 15,
> +	SLPC_STATUS_MEMIO_ERROR = 16,
> +	SLPC_STATUS_EVENT_QUEUED_REQ_DPC = 17,
> +	SLPC_STATUS_EVENT_QUEUED_NOREQ_DPC = 18,
> +	SLPC_STATUS_NO_EVENT_QUEUED = 19,
> +	SLPC_STATUS_OUT_OF_SPACE = 20,
> +	SLPC_STATUS_TIMEOUT = 21,
> +	SLPC_STATUS_NO_LOCK = 22,
> +	SLPC_STATUS_MAX
> +};
> +
> +enum slpc_event_id {
> +	SLPC_EVENT_RESET = 0,
> +	SLPC_EVENT_SHUTDOWN = 1,
> +	SLPC_EVENT_PLATFORM_INFO_CHANGE = 2,
> +	SLPC_EVENT_DISPLAY_MODE_CHANGE = 3,
> +	SLPC_EVENT_FLIP_COMPLETE = 4,
> +	SLPC_EVENT_QUERY_TASK_STATE = 5,
> +	SLPC_EVENT_PARAMETER_SET = 6,
> +	SLPC_EVENT_PARAMETER_UNSET = 7,
> +};
> +
> +enum slpc_param_id {
> +	SLPC_PARAM_TASK_ENABLE_GTPERF = 0,
> +	SLPC_PARAM_TASK_DISABLE_GTPERF = 1,
> +	SLPC_PARAM_TASK_ENABLE_BALANCER = 2,
> +	SLPC_PARAM_TASK_DISABLE_BALANCER = 3,
> +	SLPC_PARAM_TASK_ENABLE_DCC = 4,
> +	SLPC_PARAM_TASK_DISABLE_DCC = 5,
> +	SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ = 6,
> +	SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ = 7,
> +	SLPC_PARAM_GLOBAL_MIN_GT_SLICE_FREQ_MHZ = 8,
> +	SLPC_PARAM_GLOBAL_MAX_GT_SLICE_FREQ_MHZ = 9,
> +	SLPC_PARAM_GTPERF_THRESHOLD_MAX_FPS = 10,
> +	SLPC_PARAM_GLOBAL_DISABLE_GT_FREQ_MANAGEMENT = 11,
> +	SLPC_PARAM_GTPERF_ENABLE_FRAMERATE_STALLING = 12,
> +	SLPC_PARAM_GLOBAL_DISABLE_RC6_MODE_CHANGE = 13,
> +	SLPC_PARAM_GLOBAL_OC_UNSLICE_FREQ_MHZ = 14,
> +	SLPC_PARAM_GLOBAL_OC_SLICE_FREQ_MHZ = 15,
> +	SLPC_PARAM_GLOBAL_ENABLE_IA_GT_BALANCING = 16,
> +	SLPC_PARAM_GLOBAL_ENABLE_ADAPTIVE_BURST_TURBO = 17,
> +	SLPC_PARAM_GLOBAL_ENABLE_EVAL_MODE = 18,
> +	SLPC_PARAM_GLOBAL_ENABLE_BALANCER_IN_NON_GAMING_MODE = 19,
> +	SLPC_PARAM_GLOBAL_RT_MODE_TURBO_FREQ_DELTA_MHZ = 20,
> +	SLPC_PARAM_PWRGATE_RC_MODE = 21,
> +	SLPC_PARAM_EDR_MODE_COMPUTE_TIMEOUT_MS = 22,
> +	SLPC_PARAM_EDR_QOS_FREQ_MHZ = 23,
> +	SLPC_PARAM_MEDIA_FF_RATIO_MODE = 24,
> +	SLPC_PARAM_ENABLE_IA_FREQ_LIMITING = 25,
> +	SLPC_PARAM_STRATEGIES = 26,
> +	SLPC_PARAM_POWER_PROFILE = 27,
> +	SLPC_IGNORE_EFFICIENT_FREQUENCY = 28,

no PARAM tag inside this enum name

> +	SLPC_MAX_PARAM = 32,

can we move this out of the enum, maybe as a standalone #define ?
or remove it, as it doesn't seem to be useful at all

> +};
> +
> +enum slpc_global_state {
> +	SLPC_GLOBAL_STATE_NOT_RUNNING = 0,
> +	SLPC_GLOBAL_STATE_INITIALIZING = 1,
> +	SLPC_GLOBAL_STATE_RESETTING = 2,
> +	SLPC_GLOBAL_STATE_RUNNING = 3,
> +	SLPC_GLOBAL_STATE_SHUTTING_DOWN = 4,
> +	SLPC_GLOBAL_STATE_ERROR = 5
> +};
> +
> +enum slpc_platform_sku {
> +	SLPC_PLATFORM_SKU_UNDEFINED = 0,
> +	SLPC_PLATFORM_SKU_ULX = 1,
> +	SLPC_PLATFORM_SKU_ULT = 2,
> +	SLPC_PLATFORM_SKU_T = 3,
> +	SLPC_PLATFORM_SKU_MOBL = 4,
> +	SLPC_PLATFORM_SKU_DT = 5,
> +	SLPC_PLATFORM_SKU_UNKNOWN = 6,
> +};
> +
> +struct slpc_platform_info {
> +	union {
> +		u32 sku;  /**< SKU info */
> +		struct {
> +			u32 reserved:8;
> +			u32 fused_slice_count:8;
> +			u32 reserved1:16;
> +		};
> +	};
> +        union
> +	{
> +		u32 bitfield2;       /**< IA capability info*/
> +		struct {
> +			u32 max_p0_freq_bins:8;
> +			u32 p1_freq_bins:8;
> +			u32 pe_freq_bins:8;
> +			u32 pn_freq_bins:8;
> +		};
> +	};
> +	u32 reserved2[2];
> +} __packed;

I'm not a big fan of using C bitfields for interface definitions

can we switch to regular #defines and use FIELD_GET|PREP ?
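
e.g. something along these lines (just a sketch, the mask name is made up
here, needs <linux/bitfield.h>, and "info" is assumed to point at the
platform_info data):

	#define SLPC_PLATFORM_INFO_SLICE_COUNT	GENMASK(15, 8)

	u8 slices = FIELD_GET(SLPC_PLATFORM_INFO_SLICE_COUNT, info->sku);
	info->sku |= FIELD_PREP(SLPC_PLATFORM_INFO_SLICE_COUNT, slices);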

> +
> +struct slpc_task_state_data {
> +	union {
> +		u32 bitfield1;
> +		struct {
> +			u32 gtperf_task_active:1;
> +			u32 gtperf_stall_possible:1;
> +			u32 gtperf_gaming_mode:1;
> +			u32 gtperf_target_fps:8;
> +			u32 dcc_task_active:1;
> +			u32 in_dcc:1;
> +			u32 in_dct:1;
> +			u32 freq_switch_active:1;
> +			u32 ibc_enabled:1;
> +			u32 ibc_active:1;
> +			u32 pg1_enabled:1;
> +			u32 pg1_active:1;
> +		};
> +	};
> +	union {
> +		u32 bitfield2;
> +		struct {
> +			u32 max_unslice_freq:8;
> +			u32 min_unslice_freq:8;
> +			u32 max_slice_freq:8;
> +			u32 min_slice_freq:8;
> +		};
> +	};
> +} __packed;
> +
> +struct slpc_shared_data {
> +	union {
> +		struct {
> +			/* Total size in bytes of this buffer. */
> +			u32 shared_data_size;
> +			u32 global_state;
> +			u32 display_data_addr;
> +		};

all the structs below are named, why not this one ?

> +		unsigned char reserved_header[SLPC_SHARE_DATA_SIZE_BYTE_HEADER];

this could be just "u8"

and I assume all these "reserved" are in fact padding, no ?

> +	};
> +
> +	union {
> +		struct slpc_platform_info platform_info;
> +		unsigned char reserved_platform[SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO];
> +	};

maybe we can avoid these unions by declaring padding explicitly:

	struct slpc_platform_info platform_info;
	u8 platform_info_pad[SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO -
	                     sizeof(struct slpc_platform_info)];

> +
> +	union {
> +		struct slpc_task_state_data task_state_data;
> +		unsigned char reserved_task_state[SLPC_SHARE_DATA_SIZE_BYTE_TASK_STATE];
> +	};
> +
> +	union {
> +		struct {
> +		u32 override_params_set_bits[SLPC_OVERRIDE_BITFIELD_SIZE];
> +		u32 override_params_values[SLPC_MAX_OVERRIDE_PARAMETERS];
> +		};
> +		unsigned char reserved_override_parameter[SLPC_SHARE_DATA_SIZE_BYTE_PARAM];
> +	};
> +
> +	unsigned char reserved_other[SLPC_SHARE_DATA_SIZE_BYTE_OTHER];
> +
> +	/* PAGE 2 (4096 bytes), mode based parameter will be removed soon */
> +	unsigned char reserved_mode_definition[4096];
> +} __packed;
> +
> +enum slpc_reset_flags {
> +	SLPC_RESET_FLAG_TDR_OCCURRED = (1 << 0)
> +};
> +
> +#define SLPC_EVENT_MAX_INPUT_ARGS  9
> +#define SLPC_EVENT_MAX_OUTPUT_ARGS 1
> +
> +union slpc_event_input_header {
> +	u32 value;
> +	struct {
> +		u32 num_args:8;
> +		u32 event_id:8;
> +	};
> +};

I guess the earlier #define SLPC_EVENT is related to the above
can we keep related definitions together ?

> +
> +struct slpc_event_input {
> +	u32 h2g_action_id;
> +	union slpc_event_input_header header;
> +	u32 args[SLPC_EVENT_MAX_INPUT_ARGS];
> +} __packed;

this looks like an attempt to define the details of the
INTEL_GUC_ACTION_SLPC_REQUEST HXG request message.

so maybe it can be moved to abi/guc_actions_slpc_abi.h ?
best if you can define it in the same fashion as the CTB registration one

Michal

> +
> +#endif
> 

* Re: [Intel-gfx] [PATCH 06/16] drm/i915/guc/slpc: Allocate, initialize and release slpc
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 06/16] drm/i915/guc/slpc: Allocate, initialize and release slpc Vinay Belgaumkar
@ 2021-07-10 16:05   ` Michal Wajdeczko
  2021-07-14  1:40     ` Belgaumkar, Vinay
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 16:05 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> Allocate data structures for SLPC and functions for
> initializing on host side.
> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/uc/intel_guc.c      | 11 +++++++
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 36 ++++++++++++++++++++-
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h | 20 ++++++++++++
>  3 files changed, 66 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> index 9d61b2d54de4..82863a9bc8e8 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> @@ -336,6 +336,12 @@ int intel_guc_init(struct intel_guc *guc)
>  			goto err_ct;
>  	}
>  
> +	if (intel_guc_slpc_is_used(guc)) {
> +		ret = intel_guc_slpc_init(&guc->slpc);
> +		if (ret)
> +			goto err_submission;
> +	}
> +
>  	/* now that everything is perma-pinned, initialize the parameters */
>  	guc_init_params(guc);
>  
> @@ -346,6 +352,8 @@ int intel_guc_init(struct intel_guc *guc)
>  
>  	return 0;
>  
> +err_submission:
> +	intel_guc_submission_fini(guc);
>  err_ct:
>  	intel_guc_ct_fini(&guc->ct);
>  err_ads:
> @@ -368,6 +376,9 @@ void intel_guc_fini(struct intel_guc *guc)
>  
>  	i915_ggtt_disable_guc(gt->ggtt);
>  
> +	if (intel_guc_slpc_is_used(guc))
> +		intel_guc_slpc_fini(&guc->slpc);
> +
>  	if (intel_guc_submission_is_used(guc))
>  		intel_guc_submission_fini(guc);
>  
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> index c1f569d2300d..94e2f19951aa 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> @@ -4,11 +4,41 @@
>   * Copyright © 2020 Intel Corporation
>   */
>  
> +#include <asm/msr-index.h>

hmm, what exactly is needed from this header ?

> +
> +#include "gt/intel_gt.h"
> +#include "gt/intel_rps.h"
> +
> +#include "i915_drv.h"
>  #include "intel_guc_slpc.h"
> +#include "intel_pm.h"
> +
> +static inline struct intel_guc *slpc_to_guc(struct intel_guc_slpc *slpc)
> +{
> +	return container_of(slpc, struct intel_guc, slpc);
> +}
> +
> +static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
> +{
> +	struct intel_guc *guc = slpc_to_guc(slpc);
> +	int err;
> +	u32 size = PAGE_ALIGN(sizeof(struct slpc_shared_data));

move err decl here

> +
> +	err = intel_guc_allocate_and_map_vma(guc, size, &slpc->vma, &slpc->vaddr);
> +	if (unlikely(err)) {
> +		DRM_ERROR("Failed to allocate slpc struct (err=%d)\n", err);

s/slpc/SLPC

and use drm_err instead
and you may also want to print error as %pe
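
e.g. (just a sketch, assuming i915 is reachable here via guc_to_gt(guc)->i915):

	drm_err(&i915->drm, "Failed to allocate SLPC shared data: %pe\n",
		ERR_PTR(err));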

> +		i915_vma_unpin_and_release(&slpc->vma, I915_VMA_RELEASE_MAP);

do you really need this ?

> +		return err;
> +	}
> +
> +	return err;
> +}
>  
>  int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
>  {
> -	return 0;
> +	GEM_BUG_ON(slpc->vma);
> +
> +	return slpc_shared_data_init(slpc);
>  }
>  
>  /*
> @@ -31,4 +61,8 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>  
>  void intel_guc_slpc_fini(struct intel_guc_slpc *slpc)
>  {
> +	if (!slpc->vma)
> +		return;
> +
> +	i915_vma_unpin_and_release(&slpc->vma, I915_VMA_RELEASE_MAP);
>  }
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> index 98036459a1a3..a2643b904165 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> @@ -3,12 +3,32 @@
>   *
>   * Copyright © 2020 Intel Corporation
>   */
> +

should be fixed in earlier patch

>  #ifndef _INTEL_GUC_SLPC_H_
>  #define _INTEL_GUC_SLPC_H_
>  
> +#include <linux/mutex.h>
>  #include "intel_guc_slpc_fwif.h"
>  
>  struct intel_guc_slpc {
> +	/*Protects access to vma and SLPC actions */

hmm, missing mutex ;)

> +	struct i915_vma *vma;
> +	void *vaddr;

no need for this to be void, define it as a ptr to slpc_shared_data

> +
> +	/* platform frequency limits */
> +	u32 min_freq;
> +	u32 rp0_freq;
> +	u32 rp1_freq;
> +
> +	/* frequency softlimits */
> +	u32 min_freq_softlimit;
> +	u32 max_freq_softlimit;
> +
> +	struct {
> +		u32 param_id;
> +		u32 param_value;
> +		u32 param_override;
> +	} debug;

can you add all these extra fields in the patches that will need them?

Michal

>  };
>  
>  int intel_guc_slpc_init(struct intel_guc_slpc *slpc);
> 


* Re: [Intel-gfx] [PATCH 07/16] drm/i915/guc/slpc: Enable slpc and add related H2G events
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 07/16] drm/i915/guc/slpc: Enable slpc and add related H2G events Vinay Belgaumkar
@ 2021-07-10 17:37   ` Michal Wajdeczko
  2021-07-15  1:58     ` Belgaumkar, Vinay
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 17:37 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> Add methods for interacting with guc for enabling SLPC. Enable
> SLPC after guc submission has been established. GuC load will

s/guc/GuC

> fail if SLPC cannot be successfully initialized. Add various
> helper methods to set/unset the parameters for SLPC. They can
> be set using h2g calls or directly setting bits in the shared

s/h2g/H2G

> data structure.
> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c   | 221 ++++++++++++++++++
>  .../gpu/drm/i915/gt/uc/intel_guc_submission.c |   4 -
>  drivers/gpu/drm/i915/gt/uc/intel_uc.c         |  10 +
>  3 files changed, 231 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> index 94e2f19951aa..e579408d1c19 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> @@ -18,6 +18,61 @@ static inline struct intel_guc *slpc_to_guc(struct intel_guc_slpc *slpc)
>  	return container_of(slpc, struct intel_guc, slpc);
>  }
>  
> +static inline struct intel_gt *slpc_to_gt(struct intel_guc_slpc *slpc)
> +{
> +	return guc_to_gt(slpc_to_guc(slpc));
> +}
> +
> +static inline struct drm_i915_private *slpc_to_i915(struct intel_guc_slpc *slpc)
> +{
> +	return (slpc_to_gt(slpc))->i915;
> +}
> +
> +static void slpc_mem_set_param(struct slpc_shared_data *data,
> +				u32 id, u32 value)
> +{
> +	GEM_BUG_ON(id >= SLPC_MAX_OVERRIDE_PARAMETERS);
> +	/* When the flag bit is set, corresponding value will be read
> +	 * and applied by slpc.

fix format of multi-line comment
s/slpc/SLPC

> +	 */
> +	data->override_params_set_bits[id >> 5] |= (1 << (id % 32));

use __set_bit instead
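
e.g. (rough sketch, assuming the set_bits array can be declared as an
unsigned long bitmap so the generic helpers apply):

	__set_bit(id, data->override_params_set_bits);

and __clear_bit() likewise in the unset path below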

> +	data->override_params_values[id] = value;
> +}
> +
> +static void slpc_mem_unset_param(struct slpc_shared_data *data,
> +				 u32 id)
> +{
> +	GEM_BUG_ON(id >= SLPC_MAX_OVERRIDE_PARAMETERS);
> +	/* When the flag bit is unset, corresponding value will not be
> +	 * read by slpc.
> +	 */
> +	data->override_params_set_bits[id >> 5] &= (~(1 << (id % 32)));

same here

> +	data->override_params_values[id] = 0;
> +}
> +
> +static void slpc_mem_task_control(struct slpc_shared_data *data,
> +				 u64 val, u32 enable_id, u32 disable_id)

hmm, u64 to pass a simple tri-state flag ?

> +{
> +	/* Enabling a param involves setting the enable_id
> +	 * to 1 and disable_id to 0. Setting it to default
> +	 * will unset both enable and disable ids and let
> +	 * slpc choose it's default values.

fix format + s/slpc/SLPC

> +	 */
> +	if (val == SLPC_PARAM_TASK_DEFAULT) {
> +		/* set default */
> +		slpc_mem_unset_param(data, enable_id);
> +		slpc_mem_unset_param(data, disable_id);
> +	} else if (val == SLPC_PARAM_TASK_ENABLED) {
> +		/* set enable */
> +		slpc_mem_set_param(data, enable_id, 1);
> +		slpc_mem_set_param(data, disable_id, 0);
> +	} else if (val == SLPC_PARAM_TASK_DISABLED) {
> +		/* set disable */
> +		slpc_mem_set_param(data, disable_id, 1);
> +		slpc_mem_set_param(data, enable_id, 0);
> +	}

maybe instead of the SLPC_PARAM_TASK_* flags (which btw were confusing me
earlier) you can define 3x small helpers:

static void slpc_mem_set_default(data, enable_id, disable_id);
static void slpc_mem_set_enabled(data, enable_id, disable_id);
static void slpc_mem_set_disabled(data, enable_id, disable_id);
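
e.g. (sketch only, built on top of your slpc_mem_set_param):

static void slpc_mem_set_enabled(struct slpc_shared_data *data,
				 u32 enable_id, u32 disable_id)
{
	slpc_mem_set_param(data, enable_id, 1);
	slpc_mem_set_param(data, disable_id, 0);
}

with the disabled/default variants following the same pattern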


> +}
> +
>  static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
>  {
>  	struct intel_guc *guc = slpc_to_guc(slpc);
> @@ -34,6 +89,128 @@ static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
>  	return err;
>  }
>  
> +/*
> + * Send SLPC event to guc
> + *
> + */
> +static int slpc_send(struct intel_guc_slpc *slpc,
> +			struct slpc_event_input *input,
> +			u32 in_len)
> +{
> +	struct intel_guc *guc = slpc_to_guc(slpc);
> +	u32 *action;
> +
> +	action = (u32 *)input;
> +	action[0] = INTEL_GUC_ACTION_SLPC_REQUEST;

why not just update input->h2g_action_id ?

> +
> +	return intel_guc_send(guc, action, in_len);
> +}
> +
> +static bool slpc_running(struct intel_guc_slpc *slpc)
> +{
> +	struct slpc_shared_data *data;
> +	u32 slpc_global_state;
> +
> +	GEM_BUG_ON(!slpc->vma);
> +
> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));

do you really need to flush all 8K of shared data?
it looks like you only need a single u32
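
e.g. (sketch, assuming data = slpc->vaddr has been assigned first):

	drm_clflush_virt_range(&data->global_state,
			       sizeof(data->global_state));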

> +	data = slpc->vaddr;
> +
> +	slpc_global_state = data->global_state;
> +
> +	return (data->global_state == SLPC_GLOBAL_STATE_RUNNING);
> +}
> +
> +static int host2guc_slpc_query_task_state(struct intel_guc_slpc *slpc)
> +{
> +	struct intel_guc *guc = slpc_to_guc(slpc);
> +	u32 shared_data_gtt_offset = intel_guc_ggtt_offset(guc, slpc->vma);
> +	struct slpc_event_input data = {0};
> +
> +	data.header.value = SLPC_EVENT(SLPC_EVENT_QUERY_TASK_STATE, 2);

you defined header.num_args and header.event_id, don't want to use them?

> +	data.args[0] = shared_data_gtt_offset;
> +	data.args[1] = 0;
> +
> +	return slpc_send(slpc, &data, 4);

magic 4

> +}
> +
> +static int slpc_read_task_state(struct intel_guc_slpc *slpc)
> +{
> +	return host2guc_slpc_query_task_state(slpc);
> +}

hmm, all this looks more complicated than needed, why not just:

static int guc_action_slpc_query(struct intel_guc *guc, u32 offset)
{
	u32 request[] = {
		INTEL_GUC_ACTION_SLPC_REQUEST,
		SLPC_EVENT(SLPC_EVENT_QUERY_TASK_STATE, 2),
		offset,
		0,
	};

	return intel_guc_send(guc, request, ARRAY_SIZE(request));
}

static int slpc_query_task_state(struct intel_guc_slpc *slpc)
{
	struct intel_guc *guc = slpc_to_guc(slpc);
	u32 offset = intel_guc_ggtt_offset(guc, slpc->vma);

	return guc_action_slpc_query(guc, offset);
}

btw, there is a little magic in the H2G data, as only the event enums were
defined in slpc_fwif.h (or slpc_abi.h) but it looks like the len and format
of the args depend on the actual event used

> +
> +static const char *slpc_state_stringify(enum slpc_global_state state)
> +{
> +	const char *str = NULL;
> +
> +	switch (state) {
> +	case SLPC_GLOBAL_STATE_NOT_RUNNING:
> +		str = "not running";
> +		break;
> +	case SLPC_GLOBAL_STATE_INITIALIZING:
> +		str = "initializing";
> +		break;
> +	case SLPC_GLOBAL_STATE_RESETTING:
> +		str = "resetting";
> +		break;
> +	case SLPC_GLOBAL_STATE_RUNNING:
> +		str = "running";
> +		break;
> +	case SLPC_GLOBAL_STATE_SHUTTING_DOWN:
> +		str = "shutting down";
> +		break;
> +	case SLPC_GLOBAL_STATE_ERROR:
> +		str = "error";
> +		break;
> +	default:
> +		str = "unknown";
> +		break;
> +	}
> +
> +	return str;
> +}
> +
> +static const char *get_slpc_state(struct intel_guc_slpc *slpc)

lots of duplicated code with slpc_running()

maybe there should be:
	u32 slpc_get_state(slpc);
	bool slpc_is_running(slpc);
	const char *slpc_state_string(slpc);
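
e.g. (rough sketch):

static u32 slpc_get_state(struct intel_guc_slpc *slpc)
{
	struct slpc_shared_data *data = slpc->vaddr;

	GEM_BUG_ON(!slpc->vma);

	drm_clflush_virt_range(&data->global_state,
			       sizeof(data->global_state));
	return data->global_state;
}

static bool slpc_is_running(struct intel_guc_slpc *slpc)
{
	return slpc_get_state(slpc) == SLPC_GLOBAL_STATE_RUNNING;
}

and slpc_state_string() can just wrap slpc_state_stringify(slpc_get_state(slpc))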


> +{
> +	struct slpc_shared_data *data;
> +	u32 slpc_global_state;
> +
> +	GEM_BUG_ON(!slpc->vma);
> +
> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
> +	data = slpc->vaddr;
> +
> +	slpc_global_state = data->global_state;
> +
> +	return slpc_state_stringify(slpc_global_state);
> +}
> +
> +static int host2guc_slpc_reset(struct intel_guc_slpc *slpc)
> +{
> +	struct intel_guc *guc = slpc_to_guc(slpc);
> +	u32 shared_data_gtt_offset = intel_guc_ggtt_offset(guc, slpc->vma);
> +	struct slpc_event_input data = {0};
> +	int ret;
> +
> +	data.header.value = SLPC_EVENT(SLPC_EVENT_RESET, 2);
> +	data.args[0] = shared_data_gtt_offset;
> +	data.args[1] = 0;
> +
> +	/* TODO: Hardcoded 4 needs define */
> +	ret = slpc_send(slpc, &data, 4);
> +
> +	if (!ret) {
> +		/* TODO: How long to Wait until SLPC is running */

do we know the state transitions ?
maybe there is no point in waiting for RUNNING if it is in ERROR or
SHUTTING_DOWN ?

> +		if (wait_for(slpc_running(slpc), 5)) {

magic 5

> +			DRM_ERROR("SLPC not enabled! State = %s\n",

use drm_err

> +				  get_slpc_state(slpc));
> +			return -EIO;
> +		}
> +	}
> +
> +	return ret;
> +}
> +
>  int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
>  {
>  	GEM_BUG_ON(slpc->vma);
> @@ -56,6 +233,50 @@ int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
>   */
>  int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>  {
> +	struct drm_i915_private *i915 = slpc_to_i915(slpc);
> +	struct slpc_shared_data *data;
> +	int ret;
> +
> +	GEM_BUG_ON(!slpc->vma);
> +
> +	memset(slpc->vaddr, 0, sizeof(struct slpc_shared_data));
> +
> +	data = slpc->vaddr;
> +	data->shared_data_size = sizeof(struct slpc_shared_data);
> +
> +	/* Enable only GTPERF task, Disable others */
> +	slpc_mem_task_control(data, SLPC_PARAM_TASK_ENABLED,
> +				SLPC_PARAM_TASK_ENABLE_GTPERF,
> +				SLPC_PARAM_TASK_DISABLE_GTPERF);
> +
> +	slpc_mem_task_control(data, SLPC_PARAM_TASK_DISABLED,
> +				SLPC_PARAM_TASK_ENABLE_BALANCER,
> +				SLPC_PARAM_TASK_DISABLE_BALANCER);
> +
> +	slpc_mem_task_control(data, SLPC_PARAM_TASK_DISABLED,
> +				SLPC_PARAM_TASK_ENABLE_DCC,
> +				SLPC_PARAM_TASK_DISABLE_DCC);
> +
> +	ret = host2guc_slpc_reset(slpc);
> +	if (ret) {
> +		drm_err(&i915->drm, "SLPC Reset event returned %d", ret);

you may want to print error as %pe
missing \n

> +		return -EIO;
> +	}
> +
> +	DRM_INFO("SLPC state: %s\n", get_slpc_state(slpc));

use drm_info

> +
> +	if (slpc_read_task_state(slpc))
> +		drm_err(&i915->drm, "Unable to read task state data");

missing \n

> +
> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
> +
> +	/* min and max frequency limits being used by SLPC */
> +	drm_info(&i915->drm, "SLPC min freq: %u Mhz, max is %u Mhz",

missing \n

> +			DIV_ROUND_CLOSEST(data->task_state_data.min_unslice_freq *
> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER),
> +			DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));

this info/code seems to be duplicated in patch 10/16
maybe just call intel_guc_slpc_info() here once available ?

> +
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index e2644a05f298..3e76d4d5f7bb 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -2321,10 +2321,6 @@ void intel_guc_submission_enable(struct intel_guc *guc)
>  
>  void intel_guc_submission_disable(struct intel_guc *guc)
>  {
> -	struct intel_gt *gt = guc_to_gt(guc);
> -
> -	GEM_BUG_ON(gt->awake); /* GT should be parked first */

if this is not a mistake, can you explain why it was removed ?

> -
>  	/* Note: By the time we're here, GuC may have already been reset */
>  }
>  
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> index dca5f6d0641b..7b6c767d3eb0 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> @@ -501,6 +501,14 @@ static int __uc_init_hw(struct intel_uc *uc)
>  	if (intel_uc_uses_guc_submission(uc))
>  		intel_guc_submission_enable(guc);
>  
> +	if (intel_uc_uses_guc_slpc(uc)) {
> +		ret = intel_guc_slpc_enable(&guc->slpc);
> +		if (ret)
> +			goto err_submission;
> +		drm_info(&i915->drm, "GuC SLPC %s\n",
> +			 enableddisabled(intel_uc_uses_guc_slpc(uc)));

move this drm_info after the GuC report below and/or modify it to have:

"GuC firmware path.bin version 1.0 loaded:yes"
"GuC submission:enabled"
"GuC SLPC:enabled"
"HuC firmware path.bin version 1.0 authenticated:yes"

Michal

> +	}
> +
>  	drm_info(&i915->drm, "%s firmware %s version %u.%u %s:%s\n",
>  		 intel_uc_fw_type_repr(INTEL_UC_FW_TYPE_GUC), guc->fw.path,
>  		 guc->fw.major_ver_found, guc->fw.minor_ver_found,
> @@ -521,6 +529,8 @@ static int __uc_init_hw(struct intel_uc *uc)
>  	/*
>  	 * We've failed to load the firmware :(
>  	 */
> +err_submission:
> +	intel_guc_submission_disable(guc);
>  err_log_capture:
>  	__uc_capture_load_err_log(uc);
>  err_out:
> 

* Re: [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency Vinay Belgaumkar
  2021-07-10  3:07   ` kernel test robot
  2021-07-10  5:17   ` kernel test robot
@ 2021-07-10 17:47   ` Michal Wajdeczko
  2021-07-16 18:00     ` Belgaumkar, Vinay
  2 siblings, 1 reply; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 17:47 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> Add param set h2g helpers to set the min and max frequencies
> for use by SLPC.
> 
> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 94 +++++++++++++++++++++
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h |  2 +
>  2 files changed, 96 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> index e579408d1c19..19cb26479942 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> @@ -106,6 +106,19 @@ static int slpc_send(struct intel_guc_slpc *slpc,
>  	return intel_guc_send(guc, action, in_len);
>  }
>  
> +static int host2guc_slpc_set_param(struct intel_guc_slpc *slpc,
> +				   u32 id, u32 value)
> +{
> +	struct slpc_event_input data = {0};
> +
> +	data.header.value = SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2);
> +	data.args[0] = id;
> +	data.args[1] = value;
> +
> +	return slpc_send(slpc, &data, 4);

as suggested before, using an explicit function like:

static int guc_action_slpc_param(struct intel_guc *guc, u32 id, u32 value)
{
	u32 request[] = {
		INTEL_GUC_ACTION_SLPC_REQUEST,
		SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2),
		id,
		value,
	};

	return intel_guc_send(guc, request, ARRAY_SIZE(request));
}

will be simpler/cleaner

> +}
> +
> +
>  static bool slpc_running(struct intel_guc_slpc *slpc)
>  {
>  	struct slpc_shared_data *data;
> @@ -134,6 +147,19 @@ static int host2guc_slpc_query_task_state(struct intel_guc_slpc *slpc)
>  	return slpc_send(slpc, &data, 4);
>  }
>  
> +static int slpc_set_param(struct intel_guc_slpc *slpc, u32 id, u32 value)
> +{
> +	struct drm_i915_private *i915 = slpc_to_i915(slpc);
> +	GEM_BUG_ON(id >= SLPC_MAX_PARAM);
> +
> +	if (host2guc_slpc_set_param(slpc, id, value)) {
> +		drm_err(&i915->drm, "Unable to set param %x", id);

missing \n
what about printing the value to be set ?
what about printing the send error as %pe ?
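
e.g. (sketch, assuming the send result is captured in ret):

	drm_err(&i915->drm, "Unable to set param %x to %u: %pe\n",
		id, value, ERR_PTR(ret));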

> +		return -EIO;
> +	}
> +
> +	return 0;
> +}
> +
>  static int slpc_read_task_state(struct intel_guc_slpc *slpc)
>  {
>  	return host2guc_slpc_query_task_state(slpc);
> @@ -218,6 +244,74 @@ int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
>  	return slpc_shared_data_init(slpc);
>  }
>  
> +/**
> + * intel_guc_slpc_max_freq_set() - Set max frequency limit for SLPC.
> + * @slpc: pointer to intel_guc_slpc.
> + * @val: encoded frequency

what's the encoding ?

> + *
> + * This function will invoke GuC SLPC action to update the max frequency
> + * limit for slice and unslice.
> + *
> + * Return: 0 on success, non-zero error code on failure.
> + */
> +int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val)
> +{
> +	int ret;
> +	struct drm_i915_private *i915 = slpc_to_i915(slpc);
> +	intel_wakeref_t wakeref;
> +
> +	wakeref = intel_runtime_pm_get(&i915->runtime_pm);

you can use with_intel_runtime_pm(rpm, wakeref)
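
e.g. (sketch):

	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
		ret = slpc_set_param(slpc,
				     SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ,
				     val);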

> +
> +	ret = slpc_set_param(slpc,
> +		       SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ,
> +		       val);
> +
> +	if (ret) {
> +		drm_err(&i915->drm,
> +			"Set max frequency unslice returned %d", ret);

missing \n
print error with %pe
but slpc_set_param returns only -EIO ;(

> +		ret = -EIO;
> +		goto done;
> +	}
> +
> +done:
> +	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
> +	return ret;
> +}
> +
> +/**
> + * intel_guc_slpc_min_freq_set() - Set min frequency limit for SLPC.
> + * @slpc: pointer to intel_guc_slpc.
> + * @val: encoded frequency
> + *
> + * This function will invoke GuC SLPC action to update the min frequency
> + * limit.
> + *
> + * Return: 0 on success, non-zero error code on failure.
> + */
> +int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
> +{
> +	int ret;
> +	struct intel_guc *guc = slpc_to_guc(slpc);
> +	struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
> +	intel_wakeref_t wakeref;
> +
> +	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
> +
> +	ret = slpc_set_param(slpc,
> +		       SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
> +		       val);
> +	if (ret) {
> +		drm_err(&i915->drm,
> +			"Set min frequency for unslice returned %d", ret);

as above

Michal

> +		ret = -EIO;
> +		goto done;
> +	}
> +
> +done:
> +	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
> +	return ret;
> +}
> +
>  /*
>   * intel_guc_slpc_enable() - Start SLPC
>   * @slpc: pointer to intel_guc_slpc.
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> index a2643b904165..a473e1ea7c10 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> @@ -34,5 +34,7 @@ struct intel_guc_slpc {
>  int intel_guc_slpc_init(struct intel_guc_slpc *slpc);
>  int intel_guc_slpc_enable(struct intel_guc_slpc *slpc);
>  void intel_guc_slpc_fini(struct intel_guc_slpc *slpc);
> +int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val);
> +int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val);
>  
>  #endif
> 

* Re: [Intel-gfx] [PATCH 09/16] drm/i915/guc/slpc: Add get max/min freq hooks
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 09/16] drm/i915/guc/slpc: Add get max/min freq hooks Vinay Belgaumkar
@ 2021-07-10 17:52   ` Michal Wajdeczko
  2021-07-20 22:08     ` Belgaumkar, Vinay
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 17:52 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> Add helpers to read the min/max frequency being used
> by SLPC. This is done by send a h2g command which forces

s/h2g/H2G

> SLPC to update the shared data struct which can then be
> read.
> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 58 +++++++++++++++++++++
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h |  2 +
>  2 files changed, 60 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> index 19cb26479942..98a283d31734 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> @@ -278,6 +278,35 @@ int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val)
>  	return ret;
>  }
>  
> +int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val)
> +{
> +	struct slpc_shared_data *data;
> +	intel_wakeref_t wakeref;
> +	struct drm_i915_private *i915 = guc_to_gt(slpc_to_guc(slpc))->i915;
> +	int ret = 0;
> +
> +	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
> +
> +	/* Force GuC to update task data */
> +	if (slpc_read_task_state(slpc)) {
> +		DRM_ERROR("Unable to update task data");

use drm_err
missing \n
maybe this message could be moved to slpc_read_task_state ?

> +		ret = -EIO;
> +		goto done;
> +	}
> +
> +	GEM_BUG_ON(!slpc->vma);
> +
> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));

maybe this can also be part of slpc_read_task_state ?
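
e.g. (sketch, reusing the guc_action_slpc_query helper suggested earlier):

static int slpc_read_task_state(struct intel_guc_slpc *slpc)
{
	struct intel_guc *guc = slpc_to_guc(slpc);
	u32 offset = intel_guc_ggtt_offset(guc, slpc->vma);
	int ret;

	ret = guc_action_slpc_query(guc, offset);
	if (!ret)
		drm_clflush_virt_range(slpc->vaddr,
				       sizeof(struct slpc_shared_data));

	return ret;
}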

> +	data = slpc->vaddr;
> +
> +	*val = DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER);
> +
> +done:
> +	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
> +	return ret;
> +}
> +
>  /**
>   * intel_guc_slpc_min_freq_set() - Set min frequency limit for SLPC.
>   * @slpc: pointer to intel_guc_slpc.
> @@ -312,6 +341,35 @@ int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
>  	return ret;
>  }
>  
> +int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val)

missing kernel-doc (above intel_guc_slpc_min_freq_set has one)
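
e.g. something like (just a sketch):

/**
 * intel_guc_slpc_get_min_freq() - Get min frequency limit for SLPC.
 * @slpc: pointer to intel_guc_slpc.
 * @val: pointer to val which will hold the min frequency (MHz)
 *
 * Return: 0 on success, non-zero error code on failure.
 */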

> +{
> +	struct slpc_shared_data *data;
> +	intel_wakeref_t wakeref;
> +	struct drm_i915_private *i915 = guc_to_gt(slpc_to_guc(slpc))->i915;
> +	int ret = 0;
> +
> +	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
> +
> +	/* Force GuC to update task data */
> +	if (slpc_read_task_state(slpc)) {
> +		DRM_ERROR("Unable to update task data");

see above

> +		ret = -EIO;
> +		goto done;
> +	}
> +
> +	GEM_BUG_ON(!slpc->vma);
> +
> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));

see above

Michal

> +	data = slpc->vaddr;
> +
> +	*val = DIV_ROUND_CLOSEST(data->task_state_data.min_unslice_freq *
> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER);
> +
> +done:
> +	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
> +	return ret;
> +}
> +
>  /*
>   * intel_guc_slpc_enable() - Start SLPC
>   * @slpc: pointer to intel_guc_slpc.
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> index a473e1ea7c10..2cb830cdacb5 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> @@ -36,5 +36,7 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc);
>  void intel_guc_slpc_fini(struct intel_guc_slpc *slpc);
>  int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val);
>  int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val);
> +int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val);
> +int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val);
>  
>  #endif
> 

* Re: [Intel-gfx] [PATCH 10/16] drm/i915/guc/slpc: Add debugfs for slpc info
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 10/16] drm/i915/guc/slpc: Add debugfs for slpc info Vinay Belgaumkar
@ 2021-07-10 18:08   ` Michal Wajdeczko
  2021-07-20 23:00     ` Belgaumkar, Vinay
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 18:08 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> This prints out relevant SLPC info from the SLPC shared structure.
> 
> We will send a h2g message which forces SLPC to update the
> shared data structure with latest information before reading it.
> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
> ---
>  .../gpu/drm/i915/gt/uc/intel_guc_debugfs.c    | 16 ++++++
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c   | 53 +++++++++++++++++++
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h   |  3 ++
>  3 files changed, 72 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
> index 9a03ff56e654..bef749e54601 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
> @@ -12,6 +12,7 @@
>  #include "gt/uc/intel_guc_ct.h"
>  #include "gt/uc/intel_guc_ads.h"
>  #include "gt/uc/intel_guc_submission.h"
> +#include "gt/uc/intel_guc_slpc.h"
>  
>  static int guc_info_show(struct seq_file *m, void *data)
>  {
> @@ -50,11 +51,26 @@ static int guc_registered_contexts_show(struct seq_file *m, void *data)
>  }
>  DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_registered_contexts);
>  
> +static int guc_slpc_info_show(struct seq_file *m, void *unused)
> +{
> +	struct intel_guc *guc = m->private;
> +	struct intel_guc_slpc *slpc = &guc->slpc;
> +	struct drm_printer p = drm_seq_file_printer(m);
> +
> +	if (!intel_guc_slpc_is_used(guc))
> +		return -ENODEV;
> +
> +	return intel_guc_slpc_info(slpc, &p);
> +}
> +

other entries don't have an empty line here

> +DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_slpc_info);
> +
>  void intel_guc_debugfs_register(struct intel_guc *guc, struct dentry *root)
>  {
>  	static const struct debugfs_gt_file files[] = {
>  		{ "guc_info", &guc_info_fops, NULL },
>  		{ "guc_registered_contexts", &guc_registered_contexts_fops, NULL },
> +		{ "guc_slpc_info", &guc_slpc_info_fops, NULL},

IIRC the last field is "eval", so maybe you could add your own check for
intel_guc_slpc_is_used() to avoid exposing this info if N/A
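
e.g. (just a sketch, assuming the eval callback gets the guc pointer as void *):

static bool eval_is_slpc_used(void *data)
{
	struct intel_guc *guc = data;

	return intel_guc_slpc_is_used(guc);
}

and then:

	{ "guc_slpc_info", &guc_slpc_info_fops, eval_is_slpc_used },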

>  	};
>  
>  	if (!intel_guc_is_supported(guc))
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> index 98a283d31734..d179ba14ece6 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> @@ -432,6 +432,59 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>  	return 0;
>  }
>  
> +int intel_guc_slpc_info(struct intel_guc_slpc *slpc, struct drm_printer *p)
> +{
> +	struct drm_i915_private *i915 = guc_to_gt(slpc_to_guc(slpc))->i915;
> +	struct slpc_shared_data *data;
> +	struct slpc_platform_info *platform_info;
> +	struct slpc_task_state_data *task_state_data;
> +	intel_wakeref_t wakeref;
> +	int ret = 0;
> +
> +	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
> +
> +	if (slpc_read_task_state(slpc)) {
> +		ret = -EIO;
> +		goto done;
> +	}
> +
> +	GEM_BUG_ON(!slpc->vma);
> +
> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));

likely will go away if integrated into slpc_read_task_state

> +	data = slpc->vaddr;
> +
> +	platform_info = &data->platform_info;

is this used ?

> +	task_state_data = &data->task_state_data;

as it looks like you treat these sections separately, maybe it
would be cleaner to have:

static void print_global_data(*global_data, *p) {}
static void print_platform_info(*platform_info, *p) {}
static void print_task_state_data(*task_state_data, *p) {}

> +
> +	drm_printf(p, "SLPC state: %s\n", slpc_state_stringify(data->global_state));
> +	drm_printf(p, "\tgtperf task active: %d\n",
> +			task_state_data->gtperf_task_active);
> +	drm_printf(p, "\tdcc task active: %d\n",
> +				task_state_data->dcc_task_active);
> +	drm_printf(p, "\tin dcc: %d\n",
> +				task_state_data->in_dcc);
> +	drm_printf(p, "\tfreq switch active: %d\n",
> +				task_state_data->freq_switch_active);
> +	drm_printf(p, "\tibc enabled: %d\n",
> +				task_state_data->ibc_enabled);
> +	drm_printf(p, "\tibc active: %d\n",
> +				task_state_data->ibc_active);
> +	drm_printf(p, "\tpg1 enabled: %s\n",
> +				yesno(task_state_data->pg1_enabled));
> +	drm_printf(p, "\tpg1 active: %s\n",
> +				yesno(task_state_data->pg1_active));
> +	drm_printf(p, "\tmax freq: %dMHz\n",
> +				DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
> +	drm_printf(p, "\tmin freq: %dMHz\n",
> +				DIV_ROUND_CLOSEST(data->task_state_data.min_unslice_freq *
> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));

you defined task_state_data but in the 2 prints above you're accessing it from data

Michal

> +
> +done:
> +	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
> +	return ret;
> +}
> +
>  void intel_guc_slpc_fini(struct intel_guc_slpc *slpc)
>  {
>  	if (!slpc->vma)
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> index 2cb830cdacb5..cd12c5f19f4b 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
> @@ -10,6 +10,8 @@
>  #include <linux/mutex.h>
>  #include "intel_guc_slpc_fwif.h"
>  
> +struct drm_printer;
> +
>  struct intel_guc_slpc {
>  	/*Protects access to vma and SLPC actions */
>  	struct i915_vma *vma;
> @@ -38,5 +40,6 @@ int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val);
>  int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val);
>  int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val);
>  int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val);
> +int intel_guc_slpc_info(struct intel_guc_slpc *slpc, struct drm_printer *p);
>  
>  #endif
> 

* Re: [Intel-gfx] [PATCH 12/16] drm/i915/guc/slpc: Cache platform frequency limits for slpc
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 12/16] drm/i915/guc/slpc: Cache platform frequency limits for slpc Vinay Belgaumkar
@ 2021-07-10 18:15   ` Michal Wajdeczko
  2021-07-17 19:30     ` Belgaumkar, Vinay
  2021-07-20 23:05     ` Belgaumkar, Vinay
  0 siblings, 2 replies; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 18:15 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> Cache rp0, rp1 and rpn platform limits into slpc structure
> for range checking while setting min/max frequencies.
> 
> Also add "soft" limits which keep track of frequency changes
> made from userland. These are initially set to platform min
> and max.
> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 41 +++++++++++++++++++++
>  1 file changed, 41 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> index d32274cd1db7..6e978f27b7a6 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> @@ -86,6 +86,9 @@ static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
>  		return err;
>  	}
>  
> +	slpc->max_freq_softlimit = 0;
> +	slpc->min_freq_softlimit = 0;

as mentioned earlier, now it is time to introduce these fields in .h

> +
>  	return err;
>  }
>  
> @@ -384,6 +387,29 @@ void intel_guc_pm_intrmsk_enable(struct intel_gt *gt)
>  			   GEN6_PMINTRMSK, pm_intrmsk_mbz, 0);
>  }
>  
> +static int intel_guc_slpc_set_softlimits(struct intel_guc_slpc *slpc)
> +{
> +	int ret = 0;
> +
> +	/* Softlimits are initially equivalent to platform limits
> +	 * unless they have deviated from defaults, in which case,
> +	 * we retain the values and set min/max accordingly.
> +	 */
> +	if (!slpc->max_freq_softlimit)
> +		slpc->max_freq_softlimit = slpc->rp0_freq;
> +	else if (slpc->max_freq_softlimit != slpc->rp0_freq)
> +		ret = intel_guc_slpc_set_max_freq(slpc,
> +					slpc->max_freq_softlimit);
> +
> +	if (!slpc->min_freq_softlimit)
> +		slpc->min_freq_softlimit = slpc->min_freq;
> +	else if (slpc->min_freq_softlimit != slpc->min_freq)
> +		ret = intel_guc_slpc_set_min_freq(slpc,
> +					slpc->min_freq_softlimit);
> +
> +	return ret;
> +}
> +
>  /*
>   * intel_guc_slpc_enable() - Start SLPC
>   * @slpc: pointer to intel_guc_slpc.
> @@ -402,6 +428,7 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>  	struct drm_i915_private *i915 = slpc_to_i915(slpc);
>  	struct slpc_shared_data *data;
>  	int ret;
> +	u32 rp_state_cap;

move up to keep "ret" last

>  
>  	GEM_BUG_ON(!slpc->vma);
>  
> @@ -445,6 +472,20 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>  			DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
>  				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
>  
> +	rp_state_cap = intel_uncore_read(i915->gt.uncore, GEN6_RP_STATE_CAP);
> +
> +	slpc->rp0_freq = ((rp_state_cap >> 0) & 0xff) * GT_FREQUENCY_MULTIPLIER;
> +	slpc->min_freq = ((rp_state_cap >> 16) & 0xff) * GT_FREQUENCY_MULTIPLIER;
> +	slpc->rp1_freq = ((rp_state_cap >> 8) & 0xff) * GT_FREQUENCY_MULTIPLIER;

we should have definitions for these bits and then we should be able to
use REG_FIELD_GET
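
e.g. (sketch, mask names made up here):

#define   RP0_CAP_MASK		REG_GENMASK(7, 0)
#define   RP1_CAP_MASK		REG_GENMASK(15, 8)
#define   RPN_CAP_MASK		REG_GENMASK(23, 16)

	slpc->rp0_freq = REG_FIELD_GET(RP0_CAP_MASK, rp_state_cap) *
			 GT_FREQUENCY_MULTIPLIER;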

> +
> +	if (intel_guc_slpc_set_softlimits(slpc))
> +		drm_err(&i915->drm, "Unable to set softlimits");

missing \n
maybe we can also print error ?

> +
> +	drm_info(&i915->drm,
> +		 "Platform fused frequency values -  min: %u Mhz, max: %u Mhz",

missing \n
double space before 'min'

Michal

> +		 slpc->min_freq,
> +		 slpc->rp0_freq);
> +
>  	return 0;
>  }
>  
> 

* Re: [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc Vinay Belgaumkar
                     ` (3 preceding siblings ...)
  2021-07-10 13:54   ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc kernel test robot
@ 2021-07-10 18:20   ` Michal Wajdeczko
  2021-07-20 23:38     ` Belgaumkar, Vinay
  4 siblings, 1 reply; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 18:20 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> Update the get/set min/max freq hooks to work for
> slpc case as well. Consolidate helpers for requested/min/max
> frequency get/set to intel_rps where the proper action can
> be taken depending on whether slpc is enabled.

2x s/slpc/SLPC

> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> Signed-off-by: Sujaritha Sundaresan <sujaritha.sundaresan@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_rps.c | 135 ++++++++++++++++++++++++++++
>  drivers/gpu/drm/i915/gt/intel_rps.h |   5 ++
>  drivers/gpu/drm/i915/i915_pmu.c     |   2 +-
>  drivers/gpu/drm/i915/i915_reg.h     |   2 +
>  drivers/gpu/drm/i915/i915_sysfs.c   |  71 +++------------
>  5 files changed, 154 insertions(+), 61 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
> index e858eeb2c59d..88ffc5d90730 100644
> --- a/drivers/gpu/drm/i915/gt/intel_rps.c
> +++ b/drivers/gpu/drm/i915/gt/intel_rps.c
> @@ -37,6 +37,12 @@ static struct intel_uncore *rps_to_uncore(struct intel_rps *rps)
>  	return rps_to_gt(rps)->uncore;
>  }
>  
> +static struct intel_guc_slpc *rps_to_slpc(struct intel_rps *rps)
> +{
> +	struct intel_gt *gt = rps_to_gt(rps);
> +	return &gt->uc.guc.slpc;

either add an empty line between the decl and the code, or make it a one-liner
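
e.g. (sketch):

static struct intel_guc_slpc *rps_to_slpc(struct intel_rps *rps)
{
	return &rps_to_gt(rps)->uc.guc.slpc;
}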

> +}
> +
>  static bool rps_uses_slpc(struct intel_rps *rps)
>  {
>  	struct intel_gt *gt = rps_to_gt(rps);
> @@ -1960,6 +1966,135 @@ u32 intel_rps_read_actual_frequency(struct intel_rps *rps)
>  	return freq;
>  }
>  
> +u32 intel_rps_read_punit_req(struct intel_rps *rps)
> +{
> +	struct intel_uncore *uncore = rps_to_uncore(rps);
> +

drop empty line

> +	u32 pureq = intel_uncore_read(uncore, GEN6_RPNSWREQ);
> +
> +	return pureq;
> +}
> +
> +u32 intel_rps_get_req(struct intel_rps *rps, u32 pureq)
> +{
> +	u32 req = pureq >> GEN9_SW_REQ_UNSLICE_RATIO_SHIFT;
> +
> +	return req;
> +}
> +
> +u32 intel_rps_read_punit_req_frequency(struct intel_rps *rps)
> +{
> +	u32 freq = intel_rps_get_req(rps, intel_rps_read_punit_req(rps));
> +
> +	return intel_gpu_freq(rps, freq);
> +}
> +
> +u32 intel_rps_get_requested_frequency(struct intel_rps *rps)
> +{
> +	if (rps_uses_slpc(rps))
> +		return intel_rps_read_punit_req_frequency(rps);
> +	else
> +		return intel_gpu_freq(rps, rps->cur_freq);
> +}
> +
> +u32 intel_rps_get_max_frequency(struct intel_rps *rps)
> +{
> +	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
> +
> +	if (rps_uses_slpc(rps))
> +		return slpc->max_freq_softlimit;
> +	else
> +		return intel_gpu_freq(rps, rps->max_freq_softlimit);
> +}
> +
> +int intel_rps_set_max_frequency(struct intel_rps *rps, u32 val)
> +{
> +	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
> +	int ret;
> +
> +	if (rps_uses_slpc(rps))
> +		return intel_guc_slpc_set_max_freq(slpc, val);
> +
> +	mutex_lock(&rps->lock);
> +
> +	val = intel_freq_opcode(rps, val);
> +	if (val < rps->min_freq ||
> +	    val > rps->max_freq ||
> +	    val < rps->min_freq_softlimit) {
> +		ret = -EINVAL;
> +		goto unlock;
> +	}
> +
> +	if (val > rps->rp0_freq)
> +		DRM_DEBUG("User requested overclocking to %d\n",

use drm_dbg

Michal

> +			  intel_gpu_freq(rps, val));
> +
> +	rps->max_freq_softlimit = val;
> +
> +	val = clamp_t(int, rps->cur_freq,
> +		      rps->min_freq_softlimit,
> +		      rps->max_freq_softlimit);
> +
> +	/*
> +	 * We still need *_set_rps to process the new max_delay and
> +	 * update the interrupt limits and PMINTRMSK even though
> +	 * frequency request may be unchanged.
> +	 */
> +	intel_rps_set(rps, val);
> +
> +unlock:
> +	mutex_unlock(&rps->lock);
> +
> +	return ret;
> +}
> +
> +u32 intel_rps_get_min_frequency(struct intel_rps *rps)
> +{
> +	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
> +
> +	if (rps_uses_slpc(rps))
> +		return slpc->min_freq_softlimit;
> +	else
> +		return intel_gpu_freq(rps, rps->min_freq_softlimit);
> +}
> +
> +int intel_rps_set_min_frequency(struct intel_rps *rps, u32 val)
> +{
> +	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
> +	int ret;
> +
> +	if (rps_uses_slpc(rps))
> +		return intel_guc_slpc_set_min_freq(slpc, val);
> +
> +	mutex_lock(&rps->lock);
> +
> +	val = intel_freq_opcode(rps, val);
> +	if (val < rps->min_freq ||
> +	    val > rps->max_freq ||
> +	    val > rps->max_freq_softlimit) {
> +		ret = -EINVAL;
> +		goto unlock;
> +	}
> +
> +	rps->min_freq_softlimit = val;
> +
> +	val = clamp_t(int, rps->cur_freq,
> +		      rps->min_freq_softlimit,
> +		      rps->max_freq_softlimit);
> +
> +	/*
> +	 * We still need *_set_rps to process the new min_delay and
> +	 * update the interrupt limits and PMINTRMSK even though
> +	 * frequency request may be unchanged.
> +	 */
> +	intel_rps_set(rps, val);
> +
> +unlock:
> +	mutex_unlock(&rps->lock);
> +
> +	return ret;
> +}
> +
>  /* External interface for intel_ips.ko */
>  
>  static struct drm_i915_private __rcu *ips_mchdev;
> diff --git a/drivers/gpu/drm/i915/gt/intel_rps.h b/drivers/gpu/drm/i915/gt/intel_rps.h
> index 1d2cfc98b510..9a09ff5ebf64 100644
> --- a/drivers/gpu/drm/i915/gt/intel_rps.h
> +++ b/drivers/gpu/drm/i915/gt/intel_rps.h
> @@ -31,6 +31,11 @@ int intel_gpu_freq(struct intel_rps *rps, int val);
>  int intel_freq_opcode(struct intel_rps *rps, int val);
>  u32 intel_rps_get_cagf(struct intel_rps *rps, u32 rpstat1);
>  u32 intel_rps_read_actual_frequency(struct intel_rps *rps);
> +u32 intel_rps_get_requested_frequency(struct intel_rps *rps);
> +u32 intel_rps_get_min_frequency(struct intel_rps *rps);
> +int intel_rps_set_min_frequency(struct intel_rps *rps, u32 val);
> +u32 intel_rps_get_max_frequency(struct intel_rps *rps);
> +int intel_rps_set_max_frequency(struct intel_rps *rps, u32 val);
>  
>  void gen5_rps_irq_handler(struct intel_rps *rps);
>  void gen6_rps_irq_handler(struct intel_rps *rps, u32 pm_iir);
> diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
> index 34d37d46a126..a896bec18255 100644
> --- a/drivers/gpu/drm/i915/i915_pmu.c
> +++ b/drivers/gpu/drm/i915/i915_pmu.c
> @@ -407,7 +407,7 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns)
>  
>  	if (pmu->enable & config_mask(I915_PMU_REQUESTED_FREQUENCY)) {
>  		add_sample_mult(&pmu->sample[__I915_SAMPLE_FREQ_REQ],
> -				intel_gpu_freq(rps, rps->cur_freq),
> +				intel_rps_get_requested_frequency(rps),
>  				period_ns / 1000);
>  	}
>  
> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> index 7d9e90aa3ec0..8ab3c2f8f8e4 100644
> --- a/drivers/gpu/drm/i915/i915_reg.h
> +++ b/drivers/gpu/drm/i915/i915_reg.h
> @@ -9195,6 +9195,8 @@ enum {
>  #define   GEN9_FREQUENCY(x)			((x) << 23)
>  #define   GEN6_OFFSET(x)			((x) << 19)
>  #define   GEN6_AGGRESSIVE_TURBO			(0 << 15)
> +#define   GEN9_SW_REQ_UNSLICE_RATIO_SHIFT 	23
> +
>  #define GEN6_RC_VIDEO_FREQ			_MMIO(0xA00C)
>  #define GEN6_RC_CONTROL				_MMIO(0xA090)
>  #define   GEN6_RC_CTL_RC6pp_ENABLE		(1 << 16)
> diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
> index 873bf996ceb5..f2eee8491b19 100644
> --- a/drivers/gpu/drm/i915/i915_sysfs.c
> +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> @@ -272,7 +272,7 @@ static ssize_t gt_cur_freq_mhz_show(struct device *kdev,
>  	struct drm_i915_private *i915 = kdev_minor_to_i915(kdev);
>  	struct intel_rps *rps = &i915->gt.rps;
>  
> -	return sysfs_emit(buf, "%d\n", intel_gpu_freq(rps, rps->cur_freq));
> +	return sysfs_emit(buf, "%d\n", intel_rps_get_requested_frequency(rps));
>  }
>  
>  static ssize_t gt_boost_freq_mhz_show(struct device *kdev, struct device_attribute *attr, char *buf)
> @@ -326,9 +326,10 @@ static ssize_t vlv_rpe_freq_mhz_show(struct device *kdev,
>  static ssize_t gt_max_freq_mhz_show(struct device *kdev, struct device_attribute *attr, char *buf)
>  {
>  	struct drm_i915_private *dev_priv = kdev_minor_to_i915(kdev);
> -	struct intel_rps *rps = &dev_priv->gt.rps;
> +	struct intel_gt *gt = &dev_priv->gt;
> +	struct intel_rps *rps = &gt->rps;
>  
> -	return sysfs_emit(buf, "%d\n", intel_gpu_freq(rps, rps->max_freq_softlimit));
> +	return sysfs_emit(buf, "%d\n", intel_rps_get_max_frequency(rps));
>  }
>  
>  static ssize_t gt_max_freq_mhz_store(struct device *kdev,
> @@ -336,7 +337,8 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
>  				     const char *buf, size_t count)
>  {
>  	struct drm_i915_private *dev_priv = kdev_minor_to_i915(kdev);
> -	struct intel_rps *rps = &dev_priv->gt.rps;
> +	struct intel_gt *gt = &dev_priv->gt;
> +	struct intel_rps *rps = &gt->rps;
>  	ssize_t ret;
>  	u32 val;
>  
> @@ -344,35 +346,7 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
>  	if (ret)
>  		return ret;
>  
> -	mutex_lock(&rps->lock);
> -
> -	val = intel_freq_opcode(rps, val);
> -	if (val < rps->min_freq ||
> -	    val > rps->max_freq ||
> -	    val < rps->min_freq_softlimit) {
> -		ret = -EINVAL;
> -		goto unlock;
> -	}
> -
> -	if (val > rps->rp0_freq)
> -		DRM_DEBUG("User requested overclocking to %d\n",
> -			  intel_gpu_freq(rps, val));
> -
> -	rps->max_freq_softlimit = val;
> -
> -	val = clamp_t(int, rps->cur_freq,
> -		      rps->min_freq_softlimit,
> -		      rps->max_freq_softlimit);
> -
> -	/*
> -	 * We still need *_set_rps to process the new max_delay and
> -	 * update the interrupt limits and PMINTRMSK even though
> -	 * frequency request may be unchanged.
> -	 */
> -	intel_rps_set(rps, val);
> -
> -unlock:
> -	mutex_unlock(&rps->lock);
> +	ret = intel_rps_set_max_frequency(rps, val);
>  
>  	return ret ?: count;
>  }
> @@ -380,9 +354,10 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
>  static ssize_t gt_min_freq_mhz_show(struct device *kdev, struct device_attribute *attr, char *buf)
>  {
>  	struct drm_i915_private *dev_priv = kdev_minor_to_i915(kdev);
> -	struct intel_rps *rps = &dev_priv->gt.rps;
> +	struct intel_gt *gt = &dev_priv->gt;
> +	struct intel_rps *rps = &gt->rps;
>  
> -	return sysfs_emit(buf, "%d\n", intel_gpu_freq(rps, rps->min_freq_softlimit));
> +	return sysfs_emit(buf, "%d\n", intel_rps_get_min_frequency(rps));
>  }
>  
>  static ssize_t gt_min_freq_mhz_store(struct device *kdev,
> @@ -398,31 +373,7 @@ static ssize_t gt_min_freq_mhz_store(struct device *kdev,
>  	if (ret)
>  		return ret;
>  
> -	mutex_lock(&rps->lock);
> -
> -	val = intel_freq_opcode(rps, val);
> -	if (val < rps->min_freq ||
> -	    val > rps->max_freq ||
> -	    val > rps->max_freq_softlimit) {
> -		ret = -EINVAL;
> -		goto unlock;
> -	}
> -
> -	rps->min_freq_softlimit = val;
> -
> -	val = clamp_t(int, rps->cur_freq,
> -		      rps->min_freq_softlimit,
> -		      rps->max_freq_softlimit);
> -
> -	/*
> -	 * We still need *_set_rps to process the new min_delay and
> -	 * update the interrupt limits and PMINTRMSK even though
> -	 * frequency request may be unchanged.
> -	 */
> -	intel_rps_set(rps, val);
> -
> -unlock:
> -	mutex_unlock(&rps->lock);
> +	ret = intel_rps_set_min_frequency(rps, val);
>  
>  	return ret ?: count;
>  }
> 

* Re: [Intel-gfx] [PATCH 15/16] drm/i915/guc/slpc: slpc selftest
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 15/16] drm/i915/guc/slpc: slpc selftest Vinay Belgaumkar
@ 2021-07-10 18:29   ` Michal Wajdeczko
  2021-07-21  1:06     ` Belgaumkar, Vinay
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 18:29 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> Tests that exercise the slpc get/set frequency interfaces.
> 
> Clamp_max will set max frequency to multiple levels and check
> that slpc requests frequency lower than or equal to it.
> 
> Clamp_min will set min frequency to different levels and check
> if slpc requests are higher or equal to those levels.

2x s/slpc/SLPC

> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_rps.c           |   1 +
>  drivers/gpu/drm/i915/gt/selftest_slpc.c       | 333 ++++++++++++++++++
>  drivers/gpu/drm/i915/gt/selftest_slpc.h       |  12 +
>  .../drm/i915/selftests/i915_live_selftests.h  |   1 +
>  4 files changed, 347 insertions(+)
>  create mode 100644 drivers/gpu/drm/i915/gt/selftest_slpc.c
>  create mode 100644 drivers/gpu/drm/i915/gt/selftest_slpc.h
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
> index 88ffc5d90730..16ac2e840881 100644
> --- a/drivers/gpu/drm/i915/gt/intel_rps.c
> +++ b/drivers/gpu/drm/i915/gt/intel_rps.c
> @@ -2288,4 +2288,5 @@ EXPORT_SYMBOL_GPL(i915_gpu_turbo_disable);
>  
>  #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
>  #include "selftest_rps.c"
> +#include "selftest_slpc.c"
>  #endif
> diff --git a/drivers/gpu/drm/i915/gt/selftest_slpc.c b/drivers/gpu/drm/i915/gt/selftest_slpc.c
> new file mode 100644
> index 000000000000..f440c1cb2afa
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/selftest_slpc.c
> @@ -0,0 +1,333 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2020 Intel Corporation

2021

> + */
> +#include "selftest_slpc.h"
> +#include "selftest_rps.h"
> +
> +#include <linux/pm_qos.h>
> +#include <linux/sort.h>

system headers should go first

> +
> +#include "intel_engine_heartbeat.h"
> +#include "intel_engine_pm.h"
> +#include "intel_gpu_commands.h"
> +#include "intel_gt_clock_utils.h"
> +#include "intel_gt_pm.h"
> +#include "intel_rc6.h"
> +#include "selftest_engine_heartbeat.h"
> +#include "intel_rps.h"
> +#include "selftests/igt_flush_test.h"
> +#include "selftests/igt_spinner.h"

wrong order ?

> +
> +#define NUM_STEPS 5
> +#define H2G_DELAY 50000
> +#define delay_for_h2g() usleep_range(H2G_DELAY, H2G_DELAY + 10000)
> +
> +static int set_min_freq(struct intel_guc_slpc *slpc, int freq)
> +{
> +	int ret;

add empty line

> +	ret = intel_guc_slpc_set_min_freq(slpc, freq);
> +	if (ret) {
> +		pr_err("Could not set min frequency to [%d]\n", freq);
> +		return ret;
> +	} else {
> +		/* Delay to ensure h2g completes */
> +		delay_for_h2g();
> +	}
> +
> +	return ret;
> +}
> +
> +static int set_max_freq(struct intel_guc_slpc *slpc, int freq)
> +{
> +	int ret;

add empty line

> +	ret = intel_guc_slpc_set_max_freq(slpc, freq);
> +	if (ret) {
> +		pr_err("Could not set maximum frequency [%d]\n",
> +			freq);
> +		return ret;
> +	} else {
> +		/* Delay to ensure h2g completes */
> +		delay_for_h2g();
> +	}
> +
> +	return ret;
> +}
> +
> +int live_slpc_clamp_min(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct intel_gt *gt = &i915->gt;
> +	struct intel_guc_slpc *slpc;
> +	struct intel_rps *rps;
> +	struct intel_engine_cs *engine;
> +	enum intel_engine_id id;
> +	struct igt_spinner spin;
> +	int err = 0;

usually "err" is last decl

> +	u32 slpc_min_freq, slpc_max_freq;
> +
> +

too many empty lines

> +	slpc = &gt->uc.guc.slpc;
> +	rps = &gt->rps;

could be initialized in decl above

> +
> +	if (!intel_uc_uses_guc_slpc(&gt->uc))
> +		return 0;
> +
> +	if (igt_spinner_init(&spin, gt))
> +		return -ENOMEM;
> +
> +	if (intel_guc_slpc_get_max_freq(slpc, &slpc_max_freq)) {
> +		pr_err("Could not get SLPC max freq");
> +		return -EIO;
> +	}
> +
> +	if (intel_guc_slpc_get_min_freq(slpc, &slpc_min_freq)) {
> +		pr_err("Could not get SLPC min freq");
> +		return -EIO;
> +	}
> +
> +	if (slpc_min_freq == slpc_max_freq) {
> +		pr_err("Min/Max are fused to the same value");
> +		return -EINVAL;
> +	}

3x missing \n

> +
> +	intel_gt_pm_wait_for_idle(gt);
> +	intel_gt_pm_get(gt);
> +	for_each_engine(engine, gt, id) {
> +		struct i915_request *rq;
> +		u32 step, min_freq, req_freq;
> +		u32 act_freq, max_act_freq;
> +
> +		if (!intel_engine_can_store_dword(engine))
> +			continue;
> +
> +		/* Go from min to max in 5 steps */
> +		step = (slpc_max_freq - slpc_min_freq)/NUM_STEPS;

add spaces ") / NUM"

> +		max_act_freq = slpc_min_freq;
> +		for (min_freq = slpc_min_freq; min_freq < slpc_max_freq; min_freq+=step)

add spaces " += "

> +		{
> +			err = set_min_freq(slpc, min_freq);
> +			if (err)
> +				break;
> +
> +			st_engine_heartbeat_disable(engine);
> +
> +

keep only one empty line

> +			rq = igt_spinner_create_request(&spin,
> +					engine->kernel_context,
> +					MI_NOOP);
> +			if (IS_ERR(rq)) {
> +				err = PTR_ERR(rq);
> +				st_engine_heartbeat_enable(engine);
> +				break;
> +			}
> +
> +			i915_request_add(rq);
> +
> +			if (!igt_wait_for_spinner(&spin, rq)) {
> +				pr_err("%s: Spinner did not start\n",
> +					engine->name);
> +				igt_spinner_end(&spin);
> +				st_engine_heartbeat_enable(engine);
> +				intel_gt_set_wedged(engine->gt);
> +				err = -EIO;
> +				break;
> +			}
> +
> +			/* Wait for GuC to detect business and raise
> +			 * requested frequency if necessary */
> +			delay_for_h2g();
> +
> +			req_freq = intel_rps_read_punit_req_frequency(rps);
> +
> +			/* GuC requests freq in multiples of 50/3 MHz */
> +			if (req_freq < (min_freq - 50/3)) {
> +				pr_err("SWReq is %d, should be at least %d", req_freq,
> +					min_freq - 50/3);
> +				igt_spinner_end(&spin);
> +				st_engine_heartbeat_enable(engine);
> +				err = -EINVAL;
> +				break;
> +			}
> +
> +			act_freq =  intel_rps_read_actual_frequency(rps);
> +			if (act_freq > max_act_freq)
> +				max_act_freq = act_freq;
> +
> +			igt_spinner_end(&spin);
> +			st_engine_heartbeat_enable(engine);
> +		}
> +
> +		pr_info("Max actual frequency for %s was %d",
> +				engine->name, max_act_freq);
> +
> +		/* Actual frequency should rise above min */
> +		if (max_act_freq == slpc_min_freq) {
> +			pr_err("Actual freq did not rise above min");
> +			err = -EINVAL;
> +		}

2x missing \n

and few more below

> +
> +		if (err)
> +			break;
> +	}
> +
> +	/* Restore min/max frequencies */
> +	set_max_freq(slpc, slpc_max_freq);
> +	set_min_freq(slpc, slpc_min_freq);
> +
> +	if (igt_flush_test(gt->i915))
> +		err = -EIO;
> +
> +	intel_gt_pm_put(gt);
> +	igt_spinner_fini(&spin);
> +	intel_gt_pm_wait_for_idle(gt);
> +
> +	return err;
> +}
> +
> +int live_slpc_clamp_max(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct intel_gt *gt = &i915->gt;
> +	struct intel_guc_slpc *slpc;
> +	struct intel_rps *rps;
> +	struct intel_engine_cs *engine;
> +	enum intel_engine_id id;
> +	struct igt_spinner spin;
> +	int err = 0;
> +	u32 slpc_min_freq, slpc_max_freq;
> +
> +	slpc = &gt->uc.guc.slpc;
> +	rps = &gt->rps;
> +
> +	if (!intel_uc_uses_guc_slpc(&gt->uc))
> +		return 0;
> +
> +	if (igt_spinner_init(&spin, gt))
> +		return -ENOMEM;
> +
> +	if (intel_guc_slpc_get_max_freq(slpc, &slpc_max_freq)) {
> +		pr_err("Could not get SLPC max freq");
> +		return -EIO;
> +	}
> +
> +	if (intel_guc_slpc_get_min_freq(slpc, &slpc_min_freq)) {
> +		pr_err("Could not get SLPC min freq");
> +		return -EIO;
> +	}
> +
> +	if (slpc_min_freq == slpc_max_freq) {
> +		pr_err("Min/Max are fused to the same value");
> +		return -EINVAL;
> +	}
> +
> +	intel_gt_pm_wait_for_idle(gt);
> +	intel_gt_pm_get(gt);
> +	for_each_engine(engine, gt, id) {
> +		struct i915_request *rq;
> +		u32 max_freq, req_freq;
> +		u32 act_freq, max_act_freq;
> +		u32 step;
> +
> +		if (!intel_engine_can_store_dword(engine))
> +			continue;
> +
> +		/* Go from max to min in 5 steps */
> +		step = (slpc_max_freq - slpc_min_freq)/NUM_STEPS;
> +		max_act_freq = slpc_min_freq;
> +		for (max_freq = slpc_max_freq; max_freq > slpc_min_freq; max_freq-=step)
> +		{
> +			err = set_max_freq(slpc, max_freq);
> +			if (err)
> +				break;
> +
> +			st_engine_heartbeat_disable(engine);
> +
> +			rq = igt_spinner_create_request(&spin,
> +						engine->kernel_context,
> +						MI_NOOP);
> +			if (IS_ERR(rq)) {
> +				st_engine_heartbeat_enable(engine);
> +				err = PTR_ERR(rq);
> +				break;
> +			}
> +
> +			i915_request_add(rq);
> +
> +			if (!igt_wait_for_spinner(&spin, rq)) {
> +				pr_err("%s: SLPC spinner did not start\n",
> +				       engine->name);
> +				igt_spinner_end(&spin);
> +				st_engine_heartbeat_enable(engine);
> +				intel_gt_set_wedged(engine->gt);
> +				err = -EIO;
> +				break;
> +			}
> +
> +			delay_for_h2g();
> +
> +			/* Verify that SWREQ indeed was set to specific value */
> +			req_freq = intel_rps_read_punit_req_frequency(rps);
> +
> +			/* GuC requests freq in multiples of 50/3 MHz */
> +			if (req_freq > (max_freq + 50/3)) {
> +				pr_err("SWReq is %d, should be at most %d", req_freq,
> +					max_freq + 50/3);
> +				igt_spinner_end(&spin);
> +				st_engine_heartbeat_enable(engine);
> +				err = -EINVAL;
> +				break;
> +			}
> +
> +			act_freq =  intel_rps_read_actual_frequency(rps);
> +			if (act_freq > max_act_freq)
> +				max_act_freq = act_freq;
> +
> +			st_engine_heartbeat_enable(engine);
> +			igt_spinner_end(&spin);
> +
> +			if (err)
> +				break;
> +		}
> +
> +		pr_info("Max actual frequency for %s was %d",
> +				engine->name, max_act_freq);
> +
> +		/* Actual frequency should rise above min */
> +		if (max_act_freq == slpc_min_freq) {
> +			pr_err("Actual freq did not rise above min");
> +			err = -EINVAL;
> +		}
> +
> +		if (igt_flush_test(gt->i915)) {
> +			err = -EIO;
> +			break;
> +		}
> +
> +		if (err)
> +			break;
> +	}
> +
> +	/* Restore min/max freq */
> +	set_max_freq(slpc, slpc_max_freq);
> +	set_min_freq(slpc, slpc_min_freq);
> +
> +	intel_gt_pm_put(gt);
> +	igt_spinner_fini(&spin);
> +	intel_gt_pm_wait_for_idle(gt);
> +
> +	return err;
> +}
> +
> +int intel_slpc_live_selftests(struct drm_i915_private *i915)
> +{
> +	static const struct i915_subtest tests[] = {
> +		SUBTEST(live_slpc_clamp_max),
> +		SUBTEST(live_slpc_clamp_min),
> +	};
> +
> +	if (intel_gt_is_wedged(&i915->gt))
> +		return 0;
> +
> +	return i915_live_subtests(tests, i915);
> +}
> diff --git a/drivers/gpu/drm/i915/gt/selftest_slpc.h b/drivers/gpu/drm/i915/gt/selftest_slpc.h
> new file mode 100644
> index 000000000000..8dfb40916a8c
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/selftest_slpc.h
> @@ -0,0 +1,12 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2020 Intel Corporation

2021

Michal

> + */
> +
> +#ifndef SELFTEST_SLPC_H
> +#define SELFTEST_SLPC_H
> +
> +int live_slpc_clamp_max(void *arg);
> +int live_slpc_clamp_min(void *arg);
> +
> +#endif /* SELFTEST_SLPC_H */
> diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> index e2fd1b61af71..1746a56dda06 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
> @@ -47,5 +47,6 @@ selftest(hangcheck, intel_hangcheck_live_selftests)
>  selftest(execlists, intel_execlists_live_selftests)
>  selftest(ring_submission, intel_ring_submission_live_selftests)
>  selftest(perf, i915_perf_live_selftests)
> +selftest(slpc, intel_slpc_live_selftests)
>  /* Here be dragons: keep last to run last! */
>  selftest(late_gt_pm, intel_gt_pm_late_selftests)
> 

* Re: [Intel-gfx] [PATCH 16/16] drm/i915/guc/rc: Setup and enable GUCRC feature
  2021-07-10  1:20 ` [Intel-gfx] [PATCH 16/16] drm/i915/guc/rc: Setup and enable GUCRC feature Vinay Belgaumkar
@ 2021-07-10 18:41   ` Michal Wajdeczko
  2021-07-21  1:11     ` Belgaumkar, Vinay
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-10 18:41 UTC (permalink / raw)
  To: Vinay Belgaumkar, intel-gfx, dri-devel



On 10.07.2021 03:20, Vinay Belgaumkar wrote:
> This feature hands over the control of HW RC6 to the GUC.
> GUC decides when to put HW into RC6 based on it's internal
> busyness algorithms.
> 
> GUCRC needs GUC submission to be enabled, and only
> supported on Gen12+ for now.
> 
> When GUCRC is enabled, do not set HW RC6. Use a H2G message
> to tell guc to enable GUCRC. When disabling RC6, tell guc to

s/GUC/GuC
s/guc/GuC

> revert RC6 control back to KMD.
> 
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> ---
>  drivers/gpu/drm/i915/Makefile                 |  1 +
>  drivers/gpu/drm/i915/gt/intel_rc6.c           | 22 ++++--
>  .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h  |  6 ++
>  drivers/gpu/drm/i915/gt/uc/intel_guc.c        |  1 +
>  drivers/gpu/drm/i915/gt/uc/intel_guc.h        |  2 +
>  drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c     | 79 +++++++++++++++++++
>  drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h     | 32 ++++++++
>  drivers/gpu/drm/i915/gt/uc/intel_uc.h         |  2 +
>  8 files changed, 140 insertions(+), 5 deletions(-)
>  create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
>  create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h
> 
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index d8eac4468df9..3fc17f20d88e 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -186,6 +186,7 @@ i915-y += gt/uc/intel_uc.o \
>  	  gt/uc/intel_guc_fw.o \
>  	  gt/uc/intel_guc_log.o \
>  	  gt/uc/intel_guc_log_debugfs.o \
> +	  gt/uc/intel_guc_rc.o \
>  	  gt/uc/intel_guc_slpc.o \
>  	  gt/uc/intel_guc_submission.o \
>  	  gt/uc/intel_huc.o \
> diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c
> index 259d7eb4e165..299fcf10b04b 100644
> --- a/drivers/gpu/drm/i915/gt/intel_rc6.c
> +++ b/drivers/gpu/drm/i915/gt/intel_rc6.c
> @@ -98,11 +98,19 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
>  	set(uncore, GEN9_MEDIA_PG_IDLE_HYSTERESIS, 60);
>  	set(uncore, GEN9_RENDER_PG_IDLE_HYSTERESIS, 60);
>  
> -	/* 3a: Enable RC6 */
> -	rc6->ctl_enable =
> -		GEN6_RC_CTL_HW_ENABLE |
> -		GEN6_RC_CTL_RC6_ENABLE |
> -		GEN6_RC_CTL_EI_MODE(1);
> +	/* 3a: Enable RC6
> +	 *
> +	 * With GUCRC, we do not enable bit 31 of RC_CTL,
> +	 * thus allowing GuC to control RC6 entry/exit fully instead.
> +	 * We will not set the HW ENABLE and EI bits
> +	 */
> +	if (!intel_guc_rc_enable(&gt->uc.guc))
> +		rc6->ctl_enable = GEN6_RC_CTL_RC6_ENABLE;
> +	else
> +		rc6->ctl_enable =
> +			GEN6_RC_CTL_HW_ENABLE |
> +			GEN6_RC_CTL_RC6_ENABLE |
> +			GEN6_RC_CTL_EI_MODE(1);
>  
>  	pg_enable =
>  		GEN9_RENDER_PG_ENABLE |
> @@ -513,6 +521,10 @@ static void __intel_rc6_disable(struct intel_rc6 *rc6)
>  {
>  	struct drm_i915_private *i915 = rc6_to_i915(rc6);
>  	struct intel_uncore *uncore = rc6_to_uncore(rc6);
> +	struct intel_gt *gt = rc6_to_gt(rc6);
> +
> +	/* Take control of RC6 back from GuC */
> +	intel_guc_rc_disable(&gt->uc.guc);
>  
>  	intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
>  	if (GRAPHICS_VER(i915) >= 9)
> diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
> index 596cf4b818e5..2ddb9cdc0a59 100644
> --- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
> +++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
> @@ -136,6 +136,7 @@ enum intel_guc_action {
>  	INTEL_GUC_ACTION_CONTEXT_RESET_NOTIFICATION = 0x1008,
>  	INTEL_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION = 0x1009,
>  	INTEL_GUC_ACTION_SLPC_REQUEST = 0x3003,
> +	INTEL_GUC_ACTION_SETUP_PC_GUCRC = 0x3004,
>  	INTEL_GUC_ACTION_AUTHENTICATE_HUC = 0x4000,
>  	INTEL_GUC_ACTION_REGISTER_CONTEXT = 0x4502,
>  	INTEL_GUC_ACTION_DEREGISTER_CONTEXT = 0x4503,
> @@ -146,6 +147,11 @@ enum intel_guc_action {
>  	INTEL_GUC_ACTION_LIMIT
>  };
>  
> +enum intel_guc_rc_options {
> +	INTEL_GUCRC_HOST_CONTROL,
> +	INTEL_GUCRC_FIRMWARE_CONTROL,
> +};
> +
>  enum intel_guc_preempt_options {
>  	INTEL_GUC_PREEMPT_OPTION_DROP_WORK_Q = 0x4,
>  	INTEL_GUC_PREEMPT_OPTION_DROP_SUBMIT_Q = 0x8,
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> index 82863a9bc8e8..0d55b24f7c67 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> @@ -158,6 +158,7 @@ void intel_guc_init_early(struct intel_guc *guc)
>  	intel_guc_log_init_early(&guc->log);
>  	intel_guc_submission_init_early(guc);
>  	intel_guc_slpc_init_early(guc);
> +	intel_guc_rc_init_early(guc);
>  
>  	mutex_init(&guc->send_mutex);
>  	spin_lock_init(&guc->irq_lock);
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> index 0dbbd9cf553f..592d52e5e93c 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> @@ -59,6 +59,8 @@ struct intel_guc {
>  
>  	bool submission_supported;
>  	bool submission_selected;
> +	bool rc_supported;
> +	bool rc_selected;
>  	bool slpc_supported;
>  	bool slpc_selected;
>  
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
> new file mode 100644
> index 000000000000..45b61432c56d
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
> @@ -0,0 +1,79 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2020 Intel Corporation

2021

> +*/

unaligned *

> +
> +#include "intel_guc_rc.h"
> +#include "gt/intel_gt.h"
> +#include "i915_drv.h"
> +
> +static bool __guc_rc_supported(struct intel_guc *guc)
> +{
> +	/* GuC RC is unavailable for pre-Gen12 */
> +	return guc->submission_supported &&
> +		GRAPHICS_VER(guc_to_gt(guc)->i915) >= 12;
> +}
> +
> +static bool __guc_rc_selected(struct intel_guc *guc)
> +{
> +	if (!intel_guc_rc_is_supported(guc))
> +		return false;
> +
> +	return guc->submission_selected;
> +}
> +
> +void intel_guc_rc_init_early(struct intel_guc *guc)
> +{
> +	guc->rc_supported = __guc_rc_supported(guc);
> +	guc->rc_selected = __guc_rc_selected(guc);
> +}
> +
> +static int guc_action_control_gucrc(struct intel_guc *guc, bool enable)
> +{
> +	struct drm_device *drm = &guc_to_gt(guc)->i915->drm;
> +	u32 rc_mode = enable ? INTEL_GUCRC_FIRMWARE_CONTROL :
> +				INTEL_GUCRC_HOST_CONTROL;
> +	u32 action[] = {
> +		INTEL_GUC_ACTION_SETUP_PC_GUCRC,
> +		rc_mode
> +	};
> +	int ret;
> +
> +	ret = intel_guc_send(guc, action, ARRAY_SIZE(action));
> +	if (ret)

since intel_guc_send() may return a non-zero value taken from the data0
RESPONSE field, and assuming this action expects data0 to be MBZ, this
should be:

	ret = ret > 0 ? -EPROTO : ret;

otherwise some static code analyzers might complain
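As an illustration, a minimal sketch of guc_action_control_gucrc() with that folded in (and with the error message left to the caller, as suggested further below):

	static int guc_action_control_gucrc(struct intel_guc *guc, bool enable)
	{
		u32 rc_mode = enable ? INTEL_GUCRC_FIRMWARE_CONTROL :
				       INTEL_GUCRC_HOST_CONTROL;
		u32 action[] = {
			INTEL_GUC_ACTION_SETUP_PC_GUCRC,
			rc_mode
		};
		int ret;

		ret = intel_guc_send(guc, action, ARRAY_SIZE(action));
		if (ret > 0)
			ret = -EPROTO;	/* unexpected non-zero data0 in the response */

		return ret;
	}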

> +		drm_err(drm, "Failed to set GUCRC mode(%d), err=%d\n",

you may want to print error with %pe
and move this message to __guc_rc_control because of the above

> +			rc_mode, ret);
> +
> +	return ret;
> +}
> +
> +static int __guc_rc_control(struct intel_guc *guc, bool enable)
> +{
> +	struct intel_gt *gt = guc_to_gt(guc);
> +	int ret;
> +
> +	if (!intel_uc_uses_guc_rc(&gt->uc))
> +		return -ENOTSUPP;
> +
> +	if (!intel_guc_is_ready(guc))
> +		return -EINVAL;
> +
> +	ret = guc_action_control_gucrc(guc, enable);
> +	if (unlikely(ret))

	drm_err(drm, "Failed to %s GuC RC mode (%pe)\n",
		enabledisable(enable), ERR_PTR(ret));
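Combined with the -EPROTO handling above, the error path in __guc_rc_control() might then look roughly like this (sketch only, reusing the gt/ret locals already declared in the quoted function):

	ret = guc_action_control_gucrc(guc, enable);
	if (ret) {
		drm_err(&gt->i915->drm, "Failed to %s GuC RC (%pe)\n",
			enabledisable(enable), ERR_PTR(ret));
		return ret;
	}

	drm_info(&gt->i915->drm, "GuC RC %s\n", enableddisabled(enable));

	return 0;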

> +		return ret;
> +
> +	drm_info(&gt->i915->drm, "GuC RC %s\n",
> +	         enableddisabled(enable));
> +
> +	return 0;
> +}
> +
> +int intel_guc_rc_enable(struct intel_guc *guc)
> +{
> +	return __guc_rc_control(guc, true);
> +}
> +
> +int intel_guc_rc_disable(struct intel_guc *guc)
> +{
> +	return __guc_rc_control(guc, false);
> +}
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h
> new file mode 100644
> index 000000000000..169e60726e5b
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2020 Intel Corporation

2021

> + */
> +
> +#ifndef _INTEL_GUC_RC_H_
> +#define _INTEL_GUC_RC_H_
> +
> +#include <linux/types.h>

do you need this include here ?

Michal

> +#include "intel_guc_submission.h"
> +
> +void intel_guc_rc_init_early(struct intel_guc *guc);
> +
> +static inline bool intel_guc_rc_is_supported(struct intel_guc *guc)
> +{
> +	return guc->rc_supported;
> +}
> +
> +static inline bool intel_guc_rc_is_wanted(struct intel_guc *guc)
> +{
> +	return guc->submission_selected && intel_guc_rc_is_supported(guc);
> +}
> +
> +static inline bool intel_guc_rc_is_used(struct intel_guc *guc)
> +{
> +	return intel_guc_submission_is_used(guc) && intel_guc_rc_is_wanted(guc);
> +}
> +
> +int intel_guc_rc_enable(struct intel_guc *guc);
> +int intel_guc_rc_disable(struct intel_guc *guc);
> +
> +#endif
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
> index 38e465fd8a0c..29d8ad6d9087 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
> @@ -7,6 +7,7 @@
>  #define _INTEL_UC_H_
>  
>  #include "intel_guc.h"
> +#include "intel_guc_rc.h"
>  #include "intel_guc_submission.h"
>  #include "intel_huc.h"
>  #include "i915_params.h"
> @@ -84,6 +85,7 @@ uc_state_checkers(guc, guc);
>  uc_state_checkers(huc, huc);
>  uc_state_checkers(guc, guc_submission);
>  uc_state_checkers(guc, guc_slpc);
> +uc_state_checkers(guc, guc_rc);
>  
>  #undef uc_state_checkers
>  #undef __uc_state_checker
> 

* Re: [Intel-gfx] [PATCH 02/16] drm/i915/guc/slpc: Initial definitions for slpc
  2021-07-10 14:27   ` Michal Wajdeczko
@ 2021-07-12 18:40     ` Belgaumkar, Vinay
  2021-07-12 23:43     ` Belgaumkar, Vinay
  1 sibling, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-12 18:40 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 7:27 AM, Michal Wajdeczko wrote:
> Hi Vinay,
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Add macros to check for slpc support. This feature is currently supported
>> for gen12+ and enabled whenever guc submission is enabled/selected.
> 
> please try to use consistent names across all patches:
> 
> s/slpc/SLPC
> s/gen12/Gen12
> s/guc/GuC
> 
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_guc.c        |  1 +
>>   drivers/gpu/drm/i915/gt/uc/intel_guc.h        |  2 ++
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 21 +++++++++++++++++++
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.h | 16 ++++++++++++++
>>   drivers/gpu/drm/i915/gt/uc/intel_uc.c         |  6 ++++--
>>   drivers/gpu/drm/i915/gt/uc/intel_uc.h         |  1 +
>>   6 files changed, 45 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> index 979128e28372..b9a809f2d221 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> @@ -157,6 +157,7 @@ void intel_guc_init_early(struct intel_guc *guc)
>>   	intel_guc_ct_init_early(&guc->ct);
>>   	intel_guc_log_init_early(&guc->log);
>>   	intel_guc_submission_init_early(guc);
>> +	intel_guc_slpc_init_early(guc);
>>   
>>   	mutex_init(&guc->send_mutex);
>>   	spin_lock_init(&guc->irq_lock);
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> index 5d94cf482516..e5a456918b88 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> @@ -57,6 +57,8 @@ struct intel_guc {
>>   
>>   	bool submission_supported;
>>   	bool submission_selected;
>> +	bool slpc_supported;
>> +	bool slpc_selected;
>>   
>>   	struct i915_vma *ads_vma;
>>   	struct __guc_ads_blob *ads_blob;
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> index 9c102bf0c8e3..e2644a05f298 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> @@ -2351,6 +2351,27 @@ void intel_guc_submission_init_early(struct intel_guc *guc)
>>   	guc->submission_selected = __guc_submission_selected(guc);
>>   }
>>   
>> +static bool __guc_slpc_supported(struct intel_guc *guc)
> 
> hmm, easy to confuse with intel_guc_slpc_is_supported, so maybe:
> 
> __detect_slpc_supported()

ok.
> 
> (yes, I know you were following code above)
> 
>> +{
>> +	/* GuC slpc is unavailable for pre-Gen12 */
> 
> s/slpc/SLPC
> 
>> +	return guc->submission_supported &&
>> +		GRAPHICS_VER(guc_to_gt(guc)->i915) >= 12;
>> +}
>> +
>> +static bool __guc_slpc_selected(struct intel_guc *guc)
>> +{
>> +	if (!intel_guc_slpc_is_supported(guc))
>> +		return false;
>> +
>> +	return guc->submission_selected;
>> +}
>> +
>> +void intel_guc_slpc_init_early(struct intel_guc *guc)
>> +{
>> +	guc->slpc_supported = __guc_slpc_supported(guc);
>> +	guc->slpc_selected = __guc_slpc_selected(guc);
>> +}
> 
> in patch 4/16 you are introducing intel_guc_slpc.c|h so to have proper
> encapsulation better to define this function as
> 
> void intel_guc_slpc_init_early(struct intel_guc_slpc *slpc) { }

the uc_state_checkers force the use of struct intel_guc *guc as the
param. I don't think I can change that to refer to slpc instead.

static inline bool intel_guc_slpc_is_supported(struct intel_guc *guc)
{
         return guc->slpc_supported;
}

slpc_supported needs to be inside the guc struct.

> 
> and move it to intel_guc_slpc.c
> 
>> +
>>   static inline struct intel_context *
>>   g2h_context_lookup(struct intel_guc *guc, u32 desc_idx)
>>   {
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>> index be767eb6ff71..7ae5fd052faf 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>> @@ -13,6 +13,7 @@
>>   struct drm_printer;
>>   struct intel_engine_cs;
>>   
>> +void intel_guc_slpc_init_early(struct intel_guc *guc);
> 
> it really does not belong to this .h
> 
>>   void intel_guc_submission_init_early(struct intel_guc *guc);
>>   int intel_guc_submission_init(struct intel_guc *guc);
>>   void intel_guc_submission_enable(struct intel_guc *guc);
>> @@ -50,4 +51,19 @@ static inline bool intel_guc_submission_is_used(struct intel_guc *guc)
>>   	return intel_guc_is_used(guc) && intel_guc_submission_is_wanted(guc);
>>   }
>>   
>> +static inline bool intel_guc_slpc_is_supported(struct intel_guc *guc)
>> +{
>> +	return guc->slpc_supported;
>> +}
>> +
>> +static inline bool intel_guc_slpc_is_wanted(struct intel_guc *guc)
>> +{
>> +	return guc->slpc_selected;
>> +}
>> +
>> +static inline bool intel_guc_slpc_is_used(struct intel_guc *guc)
>> +{
>> +	return intel_guc_submission_is_used(guc) && intel_guc_slpc_is_wanted(guc);
>> +}
> 
> did you try to define them in intel_guc_slpc.h ?
> 
> note that to avoid circular dependencies you can define slpc struct in
> intel_guc_slpc_types.h and then
> 
> in intel_guc.h:
> 	#include "intel_guc_slpc_types.h" instead of intel_guc_slpc.h
> 
> in intel_guc_slpc.h:
> 	#include "intel_guc.h"
> 	#include "intel_guc_slpc_types.h"
> 	#include "intel_guc_submission.h"

Sure, will give that a try.

> 
>> +
>>   #endif
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> index 61be0aa81492..dca5f6d0641b 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> @@ -76,16 +76,18 @@ static void __confirm_options(struct intel_uc *uc)
>>   	struct drm_i915_private *i915 = uc_to_gt(uc)->i915;
>>   
>>   	drm_dbg(&i915->drm,
>> -		"enable_guc=%d (guc:%s submission:%s huc:%s)\n",
>> +		"enable_guc=%d (guc:%s submission:%s huc:%s slpc:%s)\n",
>>   		i915->params.enable_guc,
>>   		yesno(intel_uc_wants_guc(uc)),
>>   		yesno(intel_uc_wants_guc_submission(uc)),
>> -		yesno(intel_uc_wants_huc(uc)));
>> +		yesno(intel_uc_wants_huc(uc)),
>> +		yesno(intel_uc_wants_guc_slpc(uc)));
>>   
>>   	if (i915->params.enable_guc == 0) {
>>   		GEM_BUG_ON(intel_uc_wants_guc(uc));
>>   		GEM_BUG_ON(intel_uc_wants_guc_submission(uc));
>>   		GEM_BUG_ON(intel_uc_wants_huc(uc));
>> +		GEM_BUG_ON(intel_uc_wants_guc_slpc(uc));
>>   		return;
>>   	}
>>   
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
>> index e2da2b6e76e1..38e465fd8a0c 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
>> @@ -83,6 +83,7 @@ __uc_state_checker(x, func, uses, used)
>>   uc_state_checkers(guc, guc);
>>   uc_state_checkers(huc, huc);
>>   uc_state_checkers(guc, guc_submission);
>> +uc_state_checkers(guc, guc_slpc);
>>   
>>   #undef uc_state_checkers
>>   #undef __uc_state_checker
>>

* Re: [Intel-gfx] [PATCH 02/16] drm/i915/guc/slpc: Initial definitions for slpc
  2021-07-10 14:27   ` Michal Wajdeczko
  2021-07-12 18:40     ` Belgaumkar, Vinay
@ 2021-07-12 23:43     ` Belgaumkar, Vinay
  1 sibling, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-12 23:43 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 7:27 AM, Michal Wajdeczko wrote:
> Hi Vinay,
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Add macros to check for slpc support. This feature is currently supported
>> for gen12+ and enabled whenever guc submission is enabled/selected.
> 
> please try to use consistent names across all patches:
> 
> s/slpc/SLPC
> s/gen12/Gen12
> s/guc/GuC

Ok.

> 
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_guc.c        |  1 +
>>   drivers/gpu/drm/i915/gt/uc/intel_guc.h        |  2 ++
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 21 +++++++++++++++++++
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.h | 16 ++++++++++++++
>>   drivers/gpu/drm/i915/gt/uc/intel_uc.c         |  6 ++++--
>>   drivers/gpu/drm/i915/gt/uc/intel_uc.h         |  1 +
>>   6 files changed, 45 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> index 979128e28372..b9a809f2d221 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> @@ -157,6 +157,7 @@ void intel_guc_init_early(struct intel_guc *guc)
>>   	intel_guc_ct_init_early(&guc->ct);
>>   	intel_guc_log_init_early(&guc->log);
>>   	intel_guc_submission_init_early(guc);
>> +	intel_guc_slpc_init_early(guc);
>>   
>>   	mutex_init(&guc->send_mutex);
>>   	spin_lock_init(&guc->irq_lock);
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> index 5d94cf482516..e5a456918b88 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> @@ -57,6 +57,8 @@ struct intel_guc {
>>   
>>   	bool submission_supported;
>>   	bool submission_selected;
>> +	bool slpc_supported;
>> +	bool slpc_selected;
>>   
>>   	struct i915_vma *ads_vma;
>>   	struct __guc_ads_blob *ads_blob;
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> index 9c102bf0c8e3..e2644a05f298 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> @@ -2351,6 +2351,27 @@ void intel_guc_submission_init_early(struct intel_guc *guc)
>>   	guc->submission_selected = __guc_submission_selected(guc);
>>   }
>>   
>> +static bool __guc_slpc_supported(struct intel_guc *guc)
> 
> hmm, easy to confuse with intel_guc_slpc_is_supported, so maybe:
> 
> __detect_slpc_supported()
> 
> (yes, I know you were following code above)
> 
>> +{
>> +	/* GuC slpc is unavailable for pre-Gen12 */
> 
> s/slpc/SLPC

  Ok.

> 
>> +	return guc->submission_supported &&
>> +		GRAPHICS_VER(guc_to_gt(guc)->i915) >= 12;
>> +}
>> +
>> +static bool __guc_slpc_selected(struct intel_guc *guc)
>> +{
>> +	if (!intel_guc_slpc_is_supported(guc))
>> +		return false;
>> +
>> +	return guc->submission_selected;
>> +}
>> +
>> +void intel_guc_slpc_init_early(struct intel_guc *guc)
>> +{
>> +	guc->slpc_supported = __guc_slpc_supported(guc);
>> +	guc->slpc_selected = __guc_slpc_selected(guc);
>> +}
> 
> in patch 4/16 you are introducing intel_guc_slpc.c|h so to have proper
> encapsulation better to define this function as
> 
> void intel_guc_slpc_init_early(struct intel_guc_slpc *slpc) { }
> 
> and move it to intel_guc_slpc.c

done.

> 
>> +
>>   static inline struct intel_context *
>>   g2h_context_lookup(struct intel_guc *guc, u32 desc_idx)
>>   {
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>> index be767eb6ff71..7ae5fd052faf 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>> @@ -13,6 +13,7 @@
>>   struct drm_printer;
>>   struct intel_engine_cs;
>>   
>> +void intel_guc_slpc_init_early(struct intel_guc *guc);
> 
> it really does not belong to this .h
> 
>>   void intel_guc_submission_init_early(struct intel_guc *guc);
>>   int intel_guc_submission_init(struct intel_guc *guc);
>>   void intel_guc_submission_enable(struct intel_guc *guc);
>> @@ -50,4 +51,19 @@ static inline bool intel_guc_submission_is_used(struct intel_guc *guc)
>>   	return intel_guc_is_used(guc) && intel_guc_submission_is_wanted(guc);
>>   }
>>   
>> +static inline bool intel_guc_slpc_is_supported(struct intel_guc *guc)
>> +{
>> +	return guc->slpc_supported;
>> +}
>> +
>> +static inline bool intel_guc_slpc_is_wanted(struct intel_guc *guc)
>> +{
>> +	return guc->slpc_selected;
>> +}
>> +
>> +static inline bool intel_guc_slpc_is_used(struct intel_guc *guc)
>> +{
>> +	return intel_guc_submission_is_used(guc) && intel_guc_slpc_is_wanted(guc);
>> +}
> 
> did you try to define them in intel_guc_slpc.h ?
> 
> note that to avoid circular dependencies you can define slpc struct in
> intel_guc_slpc_types.h and then
> 
> in intel_guc.h:
> 	#include "intel_guc_slpc_types.h" instead of intel_guc_slpc.h
> 
> in intel_guc_slpc.h:
> 	#include "intel_guc.h"
> 	#include "intel_guc_slpc_types.h"
> 	#include "intel_guc_submission.h"
> 

that worked.

Thanks,
Vinay.
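For reference, a minimal sketch of the intel_guc_slpc_types.h split being discussed; the member list below just mirrors fields already appearing elsewhere in this series (with vaddr typed as slpc_shared_data per the comment above) and is not the exact header from the next revision:

	/* intel_guc_slpc_types.h -- sketch only */
	#ifndef _INTEL_GUC_SLPC_TYPES_H_
	#define _INTEL_GUC_SLPC_TYPES_H_

	#include <linux/types.h>

	struct intel_guc_slpc {
		struct i915_vma *vma;
		struct slpc_shared_data *vaddr;

		/* platform frequency limits */
		u32 min_freq;
		u32 rp0_freq;
		u32 rp1_freq;

		/* frequency softlimits */
		u32 min_freq_softlimit;
		u32 max_freq_softlimit;
	};

	#endif /* _INTEL_GUC_SLPC_TYPES_H_ */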

>> +
>>   #endif
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> index 61be0aa81492..dca5f6d0641b 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> @@ -76,16 +76,18 @@ static void __confirm_options(struct intel_uc *uc)
>>   	struct drm_i915_private *i915 = uc_to_gt(uc)->i915;
>>   
>>   	drm_dbg(&i915->drm,
>> -		"enable_guc=%d (guc:%s submission:%s huc:%s)\n",
>> +		"enable_guc=%d (guc:%s submission:%s huc:%s slpc:%s)\n",
>>   		i915->params.enable_guc,
>>   		yesno(intel_uc_wants_guc(uc)),
>>   		yesno(intel_uc_wants_guc_submission(uc)),
>> -		yesno(intel_uc_wants_huc(uc)));
>> +		yesno(intel_uc_wants_huc(uc)),
>> +		yesno(intel_uc_wants_guc_slpc(uc)));
>>   
>>   	if (i915->params.enable_guc == 0) {
>>   		GEM_BUG_ON(intel_uc_wants_guc(uc));
>>   		GEM_BUG_ON(intel_uc_wants_guc_submission(uc));
>>   		GEM_BUG_ON(intel_uc_wants_huc(uc));
>> +		GEM_BUG_ON(intel_uc_wants_guc_slpc(uc));
>>   		return;
>>   	}
>>   
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
>> index e2da2b6e76e1..38e465fd8a0c 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
>> @@ -83,6 +83,7 @@ __uc_state_checker(x, func, uses, used)
>>   uc_state_checkers(guc, guc);
>>   uc_state_checkers(huc, huc);
>>   uc_state_checkers(guc, guc_submission);
>> +uc_state_checkers(guc, guc_slpc);
>>   
>>   #undef uc_state_checkers
>>   #undef __uc_state_checker
>>

* Re: [Intel-gfx] [PATCH 04/16] drm/i915/guc/slpc: Lay out slpc init/enable/disable/fini
  2021-07-10 14:35   ` Michal Wajdeczko
@ 2021-07-13  0:37     ` Belgaumkar, Vinay
  0 siblings, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-13  0:37 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 7:35 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Declare header and source files for SLPC, along with init and
>> enable/disable function templates.
> 
> later you claim that "disable" is not needed

Changed.

> 
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
>> ---
>>   drivers/gpu/drm/i915/Makefile               |  1 +
>>   drivers/gpu/drm/i915/gt/uc/intel_guc.h      |  2 ++
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 34 +++++++++++++++++++++
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h | 16 ++++++++++
>>   4 files changed, 53 insertions(+)
>>   create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>>   create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>>
>> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
>> index ab7679957623..d8eac4468df9 100644
>> --- a/drivers/gpu/drm/i915/Makefile
>> +++ b/drivers/gpu/drm/i915/Makefile
>> @@ -186,6 +186,7 @@ i915-y += gt/uc/intel_uc.o \
>>   	  gt/uc/intel_guc_fw.o \
>>   	  gt/uc/intel_guc_log.o \
>>   	  gt/uc/intel_guc_log_debugfs.o \
>> +	  gt/uc/intel_guc_slpc.o \
>>   	  gt/uc/intel_guc_submission.o \
>>   	  gt/uc/intel_huc.o \
>>   	  gt/uc/intel_huc_debugfs.o \
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> index e5a456918b88..0dbbd9cf553f 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> @@ -15,6 +15,7 @@
>>   #include "intel_guc_ct.h"
>>   #include "intel_guc_log.h"
>>   #include "intel_guc_reg.h"
>> +#include "intel_guc_slpc.h"
>>   #include "intel_uc_fw.h"
>>   #include "i915_utils.h"
>>   #include "i915_vma.h"
>> @@ -30,6 +31,7 @@ struct intel_guc {
>>   	struct intel_uc_fw fw;
>>   	struct intel_guc_log log;
>>   	struct intel_guc_ct ct;
>> +	struct intel_guc_slpc slpc;
>>   
>>   	/* Global engine used to submit requests to GuC */
>>   	struct i915_sched_engine *sched_engine;
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> new file mode 100644
>> index 000000000000..c1f569d2300d
>> --- /dev/null
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> @@ -0,0 +1,34 @@
>> +/*
>> + * SPDX-License-Identifier: MIT
> 
> SPDX tag shall be in very first line, for .c:
> 
> // SPDX-License-Identifier: MIT
> 
>> + *
>> + * Copyright © 2020 Intel Corporation
> 
> 2021

done.

> 
>> + */
>> +
>> +#include "intel_guc_slpc.h"
>> +
>> +int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
>> +{
>> +	return 0;
>> +}
>> +
>> +/*
>> + * intel_guc_slpc_enable() - Start SLPC
>> + * @slpc: pointer to intel_guc_slpc.
>> + *
>> + * SLPC is enabled by setting up the shared data structure and
>> + * sending reset event to GuC SLPC. Initial data is setup in
>> + * intel_guc_slpc_init. Here we send the reset event. We do
>> + * not currently need a slpc_disable since this is taken care
>> + * of automatically when a reset/suspend occurs and the guc
> 
> s/guc/GuC
> 
>> + * channels are destroyed.
> 
> you mean CTB ?

yes, fixed.

> 
>> + *
>> + * Return: 0 on success, non-zero error code on failure.
>> + */
>> +int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>> +{
>> +	return 0;
>> +}
>> +
>> +void intel_guc_slpc_fini(struct intel_guc_slpc *slpc)
>> +{
>> +}
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> new file mode 100644
>> index 000000000000..74fd86769163
>> --- /dev/null
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> @@ -0,0 +1,16 @@
>> +/*
>> + * SPDX-License-Identifier: MIT
> 
> SPDX tag shall be in very first line, for .h:
> 
> /* SPDX-License-Identifier: MIT */
> 
>> + *
>> + * Copyright © 2020 Intel Corporation
> 
> 2021
> 
>> + */
>> +#ifndef _INTEL_GUC_SLPC_H_
>> +#define _INTEL_GUC_SLPC_H_
>> +
>> +struct intel_guc_slpc {
>> +};
> 
> move all data definitions to intel_guc_slpc_types.h and include it here
> 
>> +
>> +int intel_guc_slpc_init(struct intel_guc_slpc *slpc);
>> +int intel_guc_slpc_enable(struct intel_guc_slpc *slpc);
>> +void intel_guc_slpc_fini(struct intel_guc_slpc *slpc);
>> +
>> +#endif
>>
> 
> and as suggested in comment to 2/14 you should likely move this patch to
> the front of the series

Yes, squashed with the first patch.

Thanks,
Vinay.

> 
> Michal
> 

* Re: [Intel-gfx] [PATCH 05/16] drm/i915/guc/slpc: Adding slpc communication interfaces
  2021-07-10 15:52   ` Michal Wajdeczko
@ 2021-07-13 23:22     ` Belgaumkar, Vinay
  0 siblings, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-13 23:22 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 8:52 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Replicate the SLPC header file in GuC for the most part. There are
> 
> what do you mean by "replicate" here?
> 
>> some SLPC mode based parameters which haven't been included since
>> we are not using them.
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_guc.c        |   4 +
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h   |   2 +
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h   |   2 +
>>   .../gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h  | 255 ++++++++++++++++++
>>   4 files changed, 263 insertions(+)
>>   create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> index b9a809f2d221..9d61b2d54de4 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> @@ -202,11 +202,15 @@ static u32 guc_ctl_debug_flags(struct intel_guc *guc)
>>   
>>   static u32 guc_ctl_feature_flags(struct intel_guc *guc)
>>   {
>> +	struct intel_gt *gt = guc_to_gt(guc);
>>   	u32 flags = 0;
>>   
>>   	if (!intel_guc_submission_is_used(guc))
>>   		flags |= GUC_CTL_DISABLE_SCHEDULER;
>>   
>> +	if (intel_uc_uses_guc_slpc(&gt->uc))
>> +		flags |= GUC_CTL_ENABLE_SLPC;
>> +
>>   	return flags;
>>   }
>>   
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
>> index 94bb1ca6f889..19e2504d7a36 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
>> @@ -114,6 +114,8 @@
>>   #define   GUC_ADS_ADDR_SHIFT		1
>>   #define   GUC_ADS_ADDR_MASK		(0xFFFFF << GUC_ADS_ADDR_SHIFT)
>>   
>> +#define GUC_CTL_ENABLE_SLPC            BIT(2)
> 
> this should be defined closer to GUC_CTL_FEATURE

done.

> 
>> +
>>   #define GUC_CTL_MAX_DWORDS		(SOFT_SCRATCH_COUNT - 2) /* [1..14] */
>>   
>>   /* Generic GT SysInfo data types */
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> index 74fd86769163..98036459a1a3 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> @@ -6,6 +6,8 @@
>>   #ifndef _INTEL_GUC_SLPC_H_
>>   #define _INTEL_GUC_SLPC_H_
>>   
>> +#include "intel_guc_slpc_fwif.h"
> 
> doesn't seem to be needed right now

Removed for this patch.
> 
>> +
>>   struct intel_guc_slpc {
>>   };
>>   
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h
>> new file mode 100644
>> index 000000000000..2a5e71428374
>> --- /dev/null
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_fwif.h
> 
> I've started to move all pure ABI definitions to files in abi/ folder,
> leaving in guc_fwif.h only our next level helpers/wrappers.
> 
> Can you move these SLPC definition there too ? maybe as dedicated:
> 
> 	abi/guc_slpc_abi.h

done.

> 
>> @@ -0,0 +1,255 @@
>> +/*
>> + * SPDX-License-Identifier: MIT
> 
> use proper format
> 
>> + *
>> + * Copyright © 2020 Intel Corporation
> 
> 2021
> 
>> + */
>> +#ifndef _INTEL_GUC_SLPC_FWIF_H_
>> +#define _INTEL_GUC_SLPC_FWIF_H_
>> +
>> +#include <linux/types.h>
>> +
>> +/* This file replicates the header in GuC code for handling SLPC related
>> + * data structures and sizes
>> + */
> 
> use proper format for multi-line comments:
> 
> 	/*
> 	 * blah blah
> 	 * blah blah
> 	 */

done.

> 
>> +
>> +/* SLPC exposes certain parameters for global configuration by the host.
>> + * These are referred to as override parameters, because in most cases
>> + * the host will not need to modify the default values used by SLPC.
>> + * SLPC remembers the default values which allows the host to easily restore
>> + * them by simply unsetting the override. The host can set or unset override
>> + * parameters during SLPC (re-)initialization using the SLPC Reset event.
>> + * The host can also set or unset override parameters on the fly using the
>> + * Parameter Set and Parameter Unset events
>> + */
>> +#define SLPC_MAX_OVERRIDE_PARAMETERS	256
>> +#define SLPC_OVERRIDE_BITFIELD_SIZE \
>> +		(SLPC_MAX_OVERRIDE_PARAMETERS / 32)
>> +
>> +#define SLPC_PAGE_SIZE_BYTES			4096
>> +#define SLPC_CACHELINE_SIZE_BYTES		64
>> +#define SLPC_SHARE_DATA_SIZE_BYTE_HEADER	SLPC_CACHELINE_SIZE_BYTES
>> +#define SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO	SLPC_CACHELINE_SIZE_BYTES
>> +#define SLPC_SHARE_DATA_SIZE_BYTE_TASK_STATE	SLPC_CACHELINE_SIZE_BYTES
>> +#define SLPC_SHARE_DATA_MODE_DEFN_TABLE_SIZE	SLPC_PAGE_SIZE_BYTES
> 
> can you put a simple diagram that would describe this layout ?

done for the shared data struct.
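For reference, the layout implied by the size defines in this hunk works out as below (an illustrative sketch derived from the quoted values, not the diagram that was actually added):

	/*
	 * +--------------------------------------+ 0x0000
	 * | header (size, global_state, ...)     |   64 B
	 * +--------------------------------------+
	 * | platform_info                        |   64 B
	 * +--------------------------------------+
	 * | task_state_data                      |   64 B
	 * +--------------------------------------+
	 * | override param bitfield + values     | 1088 B
	 * +--------------------------------------+
	 * | reserved/other                       | 2816 B
	 * +--------------------------------------+ 0x1000
	 * | mode definition table (page 2)       | 4096 B
	 * +--------------------------------------+ 0x2000 (8192 B max total)
	 *
	 * The 1088 B parameter block is 256 values * 4 B plus 8 bitfield
	 * dwords * 4 B = 1056 B, rounded up to the next 64 B cacheline.
	 */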

> 
>> +
>> +#define SLPC_SHARE_DATA_SIZE_BYTE_MAX		(2 * SLPC_PAGE_SIZE_BYTES)
>> +
>> +/* Cacheline size aligned (Total size needed for
>> + * SLPM_KMD_MAX_OVERRIDE_PARAMETERS=256 is 1088 bytes)
>> + */
>> +#define SLPC_SHARE_DATA_SIZE_BYTE_PARAM		(((((SLPC_MAX_OVERRIDE_PARAMETERS * 4) \
>> +						+ ((SLPC_MAX_OVERRIDE_PARAMETERS / 32) * 4)) \
>> +		+ (SLPC_CACHELINE_SIZE_BYTES-1)) / SLPC_CACHELINE_SIZE_BYTES)*SLPC_CACHELINE_SIZE_BYTES)
>> +
>> +#define SLPC_SHARE_DATA_SIZE_BYTE_OTHER		(SLPC_SHARE_DATA_SIZE_BYTE_MAX - \
>> +					(SLPC_SHARE_DATA_SIZE_BYTE_HEADER \
>> +					+ SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO \
>> +					+ SLPC_SHARE_DATA_SIZE_BYTE_TASK_STATE \
>> +					+ SLPC_SHARE_DATA_SIZE_BYTE_PARAM \
>> +					+ SLPC_SHARE_DATA_MODE_DEFN_TABLE_SIZE))
>> +
>> +#define SLPC_EVENT(id, argc)			((u32)(id) << 8 | (argc))
>> +
>> +#define SLPC_PARAM_TASK_DEFAULT			0
>> +#define SLPC_PARAM_TASK_ENABLED			1
>> +#define SLPC_PARAM_TASK_DISABLED		2
>> +#define SLPC_PARAM_TASK_UNKNOWN			3
> 
> many values below are defined as enum, why these values are #defines ?
> 
> and is there any relation to these ones defined below (look similar)?

No, they are different, added an enum.

> 
>   +	SLPC_PARAM_TASK_ENABLE_GTPERF = 0,
>   +	SLPC_PARAM_TASK_DISABLE_GTPERF = 1,
>   +	SLPC_PARAM_TASK_ENABLE_BALANCER = 2,
>   +	SLPC_PARAM_TASK_DISABLE_BALANCER = 3,
>   +	SLPC_PARAM_TASK_ENABLE_DCC = 4,
>   +	SLPC_PARAM_TASK_DISABLE_DCC = 5,
> 
>> +
>> +enum slpc_status {
>> +	SLPC_STATUS_OK = 0,
>> +	SLPC_STATUS_ERROR = 1,
>> +	SLPC_STATUS_ILLEGAL_COMMAND = 2,
>> +	SLPC_STATUS_INVALID_ARGS = 3,
>> +	SLPC_STATUS_INVALID_PARAMS = 4,
>> +	SLPC_STATUS_INVALID_DATA = 5,
>> +	SLPC_STATUS_OUT_OF_RANGE = 6,
>> +	SLPC_STATUS_NOT_SUPPORTED = 7,
>> +	SLPC_STATUS_NOT_IMPLEMENTED = 8,
>> +	SLPC_STATUS_NO_DATA = 9,
>> +	SLPC_STATUS_EVENT_NOT_REGISTERED = 10,
>> +	SLPC_STATUS_REGISTER_LOCKED = 11,
>> +	SLPC_STATUS_TEMPORARILY_UNAVAILABLE = 12,
>> +	SLPC_STATUS_VALUE_ALREADY_SET = 13,
>> +	SLPC_STATUS_VALUE_ALREADY_UNSET = 14,
>> +	SLPC_STATUS_VALUE_NOT_CHANGED = 15,
>> +	SLPC_STATUS_MEMIO_ERROR = 16,
>> +	SLPC_STATUS_EVENT_QUEUED_REQ_DPC = 17,
>> +	SLPC_STATUS_EVENT_QUEUED_NOREQ_DPC = 18,
>> +	SLPC_STATUS_NO_EVENT_QUEUED = 19,
>> +	SLPC_STATUS_OUT_OF_SPACE = 20,
>> +	SLPC_STATUS_TIMEOUT = 21,
>> +	SLPC_STATUS_NO_LOCK = 22,
>> +	SLPC_STATUS_MAX
>> +};
>> +
>> +enum slpc_event_id {
>> +	SLPC_EVENT_RESET = 0,
>> +	SLPC_EVENT_SHUTDOWN = 1,
>> +	SLPC_EVENT_PLATFORM_INFO_CHANGE = 2,
>> +	SLPC_EVENT_DISPLAY_MODE_CHANGE = 3,
>> +	SLPC_EVENT_FLIP_COMPLETE = 4,
>> +	SLPC_EVENT_QUERY_TASK_STATE = 5,
>> +	SLPC_EVENT_PARAMETER_SET = 6,
>> +	SLPC_EVENT_PARAMETER_UNSET = 7,
>> +};
>> +
>> +enum slpc_param_id {
>> +	SLPC_PARAM_TASK_ENABLE_GTPERF = 0,
>> +	SLPC_PARAM_TASK_DISABLE_GTPERF = 1,
>> +	SLPC_PARAM_TASK_ENABLE_BALANCER = 2,
>> +	SLPC_PARAM_TASK_DISABLE_BALANCER = 3,
>> +	SLPC_PARAM_TASK_ENABLE_DCC = 4,
>> +	SLPC_PARAM_TASK_DISABLE_DCC = 5,
>> +	SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ = 6,
>> +	SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ = 7,
>> +	SLPC_PARAM_GLOBAL_MIN_GT_SLICE_FREQ_MHZ = 8,
>> +	SLPC_PARAM_GLOBAL_MAX_GT_SLICE_FREQ_MHZ = 9,
>> +	SLPC_PARAM_GTPERF_THRESHOLD_MAX_FPS = 10,
>> +	SLPC_PARAM_GLOBAL_DISABLE_GT_FREQ_MANAGEMENT = 11,
>> +	SLPC_PARAM_GTPERF_ENABLE_FRAMERATE_STALLING = 12,
>> +	SLPC_PARAM_GLOBAL_DISABLE_RC6_MODE_CHANGE = 13,
>> +	SLPC_PARAM_GLOBAL_OC_UNSLICE_FREQ_MHZ = 14,
>> +	SLPC_PARAM_GLOBAL_OC_SLICE_FREQ_MHZ = 15,
>> +	SLPC_PARAM_GLOBAL_ENABLE_IA_GT_BALANCING = 16,
>> +	SLPC_PARAM_GLOBAL_ENABLE_ADAPTIVE_BURST_TURBO = 17,
>> +	SLPC_PARAM_GLOBAL_ENABLE_EVAL_MODE = 18,
>> +	SLPC_PARAM_GLOBAL_ENABLE_BALANCER_IN_NON_GAMING_MODE = 19,
>> +	SLPC_PARAM_GLOBAL_RT_MODE_TURBO_FREQ_DELTA_MHZ = 20,
>> +	SLPC_PARAM_PWRGATE_RC_MODE = 21,
>> +	SLPC_PARAM_EDR_MODE_COMPUTE_TIMEOUT_MS = 22,
>> +	SLPC_PARAM_EDR_QOS_FREQ_MHZ = 23,
>> +	SLPC_PARAM_MEDIA_FF_RATIO_MODE = 24,
>> +	SLPC_PARAM_ENABLE_IA_FREQ_LIMITING = 25,
>> +	SLPC_PARAM_STRATEGIES = 26,
>> +	SLPC_PARAM_POWER_PROFILE = 27,
>> +	SLPC_IGNORE_EFFICIENT_FREQUENCY = 28,
> 
> no PARAM tag inside this enum name
> 
>> +	SLPC_MAX_PARAM = 32,
> 
> can we move this out of enum, maybe as standalone #define ?
> or remove it as doesn't seem to be useful at all

Added the PARAM tag; it needs to be part of this enum.

> 
>> +};
>> +
>> +enum slpc_global_state {
>> +	SLPC_GLOBAL_STATE_NOT_RUNNING = 0,
>> +	SLPC_GLOBAL_STATE_INITIALIZING = 1,
>> +	SLPC_GLOBAL_STATE_RESETTING = 2,
>> +	SLPC_GLOBAL_STATE_RUNNING = 3,
>> +	SLPC_GLOBAL_STATE_SHUTTING_DOWN = 4,
>> +	SLPC_GLOBAL_STATE_ERROR = 5
>> +};
>> +
>> +enum slpc_platform_sku {
>> +	SLPC_PLATFORM_SKU_UNDEFINED = 0,
>> +	SLPC_PLATFORM_SKU_ULX = 1,
>> +	SLPC_PLATFORM_SKU_ULT = 2,
>> +	SLPC_PLATFORM_SKU_T = 3,
>> +	SLPC_PLATFORM_SKU_MOBL = 4,
>> +	SLPC_PLATFORM_SKU_DT = 5,
>> +	SLPC_PLATFORM_SKU_UNKNOWN = 6,
>> +};
>> +
>> +struct slpc_platform_info {
>> +	union {
>> +		u32 sku;  /**< SKU info */
>> +		struct {
>> +			u32 reserved:8;
>> +			u32 fused_slice_count:8;
>> +			u32 reserved1:16;
>> +		};
>> +	};
>> +        union
>> +	{
>> +		u32 bitfield2;       /**< IA capability info*/
>> +		struct {
>> +			u32 max_p0_freq_bins:8;
>> +			u32 p1_freq_bins:8;
>> +			u32 pe_freq_bins:8;
>> +			u32 pn_freq_bins:8;
>> +		};
>> +	};
>> +	u32 reserved2[2];
>> +} __packed;
> 
> I'm not a big fan of using C bitfields for interface definitions
> 
> can we switch to regular #defines and use FIELD_GET|PREP ?

Done.
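As an illustration of that style, the fused_slice_count field above could be accessed along these lines (the mask name and helper below are hypothetical, assuming the bitfield layout shown maps fused_slice_count to bits 15:8 of the sku dword):

	#include <linux/bitfield.h>
	#include <linux/bits.h>

	#define SLPC_SKU_FUSED_SLICE_COUNT_MASK	GENMASK(15, 8)	/* hypothetical name */

	static u32 slpc_fused_slice_count(const struct slpc_platform_info *info)
	{
		return FIELD_GET(SLPC_SKU_FUSED_SLICE_COUNT_MASK, info->sku);
	}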

> 
>> +
>> +struct slpc_task_state_data {
>> +	union {
>> +		u32 bitfield1;
>> +		struct {
>> +			u32 gtperf_task_active:1;
>> +			u32 gtperf_stall_possible:1;
>> +			u32 gtperf_gaming_mode:1;
>> +			u32 gtperf_target_fps:8;
>> +			u32 dcc_task_active:1;
>> +			u32 in_dcc:1;
>> +			u32 in_dct:1;
>> +			u32 freq_switch_active:1;
>> +			u32 ibc_enabled:1;
>> +			u32 ibc_active:1;
>> +			u32 pg1_enabled:1;
>> +			u32 pg1_active:1;
>> +		};
>> +	};
>> +	union {
>> +		u32 bitfield2;
>> +		struct {
>> +			u32 max_unslice_freq:8;
>> +			u32 min_unslice_freq:8;
>> +			u32 max_slice_freq:8;
>> +			u32 min_slice_freq:8;
>> +		};
>> +	};
>> +} __packed;
>> +
>> +struct slpc_shared_data {
>> +	union {
>> +		struct {
>> +			/* Total size in bytes of this buffer. */
>> +			u32 shared_data_size;
>> +			u32 global_state;
>> +			u32 display_data_addr;
>> +		};
> 
> below all structs are named, this one not, why ?
> 
>> +		unsigned char reserved_header[SLPC_SHARE_DATA_SIZE_BYTE_HEADER];
> 
> this could be just "u8"
> 
> and I assume all these "reserved" are in fact padding, no ?
> 
>> +	};
>> +
>> +	union {
>> +		struct slpc_platform_info platform_info;
>> +		unsigned char reserved_platform[SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO];
>> +	};
> 
> maybe we can avoid these unions by declaring padding explicitly:
> 
> 	struct slpc_platform_info platform_info;
> 	u8 platform_info_pad[SLPC_SHARE_DATA_SIZE_BYTE_PLATFORM_INFO -
> 	                     sizeof(struct slpc_platform_info)];
> 
>> +
>> +	union {
>> +		struct slpc_task_state_data task_state_data;
>> +		unsigned char reserved_task_state[SLPC_SHARE_DATA_SIZE_BYTE_TASK_STATE];
>> +	};
>> +
>> +	union {
>> +		struct {
>> +		u32 override_params_set_bits[SLPC_OVERRIDE_BITFIELD_SIZE];
>> +		u32 override_params_values[SLPC_MAX_OVERRIDE_PARAMETERS];
>> +		};
>> +		unsigned char reserved_override_parameter[SLPC_SHARE_DATA_SIZE_BYTE_PARAM];
>> +	};
>> +
>> +	unsigned char reserved_other[SLPC_SHARE_DATA_SIZE_BYTE_OTHER];
>> +
>> +	/* PAGE 2 (4096 bytes), mode based parameter will be removed soon */
>> +	unsigned char reserved_mode_definition[4096];
>> +} __packed;
>> +
>> +enum slpc_reset_flags {
>> +	SLPC_RESET_FLAG_TDR_OCCURRED = (1 << 0)
>> +};
>> +
>> +#define SLPC_EVENT_MAX_INPUT_ARGS  9
>> +#define SLPC_EVENT_MAX_OUTPUT_ARGS 1
>> +
>> +union slpc_event_input_header {
>> +	u32 value;
>> +	struct {
>> +		u32 num_args:8;
>> +		u32 event_id:8;
>> +	};
>> +};
> 
> I guess earlier #define SLPC_EVENT is related to above
> can we keep related definitions together ?
> 
>> +
>> +struct slpc_event_input {
>> +	u32 h2g_action_id;
>> +	union slpc_event_input_header header;
>> +	u32 args[SLPC_EVENT_MAX_INPUT_ARGS];
>> +} __packed;
> 
> this looks like an attempt to define details of the
> INTEL_GUC_ACTION_SLPC_REQUEST HXG request message.
> 
> so maybe it can be moved to abi/guc_actions_slpc_abi.h ?
> best if you can define it in the same fashion as the CTB registration one
> 

Moved all this to the slpc_abi file and removed the fwif file for now.

Thanks,
Vinay.
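As a usage note on the quoted SLPC_EVENT() encoding (event id in bits 15:8, argument count in bits 7:0), a parameter-set request could be assembled roughly like this; the helper name and the two-argument {id, value} layout are assumptions for illustration, not taken verbatim from the interface spec:

	/* hypothetical helper -- argument layout is illustrative only */
	static int slpc_h2g_set_param(struct intel_guc *guc, u32 id, u32 value)
	{
		u32 request[] = {
			INTEL_GUC_ACTION_SLPC_REQUEST,
			SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2),
			id,
			value,
		};

		return intel_guc_send(guc, request, ARRAY_SIZE(request));
	}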

> Michal
> 
>> +
>> +#endif
>>

* Re: [Intel-gfx] [PATCH 06/16] drm/i915/guc/slpc: Allocate, initialize and release slpc
  2021-07-10 16:05   ` Michal Wajdeczko
@ 2021-07-14  1:40     ` Belgaumkar, Vinay
  0 siblings, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-14  1:40 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 9:05 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Allocate data structures for SLPC and functions for
>> initializing on host side.
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_guc.c      | 11 +++++++
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 36 ++++++++++++++++++++-
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h | 20 ++++++++++++
>>   3 files changed, 66 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> index 9d61b2d54de4..82863a9bc8e8 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> @@ -336,6 +336,12 @@ int intel_guc_init(struct intel_guc *guc)
>>   			goto err_ct;
>>   	}
>>   
>> +	if (intel_guc_slpc_is_used(guc)) {
>> +		ret = intel_guc_slpc_init(&guc->slpc);
>> +		if (ret)
>> +			goto err_submission;
>> +	}
>> +
>>   	/* now that everything is perma-pinned, initialize the parameters */
>>   	guc_init_params(guc);
>>   
>> @@ -346,6 +352,8 @@ int intel_guc_init(struct intel_guc *guc)
>>   
>>   	return 0;
>>   
>> +err_submission:
>> +	intel_guc_submission_fini(guc);
>>   err_ct:
>>   	intel_guc_ct_fini(&guc->ct);
>>   err_ads:
>> @@ -368,6 +376,9 @@ void intel_guc_fini(struct intel_guc *guc)
>>   
>>   	i915_ggtt_disable_guc(gt->ggtt);
>>   
>> +	if (intel_guc_slpc_is_used(guc))
>> +		intel_guc_slpc_fini(&guc->slpc);
>> +
>>   	if (intel_guc_submission_is_used(guc))
>>   		intel_guc_submission_fini(guc);
>>   
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> index c1f569d2300d..94e2f19951aa 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> @@ -4,11 +4,41 @@
>>    * Copyright © 2020 Intel Corporation
>>    */
>>   
>> +#include <asm/msr-index.h>
> 
> hmm, what exactly is needed from this header ?

Was being used in a previous version for MSR reads, removed.

> 
>> +
>> +#include "gt/intel_gt.h"
>> +#include "gt/intel_rps.h"
>> +
>> +#include "i915_drv.h"
>>   #include "intel_guc_slpc.h"
>> +#include "intel_pm.h"
>> +
>> +static inline struct intel_guc *slpc_to_guc(struct intel_guc_slpc *slpc)
>> +{
>> +	return container_of(slpc, struct intel_guc, slpc);
>> +}
>> +
>> +static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
>> +{
>> +	struct intel_guc *guc = slpc_to_guc(slpc);
>> +	int err;
>> +	u32 size = PAGE_ALIGN(sizeof(struct slpc_shared_data));
> 
> move err decl here
> 
>> +
>> +	err = intel_guc_allocate_and_map_vma(guc, size, &slpc->vma, &slpc->vaddr);
>> +	if (unlikely(err)) {
>> +		DRM_ERROR("Failed to allocate slpc struct (err=%d)\n", err);
> 
> s/slpc/SLPC
> 
> and use drm_err instead
> and you may also want to print error as %pe

added.
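
Something along these lines (untested sketch; slpc_to_i915() only shows
up in a later patch, so guc_to_gt() is used here):

	struct drm_i915_private *i915 = guc_to_gt(guc)->i915;

	err = intel_guc_allocate_and_map_vma(guc, size, &slpc->vma, &slpc->vaddr);
	if (unlikely(err)) {
		drm_err(&i915->drm,
			"Failed to allocate SLPC struct (err=%pe)\n",
			ERR_PTR(err));
		return err;
	}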

> 
>> +		i915_vma_unpin_and_release(&slpc->vma, I915_VMA_RELEASE_MAP);
> 
> do you really need this ?

removed.
> 
>> +		return err;
>> +	}
>> +
>> +	return err;
>> +}
>>   
>>   int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
>>   {
>> -	return 0;
>> +	GEM_BUG_ON(slpc->vma);
>> +
>> +	return slpc_shared_data_init(slpc);
>>   }
>>   
>>   /*
>> @@ -31,4 +61,8 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>>   
>>   void intel_guc_slpc_fini(struct intel_guc_slpc *slpc)
>>   {
>> +	if (!slpc->vma)
>> +		return;
>> +
>> +	i915_vma_unpin_and_release(&slpc->vma, I915_VMA_RELEASE_MAP);
>>   }
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> index 98036459a1a3..a2643b904165 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> @@ -3,12 +3,32 @@
>>    *
>>    * Copyright © 2020 Intel Corporation
>>    */
>> +
> 
> should be fixed in earlier patch
> 
>>   #ifndef _INTEL_GUC_SLPC_H_
>>   #define _INTEL_GUC_SLPC_H_
>>   
>> +#include <linux/mutex.h>
>>   #include "intel_guc_slpc_fwif.h"
>>   
>>   struct intel_guc_slpc {
>> +	/*Protects access to vma and SLPC actions */
> 
> hmm, missing mutex ;)

Removed.

> 
>> +	struct i915_vma *vma;
>> +	void *vaddr;
> 
> no need to be void, define it as ptr to slpc_shared_data
> 
>> +
>> +	/* platform frequency limits */
>> +	u32 min_freq;
>> +	u32 rp0_freq;
>> +	u32 rp1_freq;
>> +
>> +	/* frequency softlimits */
>> +	u32 min_freq_softlimit;
>> +	u32 max_freq_softlimit;
>> +
>> +	struct {
>> +		u32 param_id;
>> +		u32 param_value;
>> +		u32 param_override;
>> +	} debug;
> 
> can you add all these extra fields in patches which will need them?
> 
> Michal

Done.

Thanks,
Vinay.

> 
>>   };
>>   
>>   int intel_guc_slpc_init(struct intel_guc_slpc *slpc);
>>
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 07/16] drm/i915/guc/slpc: Enable slpc and add related H2G events
  2021-07-10 17:37   ` Michal Wajdeczko
@ 2021-07-15  1:58     ` Belgaumkar, Vinay
  2021-07-21 17:36       ` Michal Wajdeczko
  0 siblings, 1 reply; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-15  1:58 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 10:37 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Add methods for interacting with guc for enabling SLPC. Enable
>> SLPC after guc submission has been established. GuC load will
> 
> s/guc/GuC
> 
>> fail if SLPC cannot be successfully initialized. Add various
>> helper methods to set/unset the parameters for SLPC. They can
>> be set using h2g calls or directly setting bits in the shared
> 
> /h2g/H2G

done.
> 
>> data structure.
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c   | 221 ++++++++++++++++++
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c |   4 -
>>   drivers/gpu/drm/i915/gt/uc/intel_uc.c         |  10 +
>>   3 files changed, 231 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> index 94e2f19951aa..e579408d1c19 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> @@ -18,6 +18,61 @@ static inline struct intel_guc *slpc_to_guc(struct intel_guc_slpc *slpc)
>>   	return container_of(slpc, struct intel_guc, slpc);
>>   }
>>   
>> +static inline struct intel_gt *slpc_to_gt(struct intel_guc_slpc *slpc)
>> +{
>> +	return guc_to_gt(slpc_to_guc(slpc));
>> +}
>> +
>> +static inline struct drm_i915_private *slpc_to_i915(struct intel_guc_slpc *slpc)
>> +{
>> +	return (slpc_to_gt(slpc))->i915;
>> +}
>> +
>> +static void slpc_mem_set_param(struct slpc_shared_data *data,
>> +				u32 id, u32 value)
>> +{
>> +	GEM_BUG_ON(id >= SLPC_MAX_OVERRIDE_PARAMETERS);
>> +	/* When the flag bit is set, corresponding value will be read
>> +	 * and applied by slpc.
> 
> fix format of multi-line comment
> s/slpc/SLPC

Done.

> 
>> +	 */
>> +	data->override_params_set_bits[id >> 5] |= (1 << (id % 32));
> 
> use __set_bit instead
> 
>> +	data->override_params_values[id] = value;
>> +}
>> +
>> +static void slpc_mem_unset_param(struct slpc_shared_data *data,
>> +				 u32 id)
>> +{
>> +	GEM_BUG_ON(id >= SLPC_MAX_OVERRIDE_PARAMETERS);
>> +	/* When the flag bit is unset, corresponding value will not be
>> +	 * read by slpc.
>> +	 */
>> +	data->override_params_set_bits[id >> 5] &= (~(1 << (id % 32)));
> 
> same here

Done.

> 
>> +	data->override_params_values[id] = 0;
>> +}
>> +
>> +static void slpc_mem_task_control(struct slpc_shared_data *data,
>> +				 u64 val, u32 enable_id, u32 disable_id)
> 
> hmm, u64 to pass simple tri-state flag ?
> 
>> +{
>> +	/* Enabling a param involves setting the enable_id
>> +	 * to 1 and disable_id to 0. Setting it to default
>> +	 * will unset both enable and disable ids and let
>> +	 * slpc choose it's default values.
> 
> fix format + s/slpc/SLPC
> 
>> +	 */
>> +	if (val == SLPC_PARAM_TASK_DEFAULT) {
>> +		/* set default */
>> +		slpc_mem_unset_param(data, enable_id);
>> +		slpc_mem_unset_param(data, disable_id);
>> +	} else if (val == SLPC_PARAM_TASK_ENABLED) {
>> +		/* set enable */
>> +		slpc_mem_set_param(data, enable_id, 1);
>> +		slpc_mem_set_param(data, disable_id, 0);
>> +	} else if (val == SLPC_PARAM_TASK_DISABLED) {
>> +		/* set disable */
>> +		slpc_mem_set_param(data, disable_id, 1);
>> +		slpc_mem_set_param(data, enable_id, 0);
>> +	}
> 
> maybe instead of SLPC_PARAM_TASK_* flags (that btw were confusing me
> earlier) you can define 3x small helpers:
> 
> static void slpc_mem_set_default(data, enable_id, disable_id);
> static void slpc_mem_set_enabled(data, enable_id, disable_id);
> static void slpc_mem_set_disabled(data, enable_id, disable_id);
>

Agree, done.
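
Roughly like this (untested sketch, built on top of slpc_mem_set_param()
and slpc_mem_unset_param() above):

static void slpc_mem_set_default(struct slpc_shared_data *data,
				 u32 enable_id, u32 disable_id)
{
	/* Unsetting both ids lets SLPC fall back to its own defaults. */
	slpc_mem_unset_param(data, enable_id);
	slpc_mem_unset_param(data, disable_id);
}

static void slpc_mem_set_enabled(struct slpc_shared_data *data,
				 u32 enable_id, u32 disable_id)
{
	slpc_mem_set_param(data, enable_id, 1);
	slpc_mem_set_param(data, disable_id, 0);
}

static void slpc_mem_set_disabled(struct slpc_shared_data *data,
				  u32 enable_id, u32 disable_id)
{
	slpc_mem_set_param(data, disable_id, 1);
	slpc_mem_set_param(data, enable_id, 0);
}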

> 
>> +}
>> +
>>   static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
>>   {
>>   	struct intel_guc *guc = slpc_to_guc(slpc);
>> @@ -34,6 +89,128 @@ static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
>>   	return err;
>>   }
>>   
>> +/*
>> + * Send SLPC event to guc
>> + *
>> + */
>> +static int slpc_send(struct intel_guc_slpc *slpc,
>> +			struct slpc_event_input *input,
>> +			u32 in_len)
>> +{
>> +	struct intel_guc *guc = slpc_to_guc(slpc);
>> +	u32 *action;
>> +
>> +	action = (u32 *)input;
>> +	action[0] = INTEL_GUC_ACTION_SLPC_REQUEST;
> 
> why not just updating input->h2g_action_id ?

Removed this, using your suggestion below instead.

> 
>> +
>> +	return intel_guc_send(guc, action, in_len);
>> +}
>> +
>> +static bool slpc_running(struct intel_guc_slpc *slpc)
>> +{
>> +	struct slpc_shared_data *data;
>> +	u32 slpc_global_state;
>> +
>> +	GEM_BUG_ON(!slpc->vma);
>> +
>> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
> 
> do you really need to flush all 8K of shared data?
> it looks that you only need single u32

sure.

> 
>> +	data = slpc->vaddr;
>> +
>> +	slpc_global_state = data->global_state;
>> +
>> +	return (data->global_state == SLPC_GLOBAL_STATE_RUNNING);
>> +}
>> +
>> +static int host2guc_slpc_query_task_state(struct intel_guc_slpc *slpc)
>> +{
>> +	struct intel_guc *guc = slpc_to_guc(slpc);
>> +	u32 shared_data_gtt_offset = intel_guc_ggtt_offset(guc, slpc->vma);
>> +	struct slpc_event_input data = {0};
>> +
>> +	data.header.value = SLPC_EVENT(SLPC_EVENT_QUERY_TASK_STATE, 2);
> 
> you defined header.num_args and header.event_id, don't want to use them?
> 
>> +	data.args[0] = shared_data_gtt_offset;
>> +	data.args[1] = 0;
>> +
>> +	return slpc_send(slpc, &data, 4);
> 
> magic 4
> 
>> +}
>> +
>> +static int slpc_read_task_state(struct intel_guc_slpc *slpc)
>> +{
>> +	return host2guc_slpc_query_task_state(slpc);
>> +}
> 
> hmm, all this looks complicated more than needed, why not just:
> 
> static int guc_action_slpc_query(struct intel_guc *guc, u32 offset)
> {
> 	u32 request[] = {
> 		INTEL_GUC_ACTION_SLPC_REQUEST,
> 		SLPC_EVENT(SLPC_EVENT_QUERY_TASK_STATE, 2),
> 		offset,
> 		0,
> 	};
> 
> 	return intel_guc_send(guc, request, ARRAY_SIZE(request));
> }
> 
> static int slpc_query_task_state(struct intel_guc_slpc *slpc)
> {
> 	struct intel_guc *guc = slpc_to_guc(slpc);
> 	u32 offset = intel_guc_ggtt_offset(guc, slpc->vma);
> 
> 	return guc_action_slpc_query(guc, offset);
> }

Using this now, definitely simpler.

> 
> btw, there is little magic in H2G data, as only event enums were defined
> in slpc_fwif.h (or slpc_abi.h) but it looks that len and format of args
> depends on the actual event used

yes, and it also expects the event id and num_args to be packed into the
same word.
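
For the record, the request layout this relies on is roughly:

	/*
	 *  dw0:    INTEL_GUC_ACTION_SLPC_REQUEST
	 *  dw1:    SLPC_EVENT(event_id, num_args) - id and arg count share a word
	 *  dw2..n: event-specific args (e.g. GGTT offset of the shared data)
	 */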

> 
>> +
>> +static const char *slpc_state_stringify(enum slpc_global_state state)
>> +{
>> +	const char *str = NULL;
>> +
>> +	switch (state) {
>> +	case SLPC_GLOBAL_STATE_NOT_RUNNING:
>> +		str = "not running";
>> +		break;
>> +	case SLPC_GLOBAL_STATE_INITIALIZING:
>> +		str = "initializing";
>> +		break;
>> +	case SLPC_GLOBAL_STATE_RESETTING:
>> +		str = "resetting";
>> +		break;
>> +	case SLPC_GLOBAL_STATE_RUNNING:
>> +		str = "running";
>> +		break;
>> +	case SLPC_GLOBAL_STATE_SHUTTING_DOWN:
>> +		str = "shutting down";
>> +		break;
>> +	case SLPC_GLOBAL_STATE_ERROR:
>> +		str = "error";
>> +		break;
>> +	default:
>> +		str = "unknown";
>> +		break;
>> +	}
>> +
>> +	return str;
>> +}
>> +
>> +static const char *get_slpc_state(struct intel_guc_slpc *slpc)
> 
> lot of duplicated code with slpc_running()
> 
> maybe there should be:
> 	u32 slpc_get_state(slpc);
> 	bool slpc_is_running(slpc);
> 	const char *slpc_state_string(slpc);

Ok, makes sense.
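
Something like this (untested sketch), which also limits the flush to
just the state field as suggested above:

static u32 slpc_get_state(struct intel_guc_slpc *slpc)
{
	struct slpc_shared_data *data = slpc->vaddr;

	GEM_BUG_ON(!slpc->vma);

	drm_clflush_virt_range(&data->global_state, sizeof(data->global_state));

	return data->global_state;
}

static bool slpc_is_running(struct intel_guc_slpc *slpc)
{
	return slpc_get_state(slpc) == SLPC_GLOBAL_STATE_RUNNING;
}

static const char *slpc_get_state_string(struct intel_guc_slpc *slpc)
{
	return slpc_state_stringify(slpc_get_state(slpc));
}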

> 
> 
>> +{
>> +	struct slpc_shared_data *data;
>> +	u32 slpc_global_state;
>> +
>> +	GEM_BUG_ON(!slpc->vma);
>> +
>> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
>> +	data = slpc->vaddr;
>> +
>> +	slpc_global_state = data->global_state;
>> +
>> +	return slpc_state_stringify(slpc_global_state);
>> +}
>> +
>> +static int host2guc_slpc_reset(struct intel_guc_slpc *slpc)
>> +{
>> +	struct intel_guc *guc = slpc_to_guc(slpc);
>> +	u32 shared_data_gtt_offset = intel_guc_ggtt_offset(guc, slpc->vma);
>> +	struct slpc_event_input data = {0};
>> +	int ret;
>> +
>> +	data.header.value = SLPC_EVENT(SLPC_EVENT_RESET, 2);
>> +	data.args[0] = shared_data_gtt_offset;
>> +	data.args[1] = 0;
>> +
>> +	/* TODO: Hardcoded 4 needs define */
>> +	ret = slpc_send(slpc, &data, 4);
>> +
>> +	if (!ret) {
>> +		/* TODO: How long to Wait until SLPC is running */
> 
> do we know state transitions ?
> maybe there is no point in waiting for RUNNING if it is in ERROR or
> SHUTTING_DOWN ?

It apparently goes from "resetting" to "running", but the intermediate
transitions are too quick to be worth trapping. The state should settle
to "running" within 1 ms or so, so a 5 ms polling wait (at a 10 us
interval) does not seem that bad.

> 
>> +		if (wait_for(slpc_running(slpc), 5)) {
> 
> magic 5
ok.

> 
>> +			DRM_ERROR("SLPC not enabled! State = %s\n",
> 
> use drm_err
ok.
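
With the helpers above, the wait becomes roughly (SLPC_RESET_TIMEOUT_MS
is a placeholder name):

#define SLPC_RESET_TIMEOUT_MS 5

	if (wait_for(slpc_is_running(slpc), SLPC_RESET_TIMEOUT_MS)) {
		drm_err(&i915->drm, "SLPC not enabled! State = %s\n",
			slpc_get_state_string(slpc));
		return -EIO;
	}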

> 
>> +				  get_slpc_state(slpc));
>> +			return -EIO;
>> +		}
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>>   int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
>>   {
>>   	GEM_BUG_ON(slpc->vma);
>> @@ -56,6 +233,50 @@ int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
>>    */
>>   int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>>   {
>> +	struct drm_i915_private *i915 = slpc_to_i915(slpc);
>> +	struct slpc_shared_data *data;
>> +	int ret;
>> +
>> +	GEM_BUG_ON(!slpc->vma);
>> +
>> +	memset(slpc->vaddr, 0, sizeof(struct slpc_shared_data));
>> +
>> +	data = slpc->vaddr;
>> +	data->shared_data_size = sizeof(struct slpc_shared_data);
>> +
>> +	/* Enable only GTPERF task, Disable others */
>> +	slpc_mem_task_control(data, SLPC_PARAM_TASK_ENABLED,
>> +				SLPC_PARAM_TASK_ENABLE_GTPERF,
>> +				SLPC_PARAM_TASK_DISABLE_GTPERF);
>> +
>> +	slpc_mem_task_control(data, SLPC_PARAM_TASK_DISABLED,
>> +				SLPC_PARAM_TASK_ENABLE_BALANCER,
>> +				SLPC_PARAM_TASK_DISABLE_BALANCER);
>> +
>> +	slpc_mem_task_control(data, SLPC_PARAM_TASK_DISABLED,
>> +				SLPC_PARAM_TASK_ENABLE_DCC,
>> +				SLPC_PARAM_TASK_DISABLE_DCC);
>> +
>> +	ret = host2guc_slpc_reset(slpc);
>> +	if (ret) {
>> +		drm_err(&i915->drm, "SLPC Reset event returned %d", ret);
> 
> you may want to print error as %pe

Not sure I understand why? I thought %pe was only for error pointers?


> missing \n

> 
>> +		return -EIO;
>> +	}
>> +
>> +	DRM_INFO("SLPC state: %s\n", get_slpc_state(slpc));
> 
> use drm_info
> 
>> +
>> +	if (slpc_read_task_state(slpc))
>> +		drm_err(&i915->drm, "Unable to read task state data");
> 
> missing \n
> 
>> +
>> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
>> +
>> +	/* min and max frequency limits being used by SLPC */
>> +	drm_info(&i915->drm, "SLPC min freq: %u Mhz, max is %u Mhz",
> 
> missing \n
> 
>> +			DIV_ROUND_CLOSEST(data->task_state_data.min_unslice_freq *
>> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER),
>> +			DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
>> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
> 
> this info/code seems to be duplicated in patch 10/16
> maybe just call intel_guc_slpc_info() here once available ?

intel_guc_slpc_info() prints a lot of other info; we only need to print
the frequencies here.

> 
>> +
>>   	return 0;
>>   }
>>   
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> index e2644a05f298..3e76d4d5f7bb 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> @@ -2321,10 +2321,6 @@ void intel_guc_submission_enable(struct intel_guc *guc)
>>   
>>   void intel_guc_submission_disable(struct intel_guc *guc)
>>   {
>> -	struct intel_gt *gt = guc_to_gt(guc);
>> -
>> -	GEM_BUG_ON(gt->awake); /* GT should be parked first */
> 
> if not mistake, can you explain why it was removed ?

This was part of a different commit. The BUG_ON in
disable_guc_submission was added on the assumption that it would only be
called during driver unload, where no GT PM references are expected to
be held. Since it now also needs to be called from an error path during
SLPC enable, the BUG_ON is removed. Should this be a separate commit?

> 
>> -
>>   	/* Note: By the time we're here, GuC may have already been reset */
>>   }
>>   
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> index dca5f6d0641b..7b6c767d3eb0 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
>> @@ -501,6 +501,14 @@ static int __uc_init_hw(struct intel_uc *uc)
>>   	if (intel_uc_uses_guc_submission(uc))
>>   		intel_guc_submission_enable(guc);
>>   
>> +	if (intel_uc_uses_guc_slpc(uc)) {
>> +		ret = intel_guc_slpc_enable(&guc->slpc);
>> +		if (ret)
>> +			goto err_submission;
>> +		drm_info(&i915->drm, "GuC SLPC %s\n",
>> +			 enableddisabled(intel_uc_uses_guc_slpc(uc)));
> 
> move this drm_info after below GuC report and/or modify to have:

yup, incorrect merge. Moved.

Thanks,
Vinay.

> 
> "GuC firmware path.bin version 1.0 loaded:yes"
> "GuC submission:enabled"
> "GuC SLPC:enabled"
> "HuC firmware path.bin version 1.0 authenticated:yes"
> 
> Michal
> 
>> +	}
>> +
>>   	drm_info(&i915->drm, "%s firmware %s version %u.%u %s:%s\n",
>>   		 intel_uc_fw_type_repr(INTEL_UC_FW_TYPE_GUC), guc->fw.path,
>>   		 guc->fw.major_ver_found, guc->fw.minor_ver_found,
>> @@ -521,6 +529,8 @@ static int __uc_init_hw(struct intel_uc *uc)
>>   	/*
>>   	 * We've failed to load the firmware :(
>>   	 */
>> +err_submission:
>> +	intel_guc_submission_disable(guc);
>>   err_log_capture:
>>   	__uc_capture_load_err_log(uc);
>>   err_out:
>>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency
  2021-07-10 17:47   ` Michal Wajdeczko
@ 2021-07-16 18:00     ` Belgaumkar, Vinay
  0 siblings, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-16 18:00 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 10:47 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Add param set h2g helpers to set the min and max frequencies
>> for use by SLPC.
>>
>> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 94 +++++++++++++++++++++
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h |  2 +
>>   2 files changed, 96 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> index e579408d1c19..19cb26479942 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> @@ -106,6 +106,19 @@ static int slpc_send(struct intel_guc_slpc *slpc,
>>   	return intel_guc_send(guc, action, in_len);
>>   }
>>   
>> +static int host2guc_slpc_set_param(struct intel_guc_slpc *slpc,
>> +				   u32 id, u32 value)
>> +{
>> +	struct slpc_event_input data = {0};
>> +
>> +	data.header.value = SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2);
>> +	data.args[0] = id;
>> +	data.args[1] = value;
>> +
>> +	return slpc_send(slpc, &data, 4);
> 
> as suggested before, use of explicit function like:
> 
> static int guc_action_slpc_param(guc, u32 id, u32 value)
> {
> 	u32 request[] = {
> 		INTEL_GUC_ACTION_SLPC_REQUEST,
> 		SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2),
> 		id,
> 		value,
> 	};
> 
> 	return intel_guc_send(guc, request, ARRAY_SIZE(request));
> }
> 
> will be simpler/cleaner

done.

> 
>> +}
>> +
>> +
>>   static bool slpc_running(struct intel_guc_slpc *slpc)
>>   {
>>   	struct slpc_shared_data *data;
>> @@ -134,6 +147,19 @@ static int host2guc_slpc_query_task_state(struct intel_guc_slpc *slpc)
>>   	return slpc_send(slpc, &data, 4);
>>   }
>>   
>> +static int slpc_set_param(struct intel_guc_slpc *slpc, u32 id, u32 value)
>> +{
>> +	struct drm_i915_private *i915 = slpc_to_i915(slpc);
>> +	GEM_BUG_ON(id >= SLPC_MAX_PARAM);
>> +
>> +	if (host2guc_slpc_set_param(slpc, id, value)) {
>> +		drm_err(&i915->drm, "Unable to set param %x", id);
> 
> missing \n
> what about printing value to be set ?
> what about printing send error %pe ?

done.

> 
>> +		return -EIO;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>   static int slpc_read_task_state(struct intel_guc_slpc *slpc)
>>   {
>>   	return host2guc_slpc_query_task_state(slpc);
>> @@ -218,6 +244,74 @@ int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
>>   	return slpc_shared_data_init(slpc);
>>   }
>>   
>> +/**
>> + * intel_guc_slpc_max_freq_set() - Set max frequency limit for SLPC.
>> + * @slpc: pointer to intel_guc_slpc.
>> + * @val: encoded frequency
> 
> what's the encoding ?

It should just be frequency (MHz).

> 
>> + *
>> + * This function will invoke GuC SLPC action to update the max frequency
>> + * limit for slice and unslice.
>> + *
>> + * Return: 0 on success, non-zero error code on failure.
>> + */
>> +int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val)
>> +{
>> +	int ret;
>> +	struct drm_i915_private *i915 = slpc_to_i915(slpc);
>> +	intel_wakeref_t wakeref;
>> +
>> +	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
> 
> you can use with_intel_runtime_pm(rpm, wakeref)

Ok.
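
i.e. roughly:

	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
		ret = slpc_set_param(slpc,
				     SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ,
				     val);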
> 
>> +
>> +	ret = slpc_set_param(slpc,
>> +		       SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ,
>> +		       val);
>> +
>> +	if (ret) {
>> +		drm_err(&i915->drm,
>> +			"Set max frequency unslice returned %d", ret);
> 
> missing \n
> print error with %pe
> but slpc_set_param returns only -EIO ;(

It was done that way so that the sysfs method calling it gets a standard
error value. Will change that.
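
Sketch of what that would look like (with guc_action_slpc_param() from
your suggestion above):

static int slpc_set_param(struct intel_guc_slpc *slpc, u32 id, u32 value)
{
	struct drm_i915_private *i915 = slpc_to_i915(slpc);
	struct intel_guc *guc = slpc_to_guc(slpc);
	int ret;

	GEM_BUG_ON(id >= SLPC_MAX_PARAM);

	ret = guc_action_slpc_param(guc, id, value);
	if (ret)
		drm_err(&i915->drm, "Unable to set param %d to %u (%pe)\n",
			id, value, ERR_PTR(ret));

	return ret;
}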

> 
>> +		ret = -EIO;
>> +		goto done;
>> +	}
>> +
>> +done:
>> +	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
>> +	return ret;
>> +}
>> +
>> +/**
>> + * intel_guc_slpc_min_freq_set() - Set min frequency limit for SLPC.
>> + * @slpc: pointer to intel_guc_slpc.
>> + * @val: encoded frequency
>> + *
>> + * This function will invoke GuC SLPC action to update the min frequency
>> + * limit.
>> + *
>> + * Return: 0 on success, non-zero error code on failure.
>> + */
>> +int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
>> +{
>> +	int ret;
>> +	struct intel_guc *guc = slpc_to_guc(slpc);
>> +	struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
>> +	intel_wakeref_t wakeref;
>> +
>> +	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
>> +
>> +	ret = slpc_set_param(slpc,
>> +		       SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
>> +		       val);
>> +	if (ret) {
>> +		drm_err(&i915->drm,
>> +			"Set min frequency for unslice returned %d", ret);
> 
> as above

done.
Thanks,

Vinay.
> 
> Michal
> 
>> +		ret = -EIO;
>> +		goto done;
>> +	}
>> +
>> +done:
>> +	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
>> +	return ret;
>> +}
>> +
>>   /*
>>    * intel_guc_slpc_enable() - Start SLPC
>>    * @slpc: pointer to intel_guc_slpc.
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> index a2643b904165..a473e1ea7c10 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> @@ -34,5 +34,7 @@ struct intel_guc_slpc {
>>   int intel_guc_slpc_init(struct intel_guc_slpc *slpc);
>>   int intel_guc_slpc_enable(struct intel_guc_slpc *slpc);
>>   void intel_guc_slpc_fini(struct intel_guc_slpc *slpc);
>> +int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val);
>> +int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val);
>>   
>>   #endif
>>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 12/16] drm/i915/guc/slpc: Cache platform frequency limits for slpc
  2021-07-10 18:15   ` Michal Wajdeczko
@ 2021-07-17 19:30     ` Belgaumkar, Vinay
  2021-07-20 23:05     ` Belgaumkar, Vinay
  1 sibling, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-17 19:30 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 11:15 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Cache rp0, rp1 and rpn platform limits into slpc structure
>> for range checking while setting min/max frequencies.
>>
>> Also add "soft" limits which keep track of frequency changes
>> made from userland. These are initially set to platform min
>> and max.
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 41 +++++++++++++++++++++
>>   1 file changed, 41 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> index d32274cd1db7..6e978f27b7a6 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> @@ -86,6 +86,9 @@ static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
>>   		return err;
>>   	}
>>   
>> +	slpc->max_freq_softlimit = 0;
>> +	slpc->min_freq_softlimit = 0;
> 
> as mentioned earlier, now it is time to introduce these fields in .h

ok.

> 
>> +
>>   	return err;
>>   }
>>   
>> @@ -384,6 +387,29 @@ void intel_guc_pm_intrmsk_enable(struct intel_gt *gt)
>>   			   GEN6_PMINTRMSK, pm_intrmsk_mbz, 0);
>>   }
>>   
>> +static int intel_guc_slpc_set_softlimits(struct intel_guc_slpc *slpc)
>> +{
>> +	int ret = 0;
>> +
>> +	/* Softlimits are initially equivalent to platform limits
>> +	 * unless they have deviated from defaults, in which case,
>> +	 * we retain the values and set min/max accordingly.
>> +	 */
>> +	if (!slpc->max_freq_softlimit)
>> +		slpc->max_freq_softlimit = slpc->rp0_freq;
>> +	else if (slpc->max_freq_softlimit != slpc->rp0_freq)
>> +		ret = intel_guc_slpc_set_max_freq(slpc,
>> +					slpc->max_freq_softlimit);
>> +
>> +	if (!slpc->min_freq_softlimit)
>> +		slpc->min_freq_softlimit = slpc->min_freq;
>> +	else if (slpc->min_freq_softlimit != slpc->min_freq)
>> +		ret = intel_guc_slpc_set_min_freq(slpc,
>> +					slpc->min_freq_softlimit);
>> +
>> +	return ret;
>> +}
>> +
>>   /*
>>    * intel_guc_slpc_enable() - Start SLPC
>>    * @slpc: pointer to intel_guc_slpc.
>> @@ -402,6 +428,7 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>>   	struct drm_i915_private *i915 = slpc_to_i915(slpc);
>>   	struct slpc_shared_data *data;
>>   	int ret;
>> +	u32 rp_state_cap;
> 
> move up to keep "ret" last
done.

> 
>>   
>>   	GEM_BUG_ON(!slpc->vma);
>>   
>> @@ -445,6 +472,20 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>>   			DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
>>   				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
>>   
>> +	rp_state_cap = intel_uncore_read(i915->gt.uncore, GEN6_RP_STATE_CAP);
>> +
>> +	slpc->rp0_freq = ((rp_state_cap >> 0) & 0xff) * GT_FREQUENCY_MULTIPLIER;
>> +	slpc->min_freq = ((rp_state_cap >> 16) & 0xff) * GT_FREQUENCY_MULTIPLIER;
>> +	slpc->rp1_freq = ((rp_state_cap >> 8) & 0xff) * GT_FREQUENCY_MULTIPLIER;
> 
> we should have definitions for these bits and then we should be able to
> use REG_FIELD_GET

ok.
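
i.e. something like this (the mask names below are placeholders):

#define   RP0_CAP_MASK		REG_GENMASK(7, 0)
#define   RP1_CAP_MASK		REG_GENMASK(15, 8)
#define   RPN_CAP_MASK		REG_GENMASK(23, 16)

	slpc->rp0_freq = REG_FIELD_GET(RP0_CAP_MASK, rp_state_cap) * GT_FREQUENCY_MULTIPLIER;
	slpc->rp1_freq = REG_FIELD_GET(RP1_CAP_MASK, rp_state_cap) * GT_FREQUENCY_MULTIPLIER;
	slpc->min_freq = REG_FIELD_GET(RPN_CAP_MASK, rp_state_cap) * GT_FREQUENCY_MULTIPLIER;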

> 
>> +
>> +	if (intel_guc_slpc_set_softlimits(slpc))
>> +		drm_err(&i915->drm, "Unable to set softlimits");
> 
> missing \n
> maybe we can also print error ?

done.

> 
>> +
>> +	drm_info(&i915->drm,
>> +		 "Platform fused frequency values -  min: %u Mhz, max: %u Mhz",
> 
> missing \n
> double space before 'min'

done.

Thanks,
Vinay.
> 
> Michal
> 
>> +		 slpc->min_freq,
>> +		 slpc->rp0_freq);
>> +
>>   	return 0;
>>   }
>>   
>>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 09/16] drm/i915/guc/slpc: Add get max/min freq hooks
  2021-07-10 17:52   ` Michal Wajdeczko
@ 2021-07-20 22:08     ` Belgaumkar, Vinay
  0 siblings, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-20 22:08 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 10:52 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Add helpers to read the min/max frequency being used
>> by SLPC. This is done by send a h2g command which forces
> 
> s/h2g/H2G

done.

> 
>> SLPC to update the shared data struct which can then be
>> read.
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 58 +++++++++++++++++++++
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h |  2 +
>>   2 files changed, 60 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> index 19cb26479942..98a283d31734 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> @@ -278,6 +278,35 @@ int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val)
>>   	return ret;
>>   }
>>   
>> +int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val)
>> +{
>> +	struct slpc_shared_data *data;
>> +	intel_wakeref_t wakeref;
>> +	struct drm_i915_private *i915 = guc_to_gt(slpc_to_guc(slpc))->i915;
>> +	int ret = 0;
>> +
>> +	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
>> +
>> +	/* Force GuC to update task data */
>> +	if (slpc_read_task_state(slpc)) {
>> +		DRM_ERROR("Unable to update task data");
> 
> use drm_err
> missing \n
> maybe this message could be moved to slpc_read_task_state ?

Done.

> 
>> +		ret = -EIO;
>> +		goto done;
>> +	}
>> +
>> +	GEM_BUG_ON(!slpc->vma);
>> +
>> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
> 
> maybe this can also be part of slpc_read_task_state ?

Yup.
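
Folded both into slpc_read_task_state(), roughly (untested sketch, with
slpc_query_task_state() renamed per your earlier suggestion):

static int slpc_read_task_state(struct intel_guc_slpc *slpc)
{
	struct drm_i915_private *i915 = slpc_to_i915(slpc);
	int ret;

	ret = slpc_query_task_state(slpc);
	if (ret) {
		drm_err(&i915->drm, "Failed to update task state (%pe)\n",
			ERR_PTR(ret));
		return ret;
	}

	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));

	return 0;
}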

> 
>> +	data = slpc->vaddr;
>> +
>> +	*val = DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
>> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER);
>> +
>> +done:
>> +	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
>> +	return ret;
>> +}
>> +
>>   /**
>>    * intel_guc_slpc_min_freq_set() - Set min frequency limit for SLPC.
>>    * @slpc: pointer to intel_guc_slpc.
>> @@ -312,6 +341,35 @@ int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
>>   	return ret;
>>   }
>>   
>> +int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val)
> 
> missing kernel-doc (above intel_guc_slpc_min_freq_set has one)

done.
> 
>> +{
>> +	struct slpc_shared_data *data;
>> +	intel_wakeref_t wakeref;
>> +	struct drm_i915_private *i915 = guc_to_gt(slpc_to_guc(slpc))->i915;
>> +	int ret = 0;
>> +
>> +	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
>> +
>> +	/* Force GuC to update task data */
>> +	if (slpc_read_task_state(slpc)) {
>> +		DRM_ERROR("Unable to update task data");
> 
> see above
> 
>> +		ret = -EIO;
>> +		goto done;
>> +	}
>> +
>> +	GEM_BUG_ON(!slpc->vma);
>> +
>> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
> 
> see above
Done.

Thanks,
Vinay.

> 
> Michal
> 
>> +	data = slpc->vaddr;
>> +
>> +	*val = DIV_ROUND_CLOSEST(data->task_state_data.min_unslice_freq *
>> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER);
>> +
>> +done:
>> +	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
>> +	return ret;
>> +}
>> +
>>   /*
>>    * intel_guc_slpc_enable() - Start SLPC
>>    * @slpc: pointer to intel_guc_slpc.
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> index a473e1ea7c10..2cb830cdacb5 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> @@ -36,5 +36,7 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc);
>>   void intel_guc_slpc_fini(struct intel_guc_slpc *slpc);
>>   int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val);
>>   int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val);
>> +int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val);
>> +int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val);
>>   
>>   #endif
>>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 10/16] drm/i915/guc/slpc: Add debugfs for slpc info
  2021-07-10 18:08   ` Michal Wajdeczko
@ 2021-07-20 23:00     ` Belgaumkar, Vinay
  0 siblings, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-20 23:00 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 11:08 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> This prints out relevant SLPC info from the SLPC shared structure.
>>
>> We will send a h2g message which forces SLPC to update the
>> shared data structure with latest information before reading it.
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> Signed-off-by: Sundaresan Sujaritha <sujaritha.sundaresan@intel.com>
>> ---
>>   .../gpu/drm/i915/gt/uc/intel_guc_debugfs.c    | 16 ++++++
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c   | 53 +++++++++++++++++++
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h   |  3 ++
>>   3 files changed, 72 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
>> index 9a03ff56e654..bef749e54601 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
>> @@ -12,6 +12,7 @@
>>   #include "gt/uc/intel_guc_ct.h"
>>   #include "gt/uc/intel_guc_ads.h"
>>   #include "gt/uc/intel_guc_submission.h"
>> +#include "gt/uc/intel_guc_slpc.h"
>>   
>>   static int guc_info_show(struct seq_file *m, void *data)
>>   {
>> @@ -50,11 +51,26 @@ static int guc_registered_contexts_show(struct seq_file *m, void *data)
>>   }
>>   DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_registered_contexts);
>>   
>> +static int guc_slpc_info_show(struct seq_file *m, void *unused)
>> +{
>> +	struct intel_guc *guc = m->private;
>> +	struct intel_guc_slpc *slpc = &guc->slpc;
>> +	struct drm_printer p = drm_seq_file_printer(m);
>> +
>> +	if (!intel_guc_slpc_is_used(guc))
>> +		return -ENODEV;
>> +
>> +	return intel_guc_slpc_info(slpc, &p);
>> +}
>> +
> 
> other entries don't have empty line here

Removed.

> 
>> +DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_slpc_info);
>> +
>>   void intel_guc_debugfs_register(struct intel_guc *guc, struct dentry *root)
>>   {
>>   	static const struct debugfs_gt_file files[] = {
>>   		{ "guc_info", &guc_info_fops, NULL },
>>   		{ "guc_registered_contexts", &guc_registered_contexts_fops, NULL },
>> +		{ "guc_slpc_info", &guc_slpc_info_fops, NULL},
> 
> IIRC last field is "eval" where maybe you could add your own to check if
> intel_guc_slpc_is_used() to avoid exposing this info if N/A
ok, added.
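
Along these lines (sketch; the helper name is a placeholder):

static bool slpc_eval(void *data)
{
	struct intel_guc *guc = data;

	return intel_guc_slpc_is_used(guc);
}

	...
	{ "guc_slpc_info", &guc_slpc_info_fops, slpc_eval },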

> 
>>   	};
>>   
>>   	if (!intel_guc_is_supported(guc))
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> index 98a283d31734..d179ba14ece6 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> @@ -432,6 +432,59 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>>   	return 0;
>>   }
>>   
>> +int intel_guc_slpc_info(struct intel_guc_slpc *slpc, struct drm_printer *p)
>> +{
>> +	struct drm_i915_private *i915 = guc_to_gt(slpc_to_guc(slpc))->i915;
>> +	struct slpc_shared_data *data;
>> +	struct slpc_platform_info *platform_info;
>> +	struct slpc_task_state_data *task_state_data;
>> +	intel_wakeref_t wakeref;
>> +	int ret = 0;
>> +
>> +	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
>> +
>> +	if (slpc_read_task_state(slpc)) {
>> +		ret = -EIO;
>> +		goto done;
>> +	}
>> +
>> +	GEM_BUG_ON(!slpc->vma);
>> +
>> +	drm_clflush_virt_range(slpc->vaddr, sizeof(struct slpc_shared_data));
> 
> likely will go away if integrated into slpc_read_task_state

yup.

> 
>> +	data = slpc->vaddr;
>> +
>> +	platform_info = &data->platform_info;
> 
> is this used ?

no, removed.
> 
>> +	task_state_data = &data->task_state_data;
> 
> as it looks that you treat these sections separately, then maybe it
> would be cleaner to have:
> 
> static void print_global_data(*global_data, *p) {}
> static void print_platform_info(*platform_info, *p) {}
> static void print_task_state_data(*task_state_data, *p) {}

If we make these individual functions, we'll need to duplicate a lot of
code - like grabbing the wakeref and reading the task state. Better to
keep it all in one function instead. There is no use case for printing
these other than debugfs.

> 
>> +
>> +	drm_printf(p, "SLPC state: %s\n", slpc_state_stringify(data->global_state));
>> +	drm_printf(p, "\tgtperf task active: %d\n",
>> +			task_state_data->gtperf_task_active);
>> +	drm_printf(p, "\tdcc task active: %d\n",
>> +				task_state_data->dcc_task_active);
>> +	drm_printf(p, "\tin dcc: %d\n",
>> +				task_state_data->in_dcc);
>> +	drm_printf(p, "\tfreq switch active: %d\n",
>> +				task_state_data->freq_switch_active);
>> +	drm_printf(p, "\tibc enabled: %d\n",
>> +				task_state_data->ibc_enabled);
>> +	drm_printf(p, "\tibc active: %d\n",
>> +				task_state_data->ibc_active);
>> +	drm_printf(p, "\tpg1 enabled: %s\n",
>> +				yesno(task_state_data->pg1_enabled));
>> +	drm_printf(p, "\tpg1 active: %s\n",
>> +				yesno(task_state_data->pg1_active));
>> +	drm_printf(p, "\tmax freq: %dMHz\n",
>> +				DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
>> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
>> +	drm_printf(p, "\tmin freq: %dMHz\n",
>> +				DIV_ROUND_CLOSEST(data->task_state_data.min_unslice_freq *
>> +				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
> 
> you defined task_state_data but in above 2 you're accessing it from data

Fixed.

Thanks,
Vinay.

> 
> Michal
> 
>> +
>> +done:
>> +	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
>> +	return ret;
>> +}
>> +
>>   void intel_guc_slpc_fini(struct intel_guc_slpc *slpc)
>>   {
>>   	if (!slpc->vma)
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> index 2cb830cdacb5..cd12c5f19f4b 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
>> @@ -10,6 +10,8 @@
>>   #include <linux/mutex.h>
>>   #include "intel_guc_slpc_fwif.h"
>>   
>> +struct drm_printer;
>> +
>>   struct intel_guc_slpc {
>>   	/*Protects access to vma and SLPC actions */
>>   	struct i915_vma *vma;
>> @@ -38,5 +40,6 @@ int intel_guc_slpc_set_max_freq(struct intel_guc_slpc *slpc, u32 val);
>>   int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val);
>>   int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val);
>>   int intel_guc_slpc_get_min_freq(struct intel_guc_slpc *slpc, u32 *val);
>> +int intel_guc_slpc_info(struct intel_guc_slpc *slpc, struct drm_printer *p);
>>   
>>   #endif
>>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 12/16] drm/i915/guc/slpc: Cache platform frequency limits for slpc
  2021-07-10 18:15   ` Michal Wajdeczko
  2021-07-17 19:30     ` Belgaumkar, Vinay
@ 2021-07-20 23:05     ` Belgaumkar, Vinay
  1 sibling, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-20 23:05 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 11:15 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Cache rp0, rp1 and rpn platform limits into slpc structure
>> for range checking while setting min/max frequencies.
>>
>> Also add "soft" limits which keep track of frequency changes
>> made from userland. These are initially set to platform min
>> and max.
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 41 +++++++++++++++++++++
>>   1 file changed, 41 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> index d32274cd1db7..6e978f27b7a6 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
>> @@ -86,6 +86,9 @@ static int slpc_shared_data_init(struct intel_guc_slpc *slpc)
>>   		return err;
>>   	}
>>   
>> +	slpc->max_freq_softlimit = 0;
>> +	slpc->min_freq_softlimit = 0;
> 
> as mentioned earlier, now it is time to introduce these fields in .h

done.

> 
>> +
>>   	return err;
>>   }
>>   
>> @@ -384,6 +387,29 @@ void intel_guc_pm_intrmsk_enable(struct intel_gt *gt)
>>   			   GEN6_PMINTRMSK, pm_intrmsk_mbz, 0);
>>   }
>>   
>> +static int intel_guc_slpc_set_softlimits(struct intel_guc_slpc *slpc)
>> +{
>> +	int ret = 0;
>> +
>> +	/* Softlimits are initially equivalent to platform limits
>> +	 * unless they have deviated from defaults, in which case,
>> +	 * we retain the values and set min/max accordingly.
>> +	 */
>> +	if (!slpc->max_freq_softlimit)
>> +		slpc->max_freq_softlimit = slpc->rp0_freq;
>> +	else if (slpc->max_freq_softlimit != slpc->rp0_freq)
>> +		ret = intel_guc_slpc_set_max_freq(slpc,
>> +					slpc->max_freq_softlimit);
>> +
>> +	if (!slpc->min_freq_softlimit)
>> +		slpc->min_freq_softlimit = slpc->min_freq;
>> +	else if (slpc->min_freq_softlimit != slpc->min_freq)
>> +		ret = intel_guc_slpc_set_min_freq(slpc,
>> +					slpc->min_freq_softlimit);
>> +
>> +	return ret;
>> +}
>> +
>>   /*
>>    * intel_guc_slpc_enable() - Start SLPC
>>    * @slpc: pointer to intel_guc_slpc.
>> @@ -402,6 +428,7 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>>   	struct drm_i915_private *i915 = slpc_to_i915(slpc);
>>   	struct slpc_shared_data *data;
>>   	int ret;
>> +	u32 rp_state_cap;
> 
> move up to keep "ret" last

Done.

> 
>>   
>>   	GEM_BUG_ON(!slpc->vma);
>>   
>> @@ -445,6 +472,20 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>>   			DIV_ROUND_CLOSEST(data->task_state_data.max_unslice_freq *
>>   				GT_FREQUENCY_MULTIPLIER, GEN9_FREQ_SCALER));
>>   
>> +	rp_state_cap = intel_uncore_read(i915->gt.uncore, GEN6_RP_STATE_CAP);
>> +
>> +	slpc->rp0_freq = ((rp_state_cap >> 0) & 0xff) * GT_FREQUENCY_MULTIPLIER;
>> +	slpc->min_freq = ((rp_state_cap >> 16) & 0xff) * GT_FREQUENCY_MULTIPLIER;
>> +	slpc->rp1_freq = ((rp_state_cap >> 8) & 0xff) * GT_FREQUENCY_MULTIPLIER;
> 
> we should have definitions for these bits and then we should be able to
> use REG_FIELD_GET

done.

> 
>> +
>> +	if (intel_guc_slpc_set_softlimits(slpc))
>> +		drm_err(&i915->drm, "Unable to set softlimits");
> 
> missing \n
> maybe we can also print error ?

done.

> 
>> +
>> +	drm_info(&i915->drm,
>> +		 "Platform fused frequency values -  min: %u Mhz, max: %u Mhz",
> 
> missing \n
> double space before 'min'

done.

Thanks,
Vinay.
> 
> Michal
> 
>> +		 slpc->min_freq,
>> +		 slpc->rp0_freq);
>> +
>>   	return 0;
>>   }
>>   
>>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc
  2021-07-10 18:20   ` Michal Wajdeczko
@ 2021-07-20 23:38     ` Belgaumkar, Vinay
  0 siblings, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-20 23:38 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 11:20 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Update the get/set min/max freq hooks to work for
>> slpc case as well. Consolidate helpers for requested/min/max
>> frequency get/set to intel_rps where the proper action can
>> be taken depending on whether slpc is enabled.
> 
> 2x s/slpc/SLPC

done.
> 
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
>> Signed-off-by: Sujaritha Sundaresan <sujaritha.sundaresan@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/intel_rps.c | 135 ++++++++++++++++++++++++++++
>>   drivers/gpu/drm/i915/gt/intel_rps.h |   5 ++
>>   drivers/gpu/drm/i915/i915_pmu.c     |   2 +-
>>   drivers/gpu/drm/i915/i915_reg.h     |   2 +
>>   drivers/gpu/drm/i915/i915_sysfs.c   |  71 +++------------
>>   5 files changed, 154 insertions(+), 61 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
>> index e858eeb2c59d..88ffc5d90730 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_rps.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_rps.c
>> @@ -37,6 +37,12 @@ static struct intel_uncore *rps_to_uncore(struct intel_rps *rps)
>>   	return rps_to_gt(rps)->uncore;
>>   }
>>   
>> +static struct intel_guc_slpc *rps_to_slpc(struct intel_rps *rps)
>> +{
>> +	struct intel_gt *gt = rps_to_gt(rps);
>> +	return &gt->uc.guc.slpc;
> 
> either add empty line between decl/code or make it one-liner

done.

> 
>> +}
>> +
>>   static bool rps_uses_slpc(struct intel_rps *rps)
>>   {
>>   	struct intel_gt *gt = rps_to_gt(rps);
>> @@ -1960,6 +1966,135 @@ u32 intel_rps_read_actual_frequency(struct intel_rps *rps)
>>   	return freq;
>>   }
>>   
>> +u32 intel_rps_read_punit_req(struct intel_rps *rps)
>> +{
>> +	struct intel_uncore *uncore = rps_to_uncore(rps);
>> +
> 
> drop empty line

done.
> 
>> +	u32 pureq = intel_uncore_read(uncore, GEN6_RPNSWREQ);
>> +
>> +	return pureq;
>> +}
>> +
>> +u32 intel_rps_get_req(struct intel_rps *rps, u32 pureq)
>> +{
>> +	u32 req = pureq >> GEN9_SW_REQ_UNSLICE_RATIO_SHIFT;
>> +
>> +	return req;
>> +}
>> +
>> +u32 intel_rps_read_punit_req_frequency(struct intel_rps *rps)
>> +{
>> +	u32 freq = intel_rps_get_req(rps, intel_rps_read_punit_req(rps));
>> +
>> +	return intel_gpu_freq(rps, freq);
>> +}
>> +
>> +u32 intel_rps_get_requested_frequency(struct intel_rps *rps)
>> +{
>> +	if (rps_uses_slpc(rps))
>> +		return intel_rps_read_punit_req_frequency(rps);
>> +	else
>> +		return intel_gpu_freq(rps, rps->cur_freq);
>> +}
>> +
>> +u32 intel_rps_get_max_frequency(struct intel_rps *rps)
>> +{
>> +	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
>> +
>> +	if (rps_uses_slpc(rps))
>> +		return slpc->max_freq_softlimit;
>> +	else
>> +		return intel_gpu_freq(rps, rps->max_freq_softlimit);
>> +}
>> +
>> +int intel_rps_set_max_frequency(struct intel_rps *rps, u32 val)
>> +{
>> +	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
>> +	int ret;
>> +
>> +	if (rps_uses_slpc(rps))
>> +		return intel_guc_slpc_set_max_freq(slpc, val);
>> +
>> +	mutex_lock(&rps->lock);
>> +
>> +	val = intel_freq_opcode(rps, val);
>> +	if (val < rps->min_freq ||
>> +	    val > rps->max_freq ||
>> +	    val < rps->min_freq_softlimit) {
>> +		ret = -EINVAL;
>> +		goto unlock;
>> +	}
>> +
>> +	if (val > rps->rp0_freq)
>> +		DRM_DEBUG("User requested overclocking to %d\n",
> 
> use drm_dbg

Done.
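
i.e. (assuming the existing rps_to_i915() helper):

	struct drm_i915_private *i915 = rps_to_i915(rps);
	...
	if (val > rps->rp0_freq)
		drm_dbg(&i915->drm, "User requested overclocking to %d\n",
			intel_gpu_freq(rps, val));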

Thanks,
Vinay.
> 
> Michal
> 
>> +			  intel_gpu_freq(rps, val));
>> +
>> +	rps->max_freq_softlimit = val;
>> +
>> +	val = clamp_t(int, rps->cur_freq,
>> +		      rps->min_freq_softlimit,
>> +		      rps->max_freq_softlimit);
>> +
>> +	/*
>> +	 * We still need *_set_rps to process the new max_delay and
>> +	 * update the interrupt limits and PMINTRMSK even though
>> +	 * frequency request may be unchanged.
>> +	 */
>> +	intel_rps_set(rps, val);
>> +
>> +unlock:
>> +	mutex_unlock(&rps->lock);
>> +
>> +	return ret;
>> +}
>> +
>> +u32 intel_rps_get_min_frequency(struct intel_rps *rps)
>> +{
>> +	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
>> +
>> +	if (rps_uses_slpc(rps))
>> +		return slpc->min_freq_softlimit;
>> +	else
>> +		return intel_gpu_freq(rps, rps->min_freq_softlimit);
>> +}
>> +
>> +int intel_rps_set_min_frequency(struct intel_rps *rps, u32 val)
>> +{
>> +	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
>> +	int ret;
>> +
>> +	if (rps_uses_slpc(rps))
>> +		return intel_guc_slpc_set_min_freq(slpc, val);
>> +
>> +	mutex_lock(&rps->lock);
>> +
>> +	val = intel_freq_opcode(rps, val);
>> +	if (val < rps->min_freq ||
>> +	    val > rps->max_freq ||
>> +	    val > rps->max_freq_softlimit) {
>> +		ret = -EINVAL;
>> +		goto unlock;
>> +	}
>> +
>> +	rps->min_freq_softlimit = val;
>> +
>> +	val = clamp_t(int, rps->cur_freq,
>> +		      rps->min_freq_softlimit,
>> +		      rps->max_freq_softlimit);
>> +
>> +	/*
>> +	 * We still need *_set_rps to process the new min_delay and
>> +	 * update the interrupt limits and PMINTRMSK even though
>> +	 * frequency request may be unchanged.
>> +	 */
>> +	intel_rps_set(rps, val);
>> +
>> +unlock:
>> +	mutex_unlock(&rps->lock);
>> +
>> +	return ret;
>> +}
>> +
>>   /* External interface for intel_ips.ko */
>>   
>>   static struct drm_i915_private __rcu *ips_mchdev;
>> diff --git a/drivers/gpu/drm/i915/gt/intel_rps.h b/drivers/gpu/drm/i915/gt/intel_rps.h
>> index 1d2cfc98b510..9a09ff5ebf64 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_rps.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_rps.h
>> @@ -31,6 +31,11 @@ int intel_gpu_freq(struct intel_rps *rps, int val);
>>   int intel_freq_opcode(struct intel_rps *rps, int val);
>>   u32 intel_rps_get_cagf(struct intel_rps *rps, u32 rpstat1);
>>   u32 intel_rps_read_actual_frequency(struct intel_rps *rps);
>> +u32 intel_rps_get_requested_frequency(struct intel_rps *rps);
>> +u32 intel_rps_get_min_frequency(struct intel_rps *rps);
>> +int intel_rps_set_min_frequency(struct intel_rps *rps, u32 val);
>> +u32 intel_rps_get_max_frequency(struct intel_rps *rps);
>> +int intel_rps_set_max_frequency(struct intel_rps *rps, u32 val);
>>   
>>   void gen5_rps_irq_handler(struct intel_rps *rps);
>>   void gen6_rps_irq_handler(struct intel_rps *rps, u32 pm_iir);
>> diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
>> index 34d37d46a126..a896bec18255 100644
>> --- a/drivers/gpu/drm/i915/i915_pmu.c
>> +++ b/drivers/gpu/drm/i915/i915_pmu.c
>> @@ -407,7 +407,7 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns)
>>   
>>   	if (pmu->enable & config_mask(I915_PMU_REQUESTED_FREQUENCY)) {
>>   		add_sample_mult(&pmu->sample[__I915_SAMPLE_FREQ_REQ],
>> -				intel_gpu_freq(rps, rps->cur_freq),
>> +				intel_rps_get_requested_frequency(rps),
>>   				period_ns / 1000);
>>   	}
>>   
>> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
>> index 7d9e90aa3ec0..8ab3c2f8f8e4 100644
>> --- a/drivers/gpu/drm/i915/i915_reg.h
>> +++ b/drivers/gpu/drm/i915/i915_reg.h
>> @@ -9195,6 +9195,8 @@ enum {
>>   #define   GEN9_FREQUENCY(x)			((x) << 23)
>>   #define   GEN6_OFFSET(x)			((x) << 19)
>>   #define   GEN6_AGGRESSIVE_TURBO			(0 << 15)
>> +#define   GEN9_SW_REQ_UNSLICE_RATIO_SHIFT 	23
>> +
>>   #define GEN6_RC_VIDEO_FREQ			_MMIO(0xA00C)
>>   #define GEN6_RC_CONTROL				_MMIO(0xA090)
>>   #define   GEN6_RC_CTL_RC6pp_ENABLE		(1 << 16)
>> diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
>> index 873bf996ceb5..f2eee8491b19 100644
>> --- a/drivers/gpu/drm/i915/i915_sysfs.c
>> +++ b/drivers/gpu/drm/i915/i915_sysfs.c
>> @@ -272,7 +272,7 @@ static ssize_t gt_cur_freq_mhz_show(struct device *kdev,
>>   	struct drm_i915_private *i915 = kdev_minor_to_i915(kdev);
>>   	struct intel_rps *rps = &i915->gt.rps;
>>   
>> -	return sysfs_emit(buf, "%d\n", intel_gpu_freq(rps, rps->cur_freq));
>> +	return sysfs_emit(buf, "%d\n", intel_rps_get_requested_frequency(rps));
>>   }
>>   
>>   static ssize_t gt_boost_freq_mhz_show(struct device *kdev, struct device_attribute *attr, char *buf)
>> @@ -326,9 +326,10 @@ static ssize_t vlv_rpe_freq_mhz_show(struct device *kdev,
>>   static ssize_t gt_max_freq_mhz_show(struct device *kdev, struct device_attribute *attr, char *buf)
>>   {
>>   	struct drm_i915_private *dev_priv = kdev_minor_to_i915(kdev);
>> -	struct intel_rps *rps = &dev_priv->gt.rps;
>> +	struct intel_gt *gt = &dev_priv->gt;
>> +	struct intel_rps *rps = &gt->rps;
>>   
>> -	return sysfs_emit(buf, "%d\n", intel_gpu_freq(rps, rps->max_freq_softlimit));
>> +	return sysfs_emit(buf, "%d\n", intel_rps_get_max_frequency(rps));
>>   }
>>   
>>   static ssize_t gt_max_freq_mhz_store(struct device *kdev,
>> @@ -336,7 +337,8 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
>>   				     const char *buf, size_t count)
>>   {
>>   	struct drm_i915_private *dev_priv = kdev_minor_to_i915(kdev);
>> -	struct intel_rps *rps = &dev_priv->gt.rps;
>> +	struct intel_gt *gt = &dev_priv->gt;
>> +	struct intel_rps *rps = &gt->rps;
>>   	ssize_t ret;
>>   	u32 val;
>>   
>> @@ -344,35 +346,7 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
>>   	if (ret)
>>   		return ret;
>>   
>> -	mutex_lock(&rps->lock);
>> -
>> -	val = intel_freq_opcode(rps, val);
>> -	if (val < rps->min_freq ||
>> -	    val > rps->max_freq ||
>> -	    val < rps->min_freq_softlimit) {
>> -		ret = -EINVAL;
>> -		goto unlock;
>> -	}
>> -
>> -	if (val > rps->rp0_freq)
>> -		DRM_DEBUG("User requested overclocking to %d\n",
>> -			  intel_gpu_freq(rps, val));
>> -
>> -	rps->max_freq_softlimit = val;
>> -
>> -	val = clamp_t(int, rps->cur_freq,
>> -		      rps->min_freq_softlimit,
>> -		      rps->max_freq_softlimit);
>> -
>> -	/*
>> -	 * We still need *_set_rps to process the new max_delay and
>> -	 * update the interrupt limits and PMINTRMSK even though
>> -	 * frequency request may be unchanged.
>> -	 */
>> -	intel_rps_set(rps, val);
>> -
>> -unlock:
>> -	mutex_unlock(&rps->lock);
>> +	ret = intel_rps_set_max_frequency(rps, val);
>>   
>>   	return ret ?: count;
>>   }
>> @@ -380,9 +354,10 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
>>   static ssize_t gt_min_freq_mhz_show(struct device *kdev, struct device_attribute *attr, char *buf)
>>   {
>>   	struct drm_i915_private *dev_priv = kdev_minor_to_i915(kdev);
>> -	struct intel_rps *rps = &dev_priv->gt.rps;
>> +	struct intel_gt *gt = &dev_priv->gt;
>> +	struct intel_rps *rps = &gt->rps;
>>   
>> -	return sysfs_emit(buf, "%d\n", intel_gpu_freq(rps, rps->min_freq_softlimit));
>> +	return sysfs_emit(buf, "%d\n", intel_rps_get_min_frequency(rps));
>>   }
>>   
>>   static ssize_t gt_min_freq_mhz_store(struct device *kdev,
>> @@ -398,31 +373,7 @@ static ssize_t gt_min_freq_mhz_store(struct device *kdev,
>>   	if (ret)
>>   		return ret;
>>   
>> -	mutex_lock(&rps->lock);
>> -
>> -	val = intel_freq_opcode(rps, val);
>> -	if (val < rps->min_freq ||
>> -	    val > rps->max_freq ||
>> -	    val > rps->max_freq_softlimit) {
>> -		ret = -EINVAL;
>> -		goto unlock;
>> -	}
>> -
>> -	rps->min_freq_softlimit = val;
>> -
>> -	val = clamp_t(int, rps->cur_freq,
>> -		      rps->min_freq_softlimit,
>> -		      rps->max_freq_softlimit);
>> -
>> -	/*
>> -	 * We still need *_set_rps to process the new min_delay and
>> -	 * update the interrupt limits and PMINTRMSK even though
>> -	 * frequency request may be unchanged.
>> -	 */
>> -	intel_rps_set(rps, val);
>> -
>> -unlock:
>> -	mutex_unlock(&rps->lock);
>> +	ret = intel_rps_set_min_frequency(rps, val);
>>   
>>   	return ret ?: count;
>>   }
>>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Intel-gfx] [PATCH 15/16] drm/i915/guc/slpc: slpc selftest
  2021-07-10 18:29   ` Michal Wajdeczko
@ 2021-07-21  1:06     ` Belgaumkar, Vinay
  0 siblings, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-21  1:06 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 11:29 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> Tests that exercise the slpc get/set frequency interfaces.
>>
>> Clamp_max will set max frequency to multiple levels and check
>> that slpc requests frequency lower than or equal to it.
>>
>> Clamp_min will set min frequency to different levels and check
>> if slpc requests are higher or equal to those levels.
> 
> 2x s/slpc/SLPC

Done.

> 
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/intel_rps.c           |   1 +
>>   drivers/gpu/drm/i915/gt/selftest_slpc.c       | 333 ++++++++++++++++++
>>   drivers/gpu/drm/i915/gt/selftest_slpc.h       |  12 +
>>   .../drm/i915/selftests/i915_live_selftests.h  |   1 +
>>   4 files changed, 347 insertions(+)
>>   create mode 100644 drivers/gpu/drm/i915/gt/selftest_slpc.c
>>   create mode 100644 drivers/gpu/drm/i915/gt/selftest_slpc.h
>>
>> diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
>> index 88ffc5d90730..16ac2e840881 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_rps.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_rps.c
>> @@ -2288,4 +2288,5 @@ EXPORT_SYMBOL_GPL(i915_gpu_turbo_disable);
>>   
>>   #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
>>   #include "selftest_rps.c"
>> +#include "selftest_slpc.c"
>>   #endif
>> diff --git a/drivers/gpu/drm/i915/gt/selftest_slpc.c b/drivers/gpu/drm/i915/gt/selftest_slpc.c
>> new file mode 100644
>> index 000000000000..f440c1cb2afa
>> --- /dev/null
>> +++ b/drivers/gpu/drm/i915/gt/selftest_slpc.c
>> @@ -0,0 +1,333 @@
>> +// SPDX-License-Identifier: MIT
>> +/*
>> + * Copyright © 2020 Intel Corporation
> 
> 2021

Done.

> 
>> + */
>> +#include "selftest_slpc.h"
>> +#include "selftest_rps.h"
>> +
>> +#include <linux/pm_qos.h>
>> +#include <linux/sort.h>
> 
> system headers should go first

Cleaned up and removed the unwanted headers.

> 
>> +
>> +#include "intel_engine_heartbeat.h"
>> +#include "intel_engine_pm.h"
>> +#include "intel_gpu_commands.h"
>> +#include "intel_gt_clock_utils.h"
>> +#include "intel_gt_pm.h"
>> +#include "intel_rc6.h"
>> +#include "selftest_engine_heartbeat.h"
>> +#include "intel_rps.h"
>> +#include "selftests/igt_flush_test.h"
>> +#include "selftests/igt_spinner.h"
> 
> wrong order ?

Removed all of the includes, since they are already pulled in via 
selftest_rps.c; selftest_slpc.c itself gets included from intel_rps.c 
when the slpc selftest is compiled.
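
For reference, the effective chain (as already shown in the hunk above) is
roughly:

	/* tail of intel_rps.c */
	#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
	#include "selftest_rps.c"	/* already pulls in the igt/selftest headers */
	#include "selftest_slpc.c"	/* added by this patch */
	#endif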

> 
>> +
>> +#define NUM_STEPS 5
>> +#define H2G_DELAY 50000
>> +#define delay_for_h2g() usleep_range(H2G_DELAY, H2G_DELAY + 10000)
>> +
>> +static int set_min_freq(struct intel_guc_slpc *slpc, int freq)
>> +{
>> +	int ret;
> 
> add empty line

done.

> 
>> +	ret = intel_guc_slpc_set_min_freq(slpc, freq);
>> +	if (ret) {
>> +		pr_err("Could not set min frequency to [%d]\n", freq);
>> +		return ret;
>> +	} else {
>> +		/* Delay to ensure h2g completes */
>> +		delay_for_h2g();
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int set_max_freq(struct intel_guc_slpc *slpc, int freq)
>> +{
>> +	int ret;
> 
> add empty line

done.

> 
>> +	ret = intel_guc_slpc_set_max_freq(slpc, freq);
>> +	if (ret) {
>> +		pr_err("Could not set maximum frequency [%d]\n",
>> +			freq);
>> +		return ret;
>> +	} else {
>> +		/* Delay to ensure h2g completes */
>> +		delay_for_h2g();
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +int live_slpc_clamp_min(void *arg)
>> +{
>> +	struct drm_i915_private *i915 = arg;
>> +	struct intel_gt *gt = &i915->gt;
>> +	struct intel_guc_slpc *slpc;
>> +	struct intel_rps *rps;
>> +	struct intel_engine_cs *engine;
>> +	enum intel_engine_id id;
>> +	struct igt_spinner spin;
>> +	int err = 0;
> 
> usually "err" is last decl

ok.

> 
>> +	u32 slpc_min_freq, slpc_max_freq;
>> +
>> +
> 
> too many empty lines

removed.

> 
>> +	slpc = &gt->uc.guc.slpc;
>> +	rps = &gt->rps;
> 
> could be initialized in decl above

ok.

> 
>> +
>> +	if (!intel_uc_uses_guc_slpc(&gt->uc))
>> +		return 0;
>> +
>> +	if (igt_spinner_init(&spin, gt))
>> +		return -ENOMEM;
>> +
>> +	if (intel_guc_slpc_get_max_freq(slpc, &slpc_max_freq)) {
>> +		pr_err("Could not get SLPC max freq");
>> +		return -EIO;
>> +	}
>> +
>> +	if (intel_guc_slpc_get_min_freq(slpc, &slpc_min_freq)) {
>> +		pr_err("Could not get SLPC min freq");
>> +		return -EIO;
>> +	}
>> +
>> +	if (slpc_min_freq == slpc_max_freq) {
>> +		pr_err("Min/Max are fused to the same value");
>> +		return -EINVAL;
>> +	}
> 
> 3x missing \n

done.

> 
>> +
>> +	intel_gt_pm_wait_for_idle(gt);
>> +	intel_gt_pm_get(gt);
>> +	for_each_engine(engine, gt, id) {
>> +		struct i915_request *rq;
>> +		u32 step, min_freq, req_freq;
>> +		u32 act_freq, max_act_freq;
>> +
>> +		if (!intel_engine_can_store_dword(engine))
>> +			continue;
>> +
>> +		/* Go from min to max in 5 steps */
>> +		step = (slpc_max_freq - slpc_min_freq)/NUM_STEPS;
> 
> add spaces ") / NUM"

ok.

> 
>> +		max_act_freq = slpc_min_freq;
>> +		for (min_freq = slpc_min_freq; min_freq < slpc_max_freq; min_freq+=step)
> 
> add spaces " += "

ok.

> 
>> +		{
>> +			err = set_min_freq(slpc, min_freq);
>> +			if (err)
>> +				break;
>> +
>> +			st_engine_heartbeat_disable(engine);
>> +
>> +
> 
> keep only one empty line

ok.

> 
>> +			rq = igt_spinner_create_request(&spin,
>> +					engine->kernel_context,
>> +					MI_NOOP);
>> +			if (IS_ERR(rq)) {
>> +				err = PTR_ERR(rq);
>> +				st_engine_heartbeat_enable(engine);
>> +				break;
>> +			}
>> +
>> +			i915_request_add(rq);
>> +
>> +			if (!igt_wait_for_spinner(&spin, rq)) {
>> +				pr_err("%s: Spinner did not start\n",
>> +					engine->name);
>> +				igt_spinner_end(&spin);
>> +				st_engine_heartbeat_enable(engine);
>> +				intel_gt_set_wedged(engine->gt);
>> +				err = -EIO;
>> +				break;
>> +			}
>> +
>> +			/* Wait for GuC to detect busyness and raise
>> +			 * requested frequency if necessary */
>> +			delay_for_h2g();
>> +
>> +			req_freq = intel_rps_read_punit_req_frequency(rps);
>> +
>> +			/* GuC requests freq in multiples of 50/3 MHz */
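>> +			/* (integer 50/3 == 16, i.e. ~16 MHz of slack below min) */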
>> +			if (req_freq < (min_freq - 50/3)) {
>> +				pr_err("SWReq is %d, should be at least %d", req_freq,
>> +					min_freq - 50/3);
>> +				igt_spinner_end(&spin);
>> +				st_engine_heartbeat_enable(engine);
>> +				err = -EINVAL;
>> +				break;
>> +			}
>> +
>> +			act_freq =  intel_rps_read_actual_frequency(rps);
>> +			if (act_freq > max_act_freq)
>> +				max_act_freq = act_freq;
>> +
>> +			igt_spinner_end(&spin);
>> +			st_engine_heartbeat_enable(engine);
>> +		}
>> +
>> +		pr_info("Max actual frequency for %s was %d",
>> +				engine->name, max_act_freq);
>> +
>> +		/* Actual frequency should rise above min */
>> +		if (max_act_freq == slpc_min_freq) {
>> +			pr_err("Actual freq did not rise above min");
>> +			err = -EINVAL;
>> +		}
> 
> 2x missing \n
> 
> and few more below

added.

> 
>> +
>> +		if (err)
>> +			break;
>> +	}
>> +
>> +	/* Restore min/max frequencies */
>> +	set_max_freq(slpc, slpc_max_freq);
>> +	set_min_freq(slpc, slpc_min_freq);
>> +
>> +	if (igt_flush_test(gt->i915))
>> +		err = -EIO;
>> +
>> +	intel_gt_pm_put(gt);
>> +	igt_spinner_fini(&spin);
>> +	intel_gt_pm_wait_for_idle(gt);
>> +
>> +	return err;
>> +}
>> +
>> +int live_slpc_clamp_max(void *arg)
>> +{
>> +	struct drm_i915_private *i915 = arg;
>> +	struct intel_gt *gt = &i915->gt;
>> +	struct intel_guc_slpc *slpc;
>> +	struct intel_rps *rps;
>> +	struct intel_engine_cs *engine;
>> +	enum intel_engine_id id;
>> +	struct igt_spinner spin;
>> +	int err = 0;
>> +	u32 slpc_min_freq, slpc_max_freq;
>> +
>> +	slpc = &gt->uc.guc.slpc;
>> +	rps = &gt->rps;
>> +
>> +	if (!intel_uc_uses_guc_slpc(&gt->uc))
>> +		return 0;
>> +
>> +	if (igt_spinner_init(&spin, gt))
>> +		return -ENOMEM;
>> +
>> +	if (intel_guc_slpc_get_max_freq(slpc, &slpc_max_freq)) {
>> +		pr_err("Could not get SLPC max freq");
>> +		return -EIO;
>> +	}
>> +
>> +	if (intel_guc_slpc_get_min_freq(slpc, &slpc_min_freq)) {
>> +		pr_err("Could not get SLPC min freq");
>> +		return -EIO;
>> +	}
>> +
>> +	if (slpc_min_freq == slpc_max_freq) {
>> +		pr_err("Min/Max are fused to the same value");
>> +		return -EINVAL;
>> +	}
>> +
>> +	intel_gt_pm_wait_for_idle(gt);
>> +	intel_gt_pm_get(gt);
>> +	for_each_engine(engine, gt, id) {
>> +		struct i915_request *rq;
>> +		u32 max_freq, req_freq;
>> +		u32 act_freq, max_act_freq;
>> +		u32 step;
>> +
>> +		if (!intel_engine_can_store_dword(engine))
>> +			continue;
>> +
>> +		/* Go from max to min in 5 steps */
>> +		step = (slpc_max_freq - slpc_min_freq)/NUM_STEPS;
>> +		max_act_freq = slpc_min_freq;
>> +		for (max_freq = slpc_max_freq; max_freq > slpc_min_freq; max_freq-=step)
>> +		{
>> +			err = set_max_freq(slpc, max_freq);
>> +			if (err)
>> +				break;
>> +
>> +			st_engine_heartbeat_disable(engine);
>> +
>> +			rq = igt_spinner_create_request(&spin,
>> +						engine->kernel_context,
>> +						MI_NOOP);
>> +			if (IS_ERR(rq)) {
>> +				st_engine_heartbeat_enable(engine);
>> +				err = PTR_ERR(rq);
>> +				break;
>> +			}
>> +
>> +			i915_request_add(rq);
>> +
>> +			if (!igt_wait_for_spinner(&spin, rq)) {
>> +				pr_err("%s: SLPC spinner did not start\n",
>> +				       engine->name);
>> +				igt_spinner_end(&spin);
>> +				st_engine_heartbeat_enable(engine);
>> +				intel_gt_set_wedged(engine->gt);
>> +				err = -EIO;
>> +				break;
>> +			}
>> +
>> +			delay_for_h2g();
>> +
>> +			/* Verify that SWREQ indeed was set to specific value */
>> +			req_freq = intel_rps_read_punit_req_frequency(rps);
>> +
>> +			/* GuC requests freq in multiples of 50/3 MHz */
>> +			if (req_freq > (max_freq + 50/3)) {
>> +				pr_err("SWReq is %d, should be at most %d", req_freq,
>> +					max_freq + 50/3);
>> +				igt_spinner_end(&spin);
>> +				st_engine_heartbeat_enable(engine);
>> +				err = -EINVAL;
>> +				break;
>> +			}
>> +
>> +			act_freq =  intel_rps_read_actual_frequency(rps);
>> +			if (act_freq > max_act_freq)
>> +				max_act_freq = act_freq;
>> +
>> +			st_engine_heartbeat_enable(engine);
>> +			igt_spinner_end(&spin);
>> +
>> +			if (err)
>> +				break;
>> +		}
>> +
>> +		pr_info("Max actual frequency for %s was %d",
>> +				engine->name, max_act_freq);
>> +
>> +		/* Actual frequency should rise above min */
>> +		if (max_act_freq == slpc_min_freq) {
>> +			pr_err("Actual freq did not rise above min");
>> +			err = -EINVAL;
>> +		}
>> +
>> +		if (igt_flush_test(gt->i915)) {
>> +			err = -EIO;
>> +			break;
>> +		}
>> +
>> +		if (err)
>> +			break;
>> +	}
>> +
>> +	/* Restore min/max freq */
>> +	set_max_freq(slpc, slpc_max_freq);
>> +	set_min_freq(slpc, slpc_min_freq);
>> +
>> +	intel_gt_pm_put(gt);
>> +	igt_spinner_fini(&spin);
>> +	intel_gt_pm_wait_for_idle(gt);
>> +
>> +	return err;
>> +}
>> +
>> +int intel_slpc_live_selftests(struct drm_i915_private *i915)
>> +{
>> +	static const struct i915_subtest tests[] = {
>> +		SUBTEST(live_slpc_clamp_max),
>> +		SUBTEST(live_slpc_clamp_min),
>> +	};
>> +
>> +	if (intel_gt_is_wedged(&i915->gt))
>> +		return 0;
>> +
>> +	return i915_live_subtests(tests, i915);
>> +}
>> diff --git a/drivers/gpu/drm/i915/gt/selftest_slpc.h b/drivers/gpu/drm/i915/gt/selftest_slpc.h
>> new file mode 100644
>> index 000000000000..8dfb40916a8c
>> --- /dev/null
>> +++ b/drivers/gpu/drm/i915/gt/selftest_slpc.h
>> @@ -0,0 +1,12 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2020 Intel Corporation
> 
> 2021

done.
Thanks,
Vinay.
> 
> Michal
> 
>> + */
>> +
>> +#ifndef SELFTEST_SLPC_H
>> +#define SELFTEST_SLPC_H
>> +
>> +int live_slpc_clamp_max(void *arg);
>> +int live_slpc_clamp_min(void *arg);
>> +
>> +#endif /* SELFTEST_SLPC_H */
>> diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
>> index e2fd1b61af71..1746a56dda06 100644
>> --- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
>> +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
>> @@ -47,5 +47,6 @@ selftest(hangcheck, intel_hangcheck_live_selftests)
>>   selftest(execlists, intel_execlists_live_selftests)
>>   selftest(ring_submission, intel_ring_submission_live_selftests)
>>   selftest(perf, i915_perf_live_selftests)
>> +selftest(slpc, intel_slpc_live_selftests)
>>   /* Here be dragons: keep last to run last! */
>>   selftest(late_gt_pm, intel_gt_pm_late_selftests)
>>

* Re: [Intel-gfx] [PATCH 16/16] drm/i915/guc/rc: Setup and enable GUCRC feature
  2021-07-10 18:41   ` Michal Wajdeczko
@ 2021-07-21  1:11     ` Belgaumkar, Vinay
  0 siblings, 0 replies; 53+ messages in thread
From: Belgaumkar, Vinay @ 2021-07-21  1:11 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-gfx, dri-devel



On 7/10/2021 11:41 AM, Michal Wajdeczko wrote:
> 
> 
> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
>> This feature hands over the control of HW RC6 to the GUC.
>> GUC decides when to put HW into RC6 based on its internal
>> busyness algorithms.
>>
>> GUCRC needs GUC submission to be enabled, and is only
>> supported on Gen12+ for now.
>>
>> When GUCRC is enabled, do not set HW RC6. Use a H2G message
>> to tell guc to enable GUCRC. When disabling RC6, tell guc to
> 
> s/GUC/GuC
> s/guc/GuC

Done.

> 
>> revert RC6 control back to KMD.
>>
>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> ---
>>   drivers/gpu/drm/i915/Makefile                 |  1 +
>>   drivers/gpu/drm/i915/gt/intel_rc6.c           | 22 ++++--
>>   .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h  |  6 ++
>>   drivers/gpu/drm/i915/gt/uc/intel_guc.c        |  1 +
>>   drivers/gpu/drm/i915/gt/uc/intel_guc.h        |  2 +
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c     | 79 +++++++++++++++++++
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h     | 32 ++++++++
>>   drivers/gpu/drm/i915/gt/uc/intel_uc.h         |  2 +
>>   8 files changed, 140 insertions(+), 5 deletions(-)
>>   create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
>>   create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h
>>
>> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
>> index d8eac4468df9..3fc17f20d88e 100644
>> --- a/drivers/gpu/drm/i915/Makefile
>> +++ b/drivers/gpu/drm/i915/Makefile
>> @@ -186,6 +186,7 @@ i915-y += gt/uc/intel_uc.o \
>>   	  gt/uc/intel_guc_fw.o \
>>   	  gt/uc/intel_guc_log.o \
>>   	  gt/uc/intel_guc_log_debugfs.o \
>> +	  gt/uc/intel_guc_rc.o \
>>   	  gt/uc/intel_guc_slpc.o \
>>   	  gt/uc/intel_guc_submission.o \
>>   	  gt/uc/intel_huc.o \
>> diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c
>> index 259d7eb4e165..299fcf10b04b 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_rc6.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_rc6.c
>> @@ -98,11 +98,19 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
>>   	set(uncore, GEN9_MEDIA_PG_IDLE_HYSTERESIS, 60);
>>   	set(uncore, GEN9_RENDER_PG_IDLE_HYSTERESIS, 60);
>>   
>> -	/* 3a: Enable RC6 */
>> -	rc6->ctl_enable =
>> -		GEN6_RC_CTL_HW_ENABLE |
>> -		GEN6_RC_CTL_RC6_ENABLE |
>> -		GEN6_RC_CTL_EI_MODE(1);
>> +	/* 3a: Enable RC6
>> +	 *
>> +	 * With GUCRC, we do not enable bit 31 of RC_CTL,
>> +	 * thus allowing GuC to control RC6 entry/exit fully instead.
>> +	 * We will not set the HW ENABLE and EI bits
>> +	 */
>> +	if (!intel_guc_rc_enable(&gt->uc.guc))
>> +		rc6->ctl_enable = GEN6_RC_CTL_RC6_ENABLE;
>> +	else
>> +		rc6->ctl_enable =
>> +			GEN6_RC_CTL_HW_ENABLE |
>> +			GEN6_RC_CTL_RC6_ENABLE |
>> +			GEN6_RC_CTL_EI_MODE(1);
>>   
>>   	pg_enable =
>>   		GEN9_RENDER_PG_ENABLE |
>> @@ -513,6 +521,10 @@ static void __intel_rc6_disable(struct intel_rc6 *rc6)
>>   {
>>   	struct drm_i915_private *i915 = rc6_to_i915(rc6);
>>   	struct intel_uncore *uncore = rc6_to_uncore(rc6);
>> +	struct intel_gt *gt = rc6_to_gt(rc6);
>> +
>> +	/* Take control of RC6 back from GuC */
>> +	intel_guc_rc_disable(&gt->uc.guc);
>>   
>>   	intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
>>   	if (GRAPHICS_VER(i915) >= 9)
>> diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
>> index 596cf4b818e5..2ddb9cdc0a59 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
>> @@ -136,6 +136,7 @@ enum intel_guc_action {
>>   	INTEL_GUC_ACTION_CONTEXT_RESET_NOTIFICATION = 0x1008,
>>   	INTEL_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION = 0x1009,
>>   	INTEL_GUC_ACTION_SLPC_REQUEST = 0x3003,
>> +	INTEL_GUC_ACTION_SETUP_PC_GUCRC = 0x3004,
>>   	INTEL_GUC_ACTION_AUTHENTICATE_HUC = 0x4000,
>>   	INTEL_GUC_ACTION_REGISTER_CONTEXT = 0x4502,
>>   	INTEL_GUC_ACTION_DEREGISTER_CONTEXT = 0x4503,
>> @@ -146,6 +147,11 @@ enum intel_guc_action {
>>   	INTEL_GUC_ACTION_LIMIT
>>   };
>>   
>> +enum intel_guc_rc_options {
>> +	INTEL_GUCRC_HOST_CONTROL,
>> +	INTEL_GUCRC_FIRMWARE_CONTROL,
>> +};
>> +
>>   enum intel_guc_preempt_options {
>>   	INTEL_GUC_PREEMPT_OPTION_DROP_WORK_Q = 0x4,
>>   	INTEL_GUC_PREEMPT_OPTION_DROP_SUBMIT_Q = 0x8,
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> index 82863a9bc8e8..0d55b24f7c67 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
>> @@ -158,6 +158,7 @@ void intel_guc_init_early(struct intel_guc *guc)
>>   	intel_guc_log_init_early(&guc->log);
>>   	intel_guc_submission_init_early(guc);
>>   	intel_guc_slpc_init_early(guc);
>> +	intel_guc_rc_init_early(guc);
>>   
>>   	mutex_init(&guc->send_mutex);
>>   	spin_lock_init(&guc->irq_lock);
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> index 0dbbd9cf553f..592d52e5e93c 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
>> @@ -59,6 +59,8 @@ struct intel_guc {
>>   
>>   	bool submission_supported;
>>   	bool submission_selected;
>> +	bool rc_supported;
>> +	bool rc_selected;
>>   	bool slpc_supported;
>>   	bool slpc_selected;
>>   
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
>> new file mode 100644
>> index 000000000000..45b61432c56d
>> --- /dev/null
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.c
>> @@ -0,0 +1,79 @@
>> +// SPDX-License-Identifier: MIT
>> +/*
>> + * Copyright © 2020 Intel Corporation
> 
> 2021

Done.
> 
>> +*/
> 
> unaligned *

done.

> 
>> +
>> +#include "intel_guc_rc.h"
>> +#include "gt/intel_gt.h"
>> +#include "i915_drv.h"
>> +
>> +static bool __guc_rc_supported(struct intel_guc *guc)
>> +{
>> +	/* GuC RC is unavailable for pre-Gen12 */
>> +	return guc->submission_supported &&
>> +		GRAPHICS_VER(guc_to_gt(guc)->i915) >= 12;
>> +}
>> +
>> +static bool __guc_rc_selected(struct intel_guc *guc)
>> +{
>> +	if (!intel_guc_rc_is_supported(guc))
>> +		return false;
>> +
>> +	return guc->submission_selected;
>> +}
>> +
>> +void intel_guc_rc_init_early(struct intel_guc *guc)
>> +{
>> +	guc->rc_supported = __guc_rc_supported(guc);
>> +	guc->rc_selected = __guc_rc_selected(guc);
>> +}
>> +
>> +static int guc_action_control_gucrc(struct intel_guc *guc, bool enable)
>> +{
>> +	struct drm_device *drm = &guc_to_gt(guc)->i915->drm;
>> +	u32 rc_mode = enable ? INTEL_GUCRC_FIRMWARE_CONTROL :
>> +				INTEL_GUCRC_HOST_CONTROL;
>> +	u32 action[] = {
>> +		INTEL_GUC_ACTION_SETUP_PC_GUCRC,
>> +		rc_mode
>> +	};
>> +	int ret;
>> +
>> +	ret = intel_guc_send(guc, action, ARRAY_SIZE(action));
>> +	if (ret)
> 
> since intel_guc_send() may return a non-zero value taken from the data0
> RESPONSE field, and assuming this action expects that field to be MBZ,
> this should be:
> 
> 	ret = ret > 0 ? -EPROTO : ret;
> 
> otherwise some static code analyzers might complain
> 
>> +		drm_err(drm, "Failed to set GUCRC mode(%d), err=%d\n",
> 
> you may want to print the error with %pe
> and move this message to __guc_rc_control, because of the above

Ok, done.
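
Roughly what I have in mind (just a sketch combining both suggestions; the
exact message wording and placement may differ in the next revision):

static int guc_action_control_gucrc(struct intel_guc *guc, bool enable)
{
	u32 rc_mode = enable ? INTEL_GUCRC_FIRMWARE_CONTROL :
		INTEL_GUCRC_HOST_CONTROL;
	u32 action[] = {
		INTEL_GUC_ACTION_SETUP_PC_GUCRC,
		rc_mode
	};
	int ret;

	ret = intel_guc_send(guc, action, ARRAY_SIZE(action));
	/* data0 of the response is expected to be zero */
	ret = ret > 0 ? -EPROTO : ret;

	return ret;
}

static int __guc_rc_control(struct intel_guc *guc, bool enable)
{
	struct intel_gt *gt = guc_to_gt(guc);
	struct drm_device *drm = &gt->i915->drm;
	int ret;

	/* supported/ready checks unchanged from the hunk below */

	ret = guc_action_control_gucrc(guc, enable);
	if (ret) {
		drm_err(drm, "Failed to %s GuC RC (%pe)\n",
			enabledisable(enable), ERR_PTR(ret));
		return ret;
	}

	drm_info(drm, "GuC RC %s\n", enableddisabled(enable));

	return 0;
}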

> 
>> +			rc_mode, ret);
>> +
>> +	return ret;
>> +}
>> +
>> +static int __guc_rc_control(struct intel_guc *guc, bool enable)
>> +{
>> +	struct intel_gt *gt = guc_to_gt(guc);
>> +	int ret;
>> +
>> +	if (!intel_uc_uses_guc_rc(&gt->uc))
>> +		return -ENOTSUPP;
>> +
>> +	if (!intel_guc_is_ready(guc))
>> +		return -EINVAL;
>> +
>> +	ret = guc_action_control_gucrc(guc, enable);
>> +	if (unlikely(ret))
> 
> 	drm_err(drm, "Failed to %s GuC RC mode (%pe)\n",
> 		enabledisable(enable), ERR_PTR(ret));
> 
>> +		return ret;
>> +
>> +	drm_info(&gt->i915->drm, "GuC RC %s\n",
>> +	         enableddisabled(enable));
>> +
>> +	return 0;
>> +}
>> +
>> +int intel_guc_rc_enable(struct intel_guc *guc)
>> +{
>> +	return __guc_rc_control(guc, true);
>> +}
>> +
>> +int intel_guc_rc_disable(struct intel_guc *guc)
>> +{
>> +	return __guc_rc_control(guc, false);
>> +}
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h
>> new file mode 100644
>> index 000000000000..169e60726e5b
>> --- /dev/null
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h
>> @@ -0,0 +1,32 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2020 Intel Corporation
> 
> 2021

Done.

> 
>> + */
>> +
>> +#ifndef _INTEL_GUC_RC_H_
>> +#define _INTEL_GUC_RC_H_
>> +
>> +#include <linux/types.h>
> 
> do you need this include here ?

guess not.

Thanks,
Vinay.
> 
> Michal
> 
>> +#include "intel_guc_submission.h"
>> +
>> +void intel_guc_rc_init_early(struct intel_guc *guc);
>> +
>> +static inline bool intel_guc_rc_is_supported(struct intel_guc *guc)
>> +{
>> +	return guc->rc_supported;
>> +}
>> +
>> +static inline bool intel_guc_rc_is_wanted(struct intel_guc *guc)
>> +{
>> +	return guc->submission_selected && intel_guc_rc_is_supported(guc);
>> +}
>> +
>> +static inline bool intel_guc_rc_is_used(struct intel_guc *guc)
>> +{
>> +	return intel_guc_submission_is_used(guc) && intel_guc_rc_is_wanted(guc);
>> +}
>> +
>> +int intel_guc_rc_enable(struct intel_guc *guc);
>> +int intel_guc_rc_disable(struct intel_guc *guc);
>> +
>> +#endif
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
>> index 38e465fd8a0c..29d8ad6d9087 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
>> @@ -7,6 +7,7 @@
>>   #define _INTEL_UC_H_
>>   
>>   #include "intel_guc.h"
>> +#include "intel_guc_rc.h"
>>   #include "intel_guc_submission.h"
>>   #include "intel_huc.h"
>>   #include "i915_params.h"
>> @@ -84,6 +85,7 @@ uc_state_checkers(guc, guc);
>>   uc_state_checkers(huc, huc);
>>   uc_state_checkers(guc, guc_submission);
>>   uc_state_checkers(guc, guc_slpc);
>> +uc_state_checkers(guc, guc_rc);
>>   
>>   #undef uc_state_checkers
>>   #undef __uc_state_checker
>>

* Re: [Intel-gfx] [PATCH 07/16] drm/i915/guc/slpc: Enable slpc and add related H2G events
  2021-07-15  1:58     ` Belgaumkar, Vinay
@ 2021-07-21 17:36       ` Michal Wajdeczko
  0 siblings, 0 replies; 53+ messages in thread
From: Michal Wajdeczko @ 2021-07-21 17:36 UTC (permalink / raw)
  To: Belgaumkar, Vinay, intel-gfx, dri-devel



On 15.07.2021 03:58, Belgaumkar, Vinay wrote:
> 
> 
> On 7/10/2021 10:37 AM, Michal Wajdeczko wrote:
>>
>>
>> On 10.07.2021 03:20, Vinay Belgaumkar wrote:
...
>>>   diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> index e2644a05f298..3e76d4d5f7bb 100644
>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> @@ -2321,10 +2321,6 @@ void intel_guc_submission_enable(struct
>>> intel_guc *guc)
>>>     void intel_guc_submission_disable(struct intel_guc *guc)
>>>   {
>>> -    struct intel_gt *gt = guc_to_gt(guc);
>>> -
>>> -    GEM_BUG_ON(gt->awake); /* GT should be parked first */
>>
>> if not mistake, can you explain why it was removed ?
> 
> This was part of a different commit. The BUG_ON in
> disable_guc_submission was added on the assumption that it would be
> called only during driver unload and hence would not hold any GT PM
> references. Since it now needs to be called from an error path during
> slpc enable, the BUG_ON is removed. Do we need this as a separate commit?

yes, please
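
(for context, presumably the problematic path is roughly the sketch below;
the function names are my guess at the flow, not taken verbatim from the
series:)

	ret = intel_guc_slpc_enable(&guc->slpc);
	if (unlikely(ret)) {
		/* GT is not parked here, so the old GEM_BUG_ON would fire */
		intel_guc_submission_disable(guc);
		return ret;
	}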

Michal


end of thread, other threads:[~2021-07-21 17:36 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
2021-07-10  1:20 [Intel-gfx] [PATCH 00/16] Enable GuC based power management features Vinay Belgaumkar
2021-07-10  1:20 ` [Intel-gfx] [PATCH 01/16] drm/i915/guc: Squashed patch - DO NOT REVIEW Vinay Belgaumkar
2021-07-10  1:20 ` [Intel-gfx] [PATCH 02/16] drm/i915/guc/slpc: Initial definitions for slpc Vinay Belgaumkar
2021-07-10 14:27   ` Michal Wajdeczko
2021-07-12 18:40     ` Belgaumkar, Vinay
2021-07-12 23:43     ` Belgaumkar, Vinay
2021-07-10  1:20 ` [Intel-gfx] [PATCH 03/16] drm/i915/guc/slpc: Gate Host RPS when slpc is enabled Vinay Belgaumkar
2021-07-10  1:20 ` [Intel-gfx] [PATCH 04/16] drm/i915/guc/slpc: Lay out slpc init/enable/disable/fini Vinay Belgaumkar
2021-07-10 14:35   ` Michal Wajdeczko
2021-07-13  0:37     ` Belgaumkar, Vinay
2021-07-10  1:20 ` [Intel-gfx] [PATCH 05/16] drm/i915/guc/slpc: Adding slpc communication interfaces Vinay Belgaumkar
2021-07-10 15:52   ` Michal Wajdeczko
2021-07-13 23:22     ` Belgaumkar, Vinay
2021-07-10  1:20 ` [Intel-gfx] [PATCH 06/16] drm/i915/guc/slpc: Allocate, initialize and release slpc Vinay Belgaumkar
2021-07-10 16:05   ` Michal Wajdeczko
2021-07-14  1:40     ` Belgaumkar, Vinay
2021-07-10  1:20 ` [Intel-gfx] [PATCH 07/16] drm/i915/guc/slpc: Enable slpc and add related H2G events Vinay Belgaumkar
2021-07-10 17:37   ` Michal Wajdeczko
2021-07-15  1:58     ` Belgaumkar, Vinay
2021-07-21 17:36       ` Michal Wajdeczko
2021-07-10  1:20 ` [Intel-gfx] [PATCH 08/16] drm/i915/guc/slpc: Add methods to set min/max frequency Vinay Belgaumkar
2021-07-10  3:07   ` kernel test robot
2021-07-10  5:17   ` kernel test robot
2021-07-10 17:47   ` Michal Wajdeczko
2021-07-16 18:00     ` Belgaumkar, Vinay
2021-07-10  1:20 ` [Intel-gfx] [PATCH 09/16] drm/i915/guc/slpc: Add get max/min freq hooks Vinay Belgaumkar
2021-07-10 17:52   ` Michal Wajdeczko
2021-07-20 22:08     ` Belgaumkar, Vinay
2021-07-10  1:20 ` [Intel-gfx] [PATCH 10/16] drm/i915/guc/slpc: Add debugfs for slpc info Vinay Belgaumkar
2021-07-10 18:08   ` Michal Wajdeczko
2021-07-20 23:00     ` Belgaumkar, Vinay
2021-07-10  1:20 ` [Intel-gfx] [PATCH 11/16] drm/i915/guc/slpc: Enable ARAT timer interrupt Vinay Belgaumkar
2021-07-10  1:20 ` [Intel-gfx] [PATCH 12/16] drm/i915/guc/slpc: Cache platform frequency limits for slpc Vinay Belgaumkar
2021-07-10 18:15   ` Michal Wajdeczko
2021-07-17 19:30     ` Belgaumkar, Vinay
2021-07-20 23:05     ` Belgaumkar, Vinay
2021-07-10  1:20 ` [Intel-gfx] [PATCH 13/16] drm/i915/guc/slpc: Update slpc to use platform min/max Vinay Belgaumkar
2021-07-10  1:20 ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc Vinay Belgaumkar
2021-07-10  6:18   ` kernel test robot
2021-07-10  7:30   ` kernel test robot
2021-07-10  7:30   ` [Intel-gfx] [RFC PATCH] drm/i915/guc/slpc: intel_rps_read_punit_req() can be static kernel test robot
2021-07-10 13:54   ` [Intel-gfx] [PATCH 14/16] drm/i915/guc/slpc: Sysfs hooks for slpc kernel test robot
2021-07-10 18:20   ` Michal Wajdeczko
2021-07-20 23:38     ` Belgaumkar, Vinay
2021-07-10  1:20 ` [Intel-gfx] [PATCH 15/16] drm/i915/guc/slpc: slpc selftest Vinay Belgaumkar
2021-07-10 18:29   ` Michal Wajdeczko
2021-07-21  1:06     ` Belgaumkar, Vinay
2021-07-10  1:20 ` [Intel-gfx] [PATCH 16/16] drm/i915/guc/rc: Setup and enable GUCRC feature Vinay Belgaumkar
2021-07-10 18:41   ` Michal Wajdeczko
2021-07-21  1:11     ` Belgaumkar, Vinay
2021-07-10  1:40 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Enable GuC based power management features Patchwork
2021-07-10  1:41 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-07-10  2:09 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
