From: "Dixit, Ashutosh" <ashutosh.dixit@intel.com>
To: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH] i915/pmu: Wire GuC backend to per-client busyness
Date: Wed, 24 Aug 2022 22:03:19 -0700
Message-ID: <87o7w8ykx4.wl-ashutosh.dixit@intel.com>
In-Reply-To: <20220804232125.211449-1-umesh.nerlige.ramappa@intel.com>

On Thu, 04 Aug 2022 16:21:25 -0700, Umesh Nerlige Ramappa wrote:
>

Hi Umesh,

Still reviewing but I have a question below.

> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> index 654a092ed3d6..e2d70a9fdac0 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context.c
> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> @@ -576,16 +576,24 @@ void intel_context_bind_parent_child(struct intel_context *parent,
>	child->parallel.parent = parent;
>  }
>
> -u64 intel_context_get_total_runtime_ns(const struct intel_context *ce)
> +u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
>  {
>	u64 total, active;
>
> +	if (ce->ops->update_stats)
> +		ce->ops->update_stats(ce);
> +

/snip/

> @@ -1396,6 +1399,10 @@ static void guc_timestamp_ping(struct work_struct *wrk)
>	with_intel_runtime_pm(&gt->i915->runtime_pm, wakeref)
>		__update_guc_busyness_stats(guc);
>
> +	/* adjust context stats for overflow */
> +	xa_for_each(&guc->context_lookup, index, ce)
> +		__guc_context_update_clks(ce);
> +

The question is why we have two functions: __guc_context_update_clks()
(which we call periodically from guc_timestamp_ping()) and
guc_context_update_stats() (which we call on demand from
intel_context_get_total_runtime_ns()). Why don't we have just one function
which is called from both places? Or rather, why don't we call
guc_context_update_stats() from both places?
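
For example (untested, just to illustrate the alternative I have in
mind), the ping worker could call the pinning-aware function directly:

	/* adjust context stats for overflow */
	xa_for_each(&guc->context_lookup, index, ce)
		guc_context_update_stats(ce);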

If we don't call guc_context_update_stats() periodically from
guc_timestamp_ping(), how does e.g. ce->stats.runtime.start_gt_clk get
reset to 0? If it gets reset to 0 in __guc_context_update_clks(), then why
do we need to reset it in guc_context_update_stats()?

Also, IMO guc->timestamp.lock should be taken by this single function
(otherwise guc_context_update_stats() modifies
ce->stats.runtime.start_gt_clk without taking the lock).
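
Something along these lines perhaps (untested, just a sketch of where I
think the lock should be taken; __guc_context_update_clks() would keep
taking the lock itself):

	static void guc_context_update_stats(struct intel_context *ce)
	{
		struct intel_guc *guc = ce_to_guc(ce);
		unsigned long flags;

		if (!intel_context_pin_if_active(ce)) {
			/* hold the lock across the reset of the stats too */
			spin_lock_irqsave(&guc->timestamp.lock, flags);
			WRITE_ONCE(ce->stats.runtime.start_gt_clk, 0);
			WRITE_ONCE(ce->stats.active, 0);
			spin_unlock_irqrestore(&guc->timestamp.lock, flags);
			return;
		}

		__guc_context_update_clks(ce);
		intel_context_unpin(ce);
	}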

Thanks.
--
Ashutosh

> +static void __guc_context_update_clks(struct intel_context *ce)
> +{
> +	struct intel_guc *guc = ce_to_guc(ce);
> +	struct intel_gt *gt = ce->engine->gt;
> +	u32 *pphwsp, last_switch, engine_id;
> +	u64 start_gt_clk, active;
> +	unsigned long flags;
> +	ktime_t unused;
> +
> +	spin_lock_irqsave(&guc->timestamp.lock, flags);
> +
> +	/*
> +	 * GPU updates ce->lrc_reg_state[CTX_TIMESTAMP] when context is switched
> +	 * out, however GuC updates PPHWSP offsets below. Hence KMD (CPU)
> +	 * relies on GuC and GPU for busyness calculations. Due to this, a
> +	 * potential race was highlighted in an earlier review that can lead to
> +	 * double accounting of busyness. While the solution to this is a wip,
> +	 * busyness is still usable for platforms running GuC submission.
> +	 */
> +	pphwsp = ((void *)ce->lrc_reg_state) - LRC_STATE_OFFSET;
> +	last_switch = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_STAMP_LO]);
> +	engine_id = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_ENGINE_ID]);
> +
> +	guc_update_pm_timestamp(guc, &unused);
> +
> +	if (engine_id != 0xffffffff && last_switch) {
> +		start_gt_clk = READ_ONCE(ce->stats.runtime.start_gt_clk);
> +		__extend_last_switch(guc, &start_gt_clk, last_switch);
> +		active = intel_gt_clock_interval_to_ns(gt, guc->timestamp.gt_stamp - start_gt_clk);
> +		WRITE_ONCE(ce->stats.runtime.start_gt_clk, start_gt_clk);
> +		WRITE_ONCE(ce->stats.active, active);
> +	} else {
> +		lrc_update_runtime(ce);
> +	}
> +
> +	spin_unlock_irqrestore(&guc->timestamp.lock, flags);
> +}
> +
> +static void guc_context_update_stats(struct intel_context *ce)
> +{
> +	if (!intel_context_pin_if_active(ce)) {
> +		WRITE_ONCE(ce->stats.runtime.start_gt_clk, 0);
> +		WRITE_ONCE(ce->stats.active, 0);
> +		return;
> +	}
> +
> +	__guc_context_update_clks(ce);
> +	intel_context_unpin(ce);
> +}
