Date: Thu, 25 Aug 2022 14:12:11 -0700
Message-ID: <87pmgojadw.wl-ashutosh.dixit@intel.com>
From: "Dixit, Ashutosh"
To: Umesh Nerlige Ramappa
Cc: intel-gfx@lists.freedesktop.org
In-Reply-To: <87o7w8ykx4.wl-ashutosh.dixit@intel.com>
References: <20220804232125.211449-1-umesh.nerlige.ramappa@intel.com>
	<87o7w8ykx4.wl-ashutosh.dixit@intel.com>
Subject: Re: [Intel-gfx] [PATCH] i915/pmu: Wire GuC backend to per-client busyness

On Wed, 24 Aug 2022 22:03:19 -0700, Dixit, Ashutosh wrote:
>
> On Thu, 04 Aug 2022 16:21:25 -0700, Umesh Nerlige Ramappa wrote:
> >
>
> Hi Umesh,
>
> Still reviewing, but I have a question below.

Please ignore this mail for now; it is mostly the result of my
misunderstanding the code. I will ask again if I have any questions.
Thanks.
>
> > diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> > index 654a092ed3d6..e2d70a9fdac0 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_context.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> > @@ -576,16 +576,24 @@ void intel_context_bind_parent_child(struct intel_context *parent,
> >  	child->parallel.parent = parent;
> >  }
> >
> > -u64 intel_context_get_total_runtime_ns(const struct intel_context *ce)
> > +u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
> >  {
> >  	u64 total, active;
> >
> > +	if (ce->ops->update_stats)
> > +		ce->ops->update_stats(ce);
> > +
>
> /snip/
>
> > @@ -1396,6 +1399,10 @@ static void guc_timestamp_ping(struct work_struct *wrk)
> >  	with_intel_runtime_pm(&gt->i915->runtime_pm, wakeref)
> >  		__update_guc_busyness_stats(guc);
> >
> > +	/* adjust context stats for overflow */
> > +	xa_for_each(&guc->context_lookup, index, ce)
> > +		__guc_context_update_clks(ce);
> > +
>
> The question is why we have two functions: __guc_context_update_clks()
> (which we call periodically from guc_timestamp_ping()) and
> guc_context_update_stats() (which we call non-periodically from
> intel_context_get_total_runtime_ns()). Why don't we have just one
> function which is called from both places? Or rather, why don't we call
> guc_context_update_stats() from both places?
>
> If we don't call guc_context_update_stats() periodically from
> guc_timestamp_ping(), how does e.g. ce->stats.runtime.start_gt_clk get
> reset to 0? If it gets reset to 0 in __guc_context_update_clks(), why
> do we need to reset it in guc_context_update_stats()?
>
> Also, IMO guc->timestamp.lock should be taken by this single function
> (otherwise guc_context_update_stats() is modifying
> ce->stats.runtime.start_gt_clk without taking the lock).
>
> Thanks.
> --
> Ashutosh

(For reference, a sketch of the single function I have in mind is at
the bottom of this mail.)

> > +static void __guc_context_update_clks(struct intel_context *ce)
> > +{
> > +	struct intel_guc *guc = ce_to_guc(ce);
> > +	struct intel_gt *gt = ce->engine->gt;
> > +	u32 *pphwsp, last_switch, engine_id;
> > +	u64 start_gt_clk, active;
> > +	unsigned long flags;
> > +	ktime_t unused;
> > +
> > +	spin_lock_irqsave(&guc->timestamp.lock, flags);
> > +
> > +	/*
> > +	 * GPU updates ce->lrc_reg_state[CTX_TIMESTAMP] when context is switched
> > +	 * out, however GuC updates PPHWSP offsets below. Hence KMD (CPU)
> > +	 * relies on GuC and GPU for busyness calculations. Due to this, a
> > +	 * potential race was highlighted in an earlier review that can lead to
> > +	 * double accounting of busyness. While the solution to this is a WIP,
> > +	 * busyness is still usable for platforms running GuC submission.
> > +	 */
> > +	pphwsp = ((void *)ce->lrc_reg_state) - LRC_STATE_OFFSET;
> > +	last_switch = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_STAMP_LO]);
> > +	engine_id = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_ENGINE_ID]);
> > +
> > +	guc_update_pm_timestamp(guc, &unused);
> > +
> > +	if (engine_id != 0xffffffff && last_switch) {
> > +		start_gt_clk = READ_ONCE(ce->stats.runtime.start_gt_clk);
> > +		__extend_last_switch(guc, &start_gt_clk, last_switch);
> > +		active = intel_gt_clock_interval_to_ns(gt, guc->timestamp.gt_stamp - start_gt_clk);
> > +		WRITE_ONCE(ce->stats.runtime.start_gt_clk, start_gt_clk);
> > +		WRITE_ONCE(ce->stats.active, active);
> > +	} else {
> > +		lrc_update_runtime(ce);
> > +	}
> > +
> > +	spin_unlock_irqrestore(&guc->timestamp.lock, flags);
> > +}
> > +
> > +static void guc_context_update_stats(struct intel_context *ce)
> > +{
> > +	if (!intel_context_pin_if_active(ce)) {
> > +		WRITE_ONCE(ce->stats.runtime.start_gt_clk, 0);
> > +		WRITE_ONCE(ce->stats.active, 0);
> > +		return;
> > +	}
> > +
> > +	__guc_context_update_clks(ce);
> > +	intel_context_unpin(ce);
> > +}
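Purely for reference, and please treat this as an untested sketch based
only on the hunks quoted above (not a concrete proposal): the
single-function shape I had in mind is roughly the following, where the
reset path also runs under guc->timestamp.lock:

static void guc_context_update_stats(struct intel_context *ce)
{
	struct intel_guc *guc = ce_to_guc(ce);
	unsigned long flags;

	if (!intel_context_pin_if_active(ce)) {
		/*
		 * Context not pinned: reset the cached stats, but take
		 * guc->timestamp.lock so this cannot race with the ping
		 * worker updating start_gt_clk for the same context.
		 */
		spin_lock_irqsave(&guc->timestamp.lock, flags);
		WRITE_ONCE(ce->stats.runtime.start_gt_clk, 0);
		WRITE_ONCE(ce->stats.active, 0);
		spin_unlock_irqrestore(&guc->timestamp.lock, flags);
		return;
	}

	/* Pinned: __guc_context_update_clks() takes the lock itself. */
	__guc_context_update_clks(ce);
	intel_context_unpin(ce);
}

guc_timestamp_ping() would then call guc_context_update_stats() from
its xa_for_each() loop instead of __guc_context_update_clks(), so the
periodic and on-demand paths would share the same pinning and locking
logic.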