Date: Wed, 31 Aug 2022 13:25:11 -0700
Message-ID: <87wnaorwig.wl-ashutosh.dixit@intel.com>
From: "Dixit, Ashutosh"
To: Umesh Nerlige Ramappa
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH] i915/pmu: Wire GuC backend to per-client busyness
References: <20220804232125.211449-1-umesh.nerlige.ramappa@intel.com>
	<87lerbkcbx.wl-ashutosh.dixit@intel.com>

On Fri, 26 Aug 2022 09:33:08 -0700, Umesh Nerlige Ramappa wrote:
>

Hi Umesh,

Just to communicate my thoughts I have posted this patch on top of your
patch:

[1] https://patchwork.freedesktop.org/series/107983/

Could you please take a look at that and see if it makes sense.

> On Thu, Aug 25, 2022 at 06:44:50PM -0700, Dixit, Ashutosh wrote:
> > On Thu, 04 Aug 2022 16:21:25 -0700, Umesh Nerlige Ramappa wrote:
> >
> > Hi Umesh, I am fairly new to this code so some of the questions below
> > will be newbie questions, thanks for bearing with me.
> >
> >> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> >> index 654a092ed3d6..e2d70a9fdac0 100644
> >> --- a/drivers/gpu/drm/i915/gt/intel_context.c
> >> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> >> @@ -576,16 +576,24 @@ void intel_context_bind_parent_child(struct intel_context *parent,
> >>  	child->parallel.parent = parent;
> >>  }
> >>
> >> -u64 intel_context_get_total_runtime_ns(const struct intel_context *ce)
> >> +u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
> >>  {
> >>  	u64 total, active;
> >>
> >> +	if (ce->ops->update_stats)
> >> +		ce->ops->update_stats(ce);
> >> +
> >>  	total = ce->stats.runtime.total;
> >>  	if (ce->ops->flags & COPS_RUNTIME_CYCLES)
> >>  		total *= ce->engine->gt->clock_period_ns;
> >>
> >>  	active = READ_ONCE(ce->stats.active);
> >> -	if (active)
> >> +	/*
> >> +	 * When COPS_RUNTIME_ACTIVE_TOTAL is set for ce->cops, the backend
> >> +	 * already provides the total active time of the context, so skip this
> >> +	 * calculation when this flag is set.
> >> +	 */
> >> +	if (active && !(ce->ops->flags & COPS_RUNTIME_ACTIVE_TOTAL))
> >>  		active = intel_context_clock() - active;
> >>
> >>  	return total + active;
> >
> > /snip/
> >
> >> @@ -1396,6 +1399,10 @@ static void guc_timestamp_ping(struct work_struct *wrk)
> >>  	with_intel_runtime_pm(&gt->i915->runtime_pm, wakeref)
> >>  		__update_guc_busyness_stats(guc);
> >>
> >> +	/* adjust context stats for overflow */
> >> +	xa_for_each(&guc->context_lookup, index, ce)
> >> +		__guc_context_update_clks(ce);
> >
> > What is the reason for calling __guc_context_update_clks() periodically
> > from guc_timestamp_ping() since it appears we should just be able to call
> > __guc_context_update_clks() from intel_context_get_total_runtime_ns() to
> > update 'active'? Is the reason for calling __guc_context_update_clks()
> > periodically that the calculations in __guc_context_update_clks() become
> > invalid if the counters overflow?
> Correct, these are 32-bit counters and the worker just tracks overflow.

OK.

> >
> >> +
> >>  	intel_gt_reset_unlock(gt, srcu);
> >>
> >>  	mod_delayed_work(system_highpri_wq, &guc->timestamp.work,
> >> @@ -1469,6 +1476,56 @@ void intel_guc_busyness_unpark(struct intel_gt *gt)
> >>  		guc->timestamp.ping_delay);
> >>  }
> >>
> >> +static void __guc_context_update_clks(struct intel_context *ce)
> >> +{
> >> +	struct intel_guc *guc = ce_to_guc(ce);
> >> +	struct intel_gt *gt = ce->engine->gt;
> >> +	u32 *pphwsp, last_switch, engine_id;
> >> +	u64 start_gt_clk, active;
> >> +	unsigned long flags;
> >> +	ktime_t unused;
> >> +
> >> +	spin_lock_irqsave(&guc->timestamp.lock, flags);
> >> +
> >> +	/*
> >> +	 * GPU updates ce->lrc_reg_state[CTX_TIMESTAMP] when context is switched
> >> +	 * out, however GuC updates PPHWSP offsets below. Hence KMD (CPU)
> >> +	 * relies on GuC and GPU for busyness calculations. Due to this, a
> >> +	 * potential race was highlighted in an earlier review that can lead to
> >> +	 * double accounting of busyness. While the solution to this is a WIP,
> >> +	 * busyness is still usable for platforms running GuC submission.
> >> +	 */
> >> +	pphwsp = ((void *)ce->lrc_reg_state) - LRC_STATE_OFFSET;
> >> +	last_switch = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_STAMP_LO]);
> >> +	engine_id = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_ENGINE_ID]);
> >> +
> >> +	guc_update_pm_timestamp(guc, &unused);
> >> +
> >> +	if (engine_id != 0xffffffff && last_switch) {
> >> +		start_gt_clk = READ_ONCE(ce->stats.runtime.start_gt_clk);
> >> +		__extend_last_switch(guc, &start_gt_clk, last_switch);
> >> +		active = intel_gt_clock_interval_to_ns(gt, guc->timestamp.gt_stamp - start_gt_clk);
> >> +		WRITE_ONCE(ce->stats.runtime.start_gt_clk, start_gt_clk);
> >> +		WRITE_ONCE(ce->stats.active, active);
> >
> > Should not need WRITE_ONCE to update regular memory. Not even sure we
> > need READ_ONCE above.
>
> Not sure, I checked what they do.
> I was thinking these are needed for the memory ordering (as in be sure
> that start_gt_clk is updated before active).

As long as our operations are done under the correct locks we don't have to
worry about memory ordering. That is one of the reasons I am doing
everything under the spinlock in [1].

> >
> >> +	} else {
> >> +		lrc_update_runtime(ce);
> >
> > As was being discussed, should not need this here in this function. See
> > below too.
>
> In short, I added this here so that a query for busyness following idle
> can be obtained immediately. For the GuC backend, the context is unpinned
> after disabling scheduling on that context, and that is asynchronous.
> Also, if there are more requests on that context, scheduling may not be
> disabled and the unpin may not happen, so the updated runtime would only
> be seen much later.
>
> It is still safe to call from here because we know that the context is
> not active and has switched out. If it did switch in while we were
> reading this, that's still fine; we would only report the value stored
> in the context image.

Agreed, but in [1] I have made this unconditional; not sure if you will
agree or see problems with that.

> >
> >> +	}
> >> +
> >> +	spin_unlock_irqrestore(&guc->timestamp.lock, flags);
> >> +}
> >> +
> >> +static void guc_context_update_stats(struct intel_context *ce)
> >> +{
> >> +	if (!intel_context_pin_if_active(ce)) {
> >> +		WRITE_ONCE(ce->stats.runtime.start_gt_clk, 0);
> >> +		WRITE_ONCE(ce->stats.active, 0);
> >
> > Why do these need to be initialized to 0? Looks like the calculations in
> > __guc_context_update_clks() will work even if we don't do this? Also I
> > didn't follow the 'if (!intel_context_pin_if_active(ce))' check.
>
> __guc_context_update_clks accesses the context image, so we need to make
> sure it's pinned. pin_if_active will not sleep/wait, so we can use it in
> this path.

I have added pinning in [1].

> If the context is not active, then we update the active stats to 0.
In [1] 'active' is just a local variable and I don't touch ce->stats.active
at all.

> >> +		return;
> >> +	}
> >> +
> >> +	__guc_context_update_clks(ce);
> >> +	intel_context_unpin(ce);
> >> +}
> >> +
> >>  static inline bool
> >>  submission_disabled(struct intel_guc *guc)
> >>  {
> >> @@ -2723,6 +2780,7 @@ static void guc_context_unpin(struct intel_context *ce)
> >>  {
> >>  	struct intel_guc *guc = ce_to_guc(ce);
> >>
> >> +	lrc_update_runtime(ce);
> >
> > How about moving this into lrc_unpin() since that gets called from all
> > GuC context types (parent/child/virtual)?
>
> Looks like lrc_unpin is called from the context_unpin path.
>
> Same as above: for GuC, context_unpin is an async operation and may not
> happen if there are multiple requests in the queue.

In [1] I have left lrc_update_runtime in guc_context_unpin but changed it
to lrc_update_runtime_locked.

Thanks.
--
Ashutosh