From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, Intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness
Date: Wed, 11 Mar 2020 10:17:21 +0000	[thread overview]
Message-ID: <9d415063-3295-d788-9a39-6677133b0118@linux.intel.com> (raw)
In-Reply-To: <158387113173.28297.4263001679670929491@build.alporthouse.com>


On 10/03/2020 20:12, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-10 20:04:23)
>>
>> On 10/03/2020 18:32, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2020-03-09 18:31:25)
>>>> +static ssize_t
>>>> +show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
>>>> +{
>>>> +       struct i915_engine_busy_attribute *i915_attr =
>>>> +               container_of(attr, typeof(*i915_attr), attr);
>>>> +       unsigned int class = i915_attr->engine_class;
>>>> +       struct i915_drm_client *client = i915_attr->client;
>>>> +       u64 total = atomic64_read(&client->past_runtime[class]);
>>>> +       struct list_head *list = &client->ctx_list;
>>>> +       struct i915_gem_context *ctx;
>>>> +
>>>> +       rcu_read_lock();
>>>> +       list_for_each_entry_rcu(ctx, list, client_link) {
>>>> +               total += atomic64_read(&ctx->past_runtime[class]);
>>>> +               total += pphwsp_busy_add(ctx, class);
>>>> +       }
>>>> +       rcu_read_unlock();
>>>> +
>>>> +       total *= RUNTIME_INFO(i915_attr->i915)->cs_timestamp_period_ns;
>>>
>>> Planning early retirement? In 600 years, they'll have forgotten how to
>>> email ;)
>>
>> Shruggety shrug. :) I am guessing you would prefer both internal
>> representations (sw and pphwsp runtimes) to be consistently in
>> nanoseconds? I thought why multiply at various places when once at the
>> readout time is enough.
> 
> It's fine. I was just double checking overflow, and then remembered the
> end result is 64b nanoseconds.
> 
> Keep the internal representation convenient for accumulation, and the
> conversion at the boundary.
>   
>> And I should mention again how I am not sure at the moment how to meld
>> the two stats into one more "perfect" output.
> 
> One of the things that crossed my mind was wondering if it was possible
> to throw in a pulse before reading the stats (if active etc). Usual
> dilemma with non-preemptible contexts, so probably not worth it as those
> hogs will remain hogs.
> 
> And I worry about the disparity between sw busy and hw runtime.

How about I stop tracking accumulated sw runtime and use sw tracking only 
for the active portion? So the report becomes hw runtime + sw active 
runtime; in other words sw tracking only covers the portion between 
context_in and context_out. Sounds worth a try.
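
Rough stand-alone sketch of what I have in mind (a model only, not actual 
i915 code; the field and helper names like hw_runtime_ticks and 
sw_active_start, and the 83 ns timestamp period, are made up for 
illustration):

/* Stand-alone model of the proposed accounting, not actual i915 code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Example CS timestamp period, in nanoseconds per tick. */
#define CS_TIMESTAMP_PERIOD_NS 83

struct ctx_stats {
	uint64_t hw_runtime_ticks;	/* accumulated pphwsp runtime (ticks) */
	uint64_t sw_active_start;	/* timestamp of last context_in (ticks) */
	bool active;			/* between context_in and context_out? */
};

/* context_in: remember when the context went on hardware. */
static void context_in(struct ctx_stats *s, uint64_t now_ticks)
{
	s->sw_active_start = now_ticks;
	s->active = true;
}

/* context_out: fold the completed interval into the hw total. */
static void context_out(struct ctx_stats *s, uint64_t now_ticks)
{
	s->hw_runtime_ticks += now_ticks - s->sw_active_start;
	s->active = false;
}

/*
 * Readout: accumulated hw runtime plus, if currently active, the sw
 * tracked portion since context_in. Ticks are converted to nanoseconds
 * only here, at the boundary.
 */
static uint64_t busy_ns(const struct ctx_stats *s, uint64_t now_ticks)
{
	uint64_t total = s->hw_runtime_ticks;

	if (s->active)
		total += now_ticks - s->sw_active_start;

	return total * CS_TIMESTAMP_PERIOD_NS;
}

int main(void)
{
	struct ctx_stats s = {0};

	context_in(&s, 1000);
	context_out(&s, 5000);	/* 4000 ticks spent on hardware */
	context_in(&s, 9000);	/* still running at readout time */

	/* (4000 + 1000) ticks converted to ns at the boundary. */
	printf("busy: %llu ns\n", (unsigned long long)busy_ns(&s, 10000));

	return 0;
}

That way the ns conversion stays at the readout boundary and the sw side 
only ever contributes the interval of the currently running context.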

Regards,

Tvrtko

Thread overview: 41+ messages
2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2020-03-09 21:34   ` Chris Wilson
2020-03-09 23:26     ` Tvrtko Ursulin
2020-03-10  0:13       ` Chris Wilson
2020-03-10  8:44         ` Tvrtko Ursulin
2020-03-10 11:41   ` Chris Wilson
2020-03-10 12:04     ` Tvrtko Ursulin
2020-03-10 17:59   ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 02/12] drm/i915: Update client name on context create Tvrtko Ursulin
2020-03-10 18:11   ` Chris Wilson
2020-03-10 19:52     ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 03/12] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
2020-03-10 18:20   ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 04/12] drm/i915: Use explicit flag to mark unreachable intel_context Tvrtko Ursulin
2020-03-10 15:30   ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 05/12] drm/i915: Track runtime spent in unreachable intel_contexts Tvrtko Ursulin
2020-03-10 18:25   ` Chris Wilson
2020-03-10 20:00     ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 06/12] drm/i915: Track runtime spent in closed GEM contexts Tvrtko Ursulin
2020-03-10 18:28   ` Chris Wilson
2020-03-10 20:01     ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 07/12] drm/i915: Track all user contexts per client Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2020-03-10 18:32   ` Chris Wilson
2020-03-10 20:04     ` Tvrtko Ursulin
2020-03-10 20:12       ` Chris Wilson
2020-03-11 10:17         ` Tvrtko Ursulin [this message]
2020-03-09 18:31 ` [Intel-gfx] [RFC 09/12] drm/i915: Track per-context engine busyness Tvrtko Ursulin
2020-03-10 18:36   ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 10/12] drm/i915: Carry over past software tracked context runtime Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 11/12] drm/i915: Prefer software tracked context busyness Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 12/12] compare runtimes Tvrtko Ursulin
2020-03-09 19:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness (rev5) Patchwork
2020-03-09 19:13 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-03-09 22:02 ` [Intel-gfx] [RFC 00/12] Per client engine busyness Chris Wilson
2020-03-09 23:30   ` Tvrtko Ursulin
2020-03-10 15:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for Per client engine busyness (rev5) Patchwork
2020-03-10 15:11 ` [Intel-gfx] ✗ Fi.CI.BUILD: warning " Patchwork
2020-03-10 15:19 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2020-03-10 15:19 ` [Intel-gfx] ✗ Fi.CI.BUILD: warning " Patchwork
