From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>,
	Intel-gfx@lists.freedesktop.org,
	Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Subject: Re: [PATCH v4] drm/i915: Execlists small cleanups and micro-optimisations
Date: Mon, 29 Feb 2016 11:40:37 +0000	[thread overview]
Message-ID: <56D42E35.3010608@linux.intel.com> (raw)
In-Reply-To: <20160229111349.GA727@nuc-i3427.alporthouse.com>



On 29/02/16 11:13, Chris Wilson wrote:
> On Mon, Feb 29, 2016 at 11:01:49AM +0000, Tvrtko Ursulin wrote:
>>
>> On 29/02/16 10:53, Chris Wilson wrote:
>>> On Mon, Feb 29, 2016 at 10:45:34AM +0000, Tvrtko Ursulin wrote:
>>>> This ok?
>>>>
>>>> """
>>>> One unexplained result is with "gem_latency -n 0" (dispatching
>>>> empty batches) which shows 5% more throughput, 8% less CPU time,
>>>> 25% better producer and consumer latencies, but 15% higher
>>>> dispatch latency which looks like a possible measuring artifact.
>>>> """
>>>
>>> I doubt it is a measuring artefact since throughput = 1/(dispatch +
>>> latency + test overhead), and the dispatch latency here is larger than
>>> the wakeup latency and so has greater impact on throughput in this
>>> scenario.
>>
>> I don't follow you: if dispatch latency has the larger effect on
>> throughput, how do we explain it increasing while throughput still
>> improves?
>>
>> I see in gem_latency this block:
>>
>> 	measure_latency(p, &p->latency);
>> 	igt_stats_push(&p->dispatch, *p->last_timestamp - start);
>>
>> measure_latency waits for the batch to complete, and then the dispatch
>> latency uses p->last_timestamp, which is something written by the GPU
>> rather than a CPU view of the latency?
>
> Exactly, measurements are entirely made from the running engine clock
> (which is a ~80ns clock and should be verified during init). The register
> is read before dispatch, inside the batch and then at wakeup, but the
> information is presented as dispatch = batch - start and
> wakeup = end - batch, so to get the duration (end - start) we need
> to add the two together. Throughput will also include some overhead from
> the test iteration (that will mainly be scheduler interference).
>
> My comment about dispatch having a greater effect is in terms of
> its higher absolute value (so the relative % means a larger change wrt
> throughput).
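
To make that arithmetic concrete, here is a rough C sketch of the timing
scheme as described above. The names (struct sample, account) are
illustrative placeholders only, not the actual gem_latency code; the point
is just how the three samples from the same ~80ns engine clock combine:

	/* Illustrative sketch only -- not the gem_latency implementation. */
	#include <stdint.h>

	struct sample {
		uint64_t start;  /* engine clock sampled on the CPU just before dispatch */
		uint64_t batch;  /* engine clock written by the (empty) batch itself      */
		uint64_t end;    /* engine clock sampled at wakeup                        */
	};

	static void account(const struct sample *s, uint64_t *dispatch,
			    uint64_t *wakeup, uint64_t *total)
	{
		*dispatch = s->batch - s->start; /* time to get the batch running   */
		*wakeup   = s->end - s->batch;   /* batch completion to CPU wakeup  */
		*total    = *dispatch + *wakeup; /* i.e. end - start                */
		/*
		 * Per-iteration throughput is then roughly
		 * 1 / (*total + per-iteration test overhead).
		 */
	}

Since dispatch is the larger absolute component in this scenario, a relative
change in it has the bigger effect on throughput, which is what the comment
above is getting at.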

Change to this then?

"""
     One unexplained result is with "gem_latency -n 0" (dispatching
     empty batches) which shows 5% more throughput, 8% less CPU time,
     25% better producer and consumer latencies, but 15% higher
     dispatch latency which looks like an amplified effect of test
     overhead.
"""

Regards,

Tvrtko


Thread overview: 19+ messages
2016-02-26 15:37 [PATCH] drm/i915: Execlists small cleanups and micro-optimisations Tvrtko Ursulin
2016-02-26 16:03 ` ✗ Fi.CI.BAT: failure for " Patchwork
2016-02-26 16:36 ` [PATCH] " Chris Wilson
2016-02-26 16:58   ` [PATCH v4] " Tvrtko Ursulin
2016-02-26 20:24     ` Chris Wilson
2016-02-29 10:45       ` Tvrtko Ursulin
2016-02-29 10:53         ` Chris Wilson
2016-02-29 11:01           ` Tvrtko Ursulin
2016-02-29 11:13             ` Chris Wilson
2016-02-29 11:40               ` Tvrtko Ursulin [this message]
2016-02-29 11:48                 ` Chris Wilson
2016-02-29 11:59                   ` Tvrtko Ursulin
2016-03-01 10:21                     ` Tvrtko Ursulin
2016-03-01 10:32                       ` Chris Wilson
2016-03-01 10:41                         ` Tvrtko Ursulin
2016-03-01 10:57                           ` Chris Wilson
2016-03-01 10:32                     ` Chris Wilson
2016-02-26 17:27 ` ✗ Fi.CI.BAT: failure for drm/i915: Execlists small cleanups and micro-optimisations (rev2) Patchwork
2016-02-29 10:16   ` Tvrtko Ursulin
