From: Chris Wilson <chris@chris-wilson.co.uk>
To: Dave Airlie <airlied@gmail.com>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>
Subject: Re: [Intel-gfx] Time, where did it go?
Date: Fri, 07 Aug 2020 08:12:36 +0100
Message-ID: <159678435631.14655.6966712365882745877@build.alporthouse.com>
In-Reply-To: <CAPM=9twb2jhWhwvD3HWjG04ihxnYv+EgJ0rQPwL_aHSjJn-NNQ@mail.gmail.com>

Quoting Dave Airlie (2020-08-04 22:45:25)
> On Mon, 3 Aug 2020 at 05:36, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> >
> > Quoting Dave Airlie (2020-08-02 18:56:44)
> > > On Mon, 3 Aug 2020 at 02:44, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > > >
> > > > Lots of small incremental improvements to reduce execution latency
> > > > which basically offsets the small regressions incurred when compared to
> > > > 5.7. And then there are some major fixes found while staring agape at
> > > > lockstat.
> > >
> > > What introduced the 5.7 regressions? Are they documented somewhere?
> >
> > No. There's a 5.8-rc1 bisect (to the merge, but not into rc1) for
> > something in the core causing perf fluctuations, but I have not yet
> > reproduced that one to bisect within the rc1 merge. [The system that
> > showed the issue has historically seen strong swings from p-state setup;
> > it might be that again.] This comes from measuring simulated transcode
> > workloads that we've built up to track KPIs, which we can then compare
> > against the real workloads run by other groups.
> >
> > > What is the goal here? Is there a benchmark or application that this
> > > benefits, and can you quantify the benefits?
> >
> > Entirely motivated by not wanting to have to explain why there's even a
> > 1% regression in their client metrics. They wouldn't even notice for a
> > few releases, by which point the problem has likely compounded and we
> > suddenly have crisis meetings.
> >
> > > Is the lack of userspace command submission a problem vs other vendors here?
> >
> > If you mean HW scheduling (which is the bit we are most in dire need of
> > for replacing this series), not really; our closest equivalent has not
> > yet, at least in previous incarnations, proven itself adequate to their
> > requirements.
> 
> I don't think this sort of thing is acceptable for upstream. This is
> the platform problem going crazy.
> Something regresses in the kernel core, and you refactor the i915
> driver to get horribly more complicated to avoid fixing the core
> kernel regressions?

Far from it. We are removing the complication we added in order to submit
to the HW from two places, and only allowing submission from one, with the
resulting simplification and removal of the associated locking.
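
As a purely illustrative sketch of that shape (plain C, hypothetical names,
not the actual i915 code): before, both process context and the tasklet
wrote to the HW submit port and so had to serialise on the engine lock;
afterwards, process context only queues the request and kicks the tasklet,
leaving a single submitter and no cross-context locking on the submit path.

struct request;                         /* opaque stand-in for i915_request */

struct engine {
        struct request *queue;          /* pending requests, FIFO */
        int tasklet_pending;            /* stand-in for the tasklet state */
};

static void queue_request(struct engine *e, struct request *rq)
{
        /* append rq to e->queue under a small, engine-local queue lock */
        (void)e; (void)rq;
}

static void kick_tasklet(struct engine *e)
{
        e->tasklet_pending = 1;         /* stand-in for tasklet_hi_schedule() */
}

/* Process context: queues and kicks, never touches the HW directly. */
void submit_request(struct engine *e, struct request *rq)
{
        queue_request(e, rq);
        kick_tasklet(e);
}

/* Tasklet: now the only path that writes to the HW submit port. */
void submission_tasklet(struct engine *e)
{
        /* dequeue from e->queue and program the hardware here */
        (void)e;
}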

The impact on v5.7 can be seen as bimodal distributions, like

+mB--------------------------------------------------------------------+
|                                                                b     |
|                                                                bbb   |
|                                                               bbbb   |
|                                                               bbbbb  |
|                                                             b bbbbb  |
|                                                             b bbbbb  |
|   a                                                 b   b   b bbbbb  |
|   aa                                                b  bb   bbbbbbb  |
|  aaa                                                b  bb  bbbbbbbbb |
|  aaa                                            b   bb bbbbbbbbbbbbb |
|a aaaa                                           b b bbbbbbbbbbbbbbbbb|
|   A_|                                                          MA_|  |
+----------------------------------------------------------------------+
    N                Min           Max        Median           Avg        Stddev
a  16( 12)      0.879662      0.889174     0.8855545    0.88558567  0.0010029988
b 224(159)      0.971574       1.00869     0.9999995     1.0005503  0.0018680506

where it took a substantial number of runs to settle at the level of
previous kernels. That pattern repeats across many different simulations.

With v5.8, the slow start is gone again
+mB--------------------------------------------------------------------+
|                                                 bbba.                |
|                                               bbb..a. aaa            |
|                                             b bb.......aabaab        |
|                                             b .b.......a.....        |
|                                      b aa  bbb...............bb      |
|            a           a a b   b  ba b aab bbb...............bb      |
|            aa     a    a aa.   b  b.bbaa.ba.b.................b  b   |
|b.a   a     .aa .  abbb ..aa.b a. ab..b...b..b....................b .b|
|                                                  |_MA____|           |
|                                               |___A___|              |
+----------------------------------------------------------------------+
    N                Min           Max        Median           Avg        Stddev
a 224(159)      0.971574       1.00869     0.9999995     1.0005503  0.0018680506
b 240(155)      0.971079       1.00927      0.999126    0.99905576  0.0016999716
Difference with 100.0% confidence: -0.00149458 (-0.15%)

As for the impact of shaving an average of 0.4us from the submission
paths?

+mB--------------------------------------------------------------------+
|                                   d                                  |
|                                   -                                  |
|                                   +.                                 |
|                                  d+. c  a                            |
|                                ..d+- c  a                           b|
|b              b      b   ab ab.-.-++-.-a.aa d         b    b        b|
|                                 |__MA____|                           |
|                                   A_|                                |
|                                   A_|                                |
|                                   A|                                 |
+----------------------------------------------------------------------+
                    Min           Max        Median           Avg        Stddev
1 client           -4.01           3.8         0.245         0.822     1.6734194
2 clients         -15.88          15.4          0.08    0.21222222    0.36365429
4 clients          -1.68          1.36         0.045    0.12916667    0.28137514
8 clients          -1.49          4.49         0.055         0.043   0.045227818
(normalized, %)

And these simulations do not even touch upon the implementation bugs that
cause quadratic worst-case complexity in what should be a linear algorithm.
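
To give a feel for the class of bug (a hypothetical, self-contained
illustration in plain C, not the i915 code itself): a walk over the
dependency graph that should be linear in nodes plus edges becomes
quadratic if duplicate suppression is done by rescanning the list
collected so far for every candidate; a per-node flag restores the
linear bound.

#include <stdbool.h>
#include <stddef.h>

struct node {
        struct node **signalers;        /* nodes this one depends on */
        size_t nr_signalers;
        bool queued;                    /* the fix: O(1) duplicate check */
};

/* Quadratic duplicate check: scan everything collected so far. */
static bool on_list_slow(struct node **list, size_t count, struct node *n)
{
        for (size_t i = 0; i < count; i++)
                if (list[i] == n)
                        return true;
        return false;
}

/* Collect the transitive signalers of root into list[], at most max. */
size_t collect_signalers(struct node *root, struct node **list, size_t max)
{
        size_t count = 0, pos = 0;

        if (count < max)
                list[count++] = root;

        while (pos < count) {
                struct node *n = list[pos++];

                for (size_t i = 0; i < n->nr_signalers; i++) {
                        struct node *s = n->signalers[i];

                        /* Linear version: if (s->queued) continue;
                         * s->queued = true; */
                        if (on_list_slow(list, count, s))
                                continue;
                        if (count < max)
                                list[count++] = s;
                }
        }

        return count;
}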

> This has to stop, if Intel can't stop it internally, i.e. the GEM
> kernel team hasn't got the sort of power, then it has to stop
> upstream.
> 
> This is a hard NAK for this sort of refactoring, now and in the future.

Your assessment of the nature of the changes is flawed.
-Chris

Thread overview: 52+ messages
2020-08-02 16:43 [Intel-gfx] Time, where did it go? Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 01/42] drm/i915: Fix wrong return value Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 02/42] drm/i915/gem: Don't drop the timeline lock during execbuf Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 03/42] drm/i915/gem: Reduce context termination list iteration guard to RCU Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 04/42] drm/i915/gt: Protect context lifetime with RCU Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 05/42] drm/i915/gt: Free stale request on destroying the virtual engine Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 06/42] drm/i915/gt: Track signaled breadcrumbs outside of the breadcrumb spinlock Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 07/42] drm/i915/gt: Split the breadcrumb spinlock between global and contexts Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 08/42] drm/i915: Drop i915_request.lock serialisation around await_start Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 09/42] drm/i915: Drop i915_request.lock requirement for intel_rps_boost() Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 10/42] drm/i915/gem: Reduce ctx->engine_mutex for reading the clone source Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 11/42] drm/i915/gem: Reduce ctx->engines_mutex for get_engines() Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 12/42] drm/i915: Reduce test_and_set_bit to set_bit in i915_request_submit() Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 13/42] drm/i915/gt: Decouple completed requests on unwind Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 14/42] drm/i915/gt: Check for a completed last request once Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 15/42] drm/i915/gt: Refactor heartbeat request construction and submission Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 16/42] drm/i915/gt: Replace direct submit with direct call to tasklet Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 17/42] drm/i915/gt: Use virtual_engine during execlists_dequeue Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 18/42] drm/i915/gt: Decouple inflight virtual engines Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 19/42] drm/i915/gt: Defer schedule_out until after the next dequeue Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 20/42] drm/i915/gt: Resubmit the virtual engine on schedule-out Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 21/42] drm/i915/gt: Simplify virtual engine handling for execlists_hold() Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 22/42] drm/i915/gt: ce->inflight updates are now serialised Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 23/42] drm/i915/gt: Drop atomic for engine->fw_active tracking Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 24/42] drm/i915/gt: Extract busy-stats for ring-scheduler Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 25/42] drm/i915/gt: Convert stats.active to plain unsigned int Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 26/42] drm/i915: Lift waiter/signaler iterators Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 27/42] drm/i915: Strip out internal priorities Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 28/42] drm/i915: Remove I915_USER_PRIORITY_SHIFT Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 29/42] drm/i915/gt: Defer the kmem_cache_free() until after the HW submit Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 30/42] drm/i915: Prune empty priolists Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 31/42] drm/i915: Replace engine->schedule() with a known request operation Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 32/42] drm/i915/gt: Do not suspend bonded requests if one hangs Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 33/42] drm/i915: Teach the i915_dependency to use a double-lock Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 34/42] drm/i915: Restructure priority inheritance Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 35/42] drm/i915/selftests: Measure set-priority duration Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 36/42] drm/i915: Improve DFS for priority inheritance Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 37/42] drm/i915/gt: Remove timeslice suppression Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 38/42] drm/i915: Fair low-latency scheduling Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 39/42] drm/i915/gt: Specify a deadline for the heartbeat Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 40/42] drm/i915: Replace the priority boosting for the display with a deadline Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 41/42] drm/i915: Move saturated workload detection back to the context Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 42/42] drm/i915/gt: Another tweak for flushing the tasklets Chris Wilson
2020-08-02 17:14 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/42] drm/i915: Fix wrong return value Patchwork
2020-08-02 17:14 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-08-02 17:34 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-08-02 17:56 ` [Intel-gfx] Time, where did it go? Dave Airlie
2020-08-02 19:36   ` Chris Wilson
2020-08-04 21:45     ` Dave Airlie
2020-08-07  7:12       ` Chris Wilson [this message]
2020-08-09 20:01         ` Dave Airlie
2020-08-02 21:03 ` [Intel-gfx] ✗ Fi.CI.IGT: failure for series starting with [01/42] drm/i915: Fix wrong return value Patchwork
