From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, Intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 2/2] drm/i915: Keep the per-object list of VMAs under control
Date: Mon, 1 Feb 2016 13:29:16 +0000
Message-ID: <56AF5DAC.4020606@linux.intel.com>
In-Reply-To: <20160201111252.GA15851@nuc-i3427.alporthouse.com>


On 01/02/16 11:12, Chris Wilson wrote:
> On Mon, Feb 01, 2016 at 11:00:08AM +0000, Tvrtko Ursulin wrote:
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> Where objects are shared across contexts and heavy rendering
>> is in progress, the execlist retired request queue will grow
>> unbounded until the GPU is idle enough for the retire worker
>> to run and call intel_execlists_retire_requests.
>>
>> With some workloads, for example gem_close_race, that never
>> happens, causing the shared object's VMA list to grow to epic
>> proportions, which in turn causes retirement call sites to
>> spend linearly more and more time walking obj->vma_list.
>>
>> The end result is the above-mentioned test case taking ten
>> minutes to complete and using up more than a GiB of RAM just
>> for the VMA objects.
>>
>> If we instead trigger the execlists housekeeping a bit more
>> often, obj->vma_list will be kept in check by virtue of the
>> context cleanup running and zapping the inactive VMAs.
>>
>> This makes the test case an order of magnitude faster and
>> brings memory use back to normal.
>>
>> It also makes the code more self-contained, since the
>> intel_execlists_retire_requests call site is now in a more
>> appropriate place and implementation leakage is somewhat
>> reduced.
>
> However, this then causes a perf regression since we unpin the contexts
> too frequently and do not have any mitigation in place yet.

I suppose it is possible. What takes most of the time - page table 
clears on VMA unbinds? It is just that this looks so bad at the 
moment. :( Luckily it is just the IGT that hits it...

Regards,

Tvrtko
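
[For illustration, a toy userspace C model of the failure mode the
commit message describes; none of these names are i915 APIs, and the
real driver retires requests and unbinds VMAs rather than freeing
list nodes. The point it sketches: when cleanup lives only in a
worker that never gets to run under sustained load, the list grows
without bound and every walk gets linearly slower, whereas trimming
at add time keeps it in check.]

#include <stdio.h>
#include <stdlib.h>

struct vma {
	struct vma *next;
	int active;
};

static struct vma *vma_list;
static unsigned long vma_count;

/* Free inactive entries, as the retire worker eventually would. */
static void retire_inactive(void)
{
	struct vma **pp = &vma_list;

	while (*pp) {
		struct vma *v = *pp;

		if (!v->active) {
			*pp = v->next;
			free(v);
			vma_count--;
		} else {
			pp = &v->next;
		}
	}
}

/* Add an entry, optionally running the housekeeping on every add. */
static void add_vma(int active, int trim_on_add)
{
	struct vma *v = malloc(sizeof(*v));

	if (!v)
		return;
	v->active = active;
	v->next = vma_list;
	vma_list = v;
	vma_count++;

	if (trim_on_add)
		retire_inactive();
}

int main(void)
{
	unsigned long i;

	/* The worker never runs while "busy": unbounded growth. */
	for (i = 0; i < 100000; i++)
		add_vma(i % 100 == 0, 0);
	printf("no housekeeping: %lu entries\n", vma_count);

	retire_inactive();
	printf("after one retire pass: %lu entries\n", vma_count);
	return 0;
}

[Passing trim_on_add=1 keeps vma_count near the number of active
entries throughout, which is the effect of triggering the execlists
housekeeping more often. Chris's reply above is the flip side: running
the cleanup very frequently has its own cost, in the real driver
unpinning contexts too often, so the trim frequency is a trade-off.]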

