From: bugzilla-daemon@freedesktop.org
To: dri-devel@lists.freedesktop.org
Subject: [Bug 103100] Performance regression with various games in drm-next-amd-staging Kernel
Date: Wed, 22 Nov 2017 11:56:41 +0000

https://bugs.freedesktop.org/show_bug.cgi?id=103100

--- Comment #6 from Christian König ---
(In reply to Michel Dänzer from comment #4)
> Christian, might it be possible to e.g. maintain a list of per-VM BOs which
> were evicted from VRAM, and try to move them back to VRAM as part of the
> existing mechanism for this (for "normal" BOs)?

That would certainly be possible, but I don't think it would help in any way.

The kernel simply doesn't know any more which BOs are currently used and which
aren't. So as soon as userspace allocates more VRAM than is physically
available, we are practically lost.

In other words, we would just cycle over a list of BOs evicted from VRAM on
every submission and would rarely be able to move anything back in.
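
Just to illustrate what that would mean, here is a minimal kernel-style sketch
of the per-VM evicted list idea; struct vm, struct bo and try_move_to_vram()
are made-up names for this sketch, not the actual amdgpu/TTM interfaces:

	#include <linux/list.h>
	#include <linux/types.h>

	struct bo {
		struct list_head evicted_entry;	/* link on the per-VM evicted list */
		/* placement, size, ... */
	};

	struct vm {
		struct list_head evicted;	/* BOs that were pushed out of VRAM */
	};

	/* Hypothetical helper: a real driver would re-validate the BO into
	 * VRAM through its normal validation path and report whether that
	 * worked. */
	static bool try_move_to_vram(struct bo *bo)
	{
		return false;	/* stub for the sketch */
	}

	/* Would be called on every command submission. */
	static void restore_evicted_bos(struct vm *vm)
	{
		struct bo *bo, *tmp;

		list_for_each_entry_safe(bo, tmp, &vm->evicted, evicted_entry) {
			/* While userspace has more VRAM allocated than the
			 * board physically has, this fails for most BOs on
			 * every submission, which is why it would not help
			 * much. */
			if (try_move_to_vram(bo))
				list_del_init(&bo->evicted_entry);
		}
	}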

What we could do is try to move buffers back into VRAM when memory is freed,
but that also happens so rarely that it probably doesn't make much sense
either.
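
As a sketch of that free-side variant, reusing the structures from the sketch
above (on_vram_freed() is again a made-up hook, not an existing callback):

	static void on_vram_freed(struct vm *vm)
	{
		/* Frees are rare while a game is running, so this path
		 * would hardly ever win anything back. */
		if (!list_empty(&vm->evicted))
			restore_evicted_bos(vm);
	}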

Can somebody analyze exactly why those games are now slower than they were
before? E.g. which buffers are fighting for VRAM? Or are they maybe fighting
for GTT?
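
As a possible starting point for that analysis, here is a small userspace
sketch that samples overall VRAM/GTT usage and the move/eviction counters
through the libdrm_amdgpu query interface while a game is running. This
assumes the AMDGPU_INFO_* queries below are supported by the kernel in use
and that the header and render node paths match the local setup:

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdint.h>
	#include <amdgpu.h>	/* libdrm_amdgpu; adjust include paths as needed */
	#include <amdgpu_drm.h>

	int main(void)
	{
		amdgpu_device_handle dev;
		uint32_t major, minor;
		uint64_t vram = 0, gtt = 0, moved = 0, evictions = 0;
		int fd = open("/dev/dri/renderD128", O_RDWR);	/* adjust the node if needed */

		if (fd < 0 || amdgpu_device_initialize(fd, &major, &minor, &dev))
			return 1;

		/* Usage values are in bytes; sample repeatedly while the game runs. */
		amdgpu_query_info(dev, AMDGPU_INFO_VRAM_USAGE, sizeof(vram), &vram);
		amdgpu_query_info(dev, AMDGPU_INFO_GTT_USAGE, sizeof(gtt), &gtt);
		amdgpu_query_info(dev, AMDGPU_INFO_NUM_BYTES_MOVED, sizeof(moved), &moved);
		amdgpu_query_info(dev, AMDGPU_INFO_NUM_EVICTIONS, sizeof(evictions), &evictions);

		printf("VRAM used: %llu MiB, GTT used: %llu MiB\n",
		       (unsigned long long)(vram >> 20), (unsigned long long)(gtt >> 20));
		printf("bytes moved: %llu MiB, evictions: %llu\n",
		       (unsigned long long)(moved >> 20), (unsigned long long)evictions);

		amdgpu_device_deinitialize(dev);
		return 0;
	}

A move/eviction counter that keeps climbing while VRAM usage sits at the
board's VRAM size would point at VRAM contention; GTT usage close to the GTT
limit would point at GTT instead.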


You are receiving this mail because:
  • You are the assignee for the bug.