From: Zack Rusin <zackr@vmware.com>
To: <dri-devel@lists.freedesktop.org>
Cc: Martin Krastev <krastevm@vmware.com>
Subject: [PATCH v2 3/4] drm/vmwgfx: Be a lot more flexible with MOB limits
Date: Fri, 23 Jul 2021 12:51:52 -0400
Message-ID: <20210723165153.113198-3-zackr@vmware.com>
In-Reply-To: <20210723165153.113198-1-zackr@vmware.com>

The code was trying to keep a strict limit on the amount of MOB
memory used in the guest by making it match the host settings.
There's technically no reason to do that: guests can certainly use
more than the host can have resident in renderers at the same
time.

In particular this is problematic because our userspace is not
great at handling OOM conditions: running out of MOB space results
in GL apps crashing. For example, gnome-shell likes to allocate
huge surfaces (~61MB for the desktop at 2560x1600 with two
workspaces), and running out of memory there means that gnome-shell
crashes on startup, taking us back to the login screen and leaving
a system where one can no longer log in graphically.

Instead of letting the userspace crash, we can extend the available
MOB space. We just don't want to use all of the RAM for graphics,
so we limit it to half of RAM.
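
To make the policy concrete, below is a minimal standalone userspace
C sketch of the growth logic this patch implements (the names, page
size and RAM figure are illustrative, not the driver's API; the
driver itself works off totalram_pages() and the gmrid manager
state): on overflow we double the MOB page limit, clamp it at half
of system RAM, and give up once growing would no longer satisfy the
request.

  /*
   * Minimal sketch of the MOB growth policy; illustrative names,
   * not the driver's API.
   */
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SIZE 4096UL

  static uint32_t grow_mob_limit(uint32_t cur_max_pages,
                                 uint32_t used_pages,
                                 unsigned long total_ram_pages)
  {
          const unsigned long max_graphics_pages = total_ram_pages / 2;
          uint32_t new_max_pages;

          if (cur_max_pages > max_graphics_pages / 2)
                  new_max_pages = max_graphics_pages; /* clamp at half of RAM */
          else
                  new_max_pages = cur_max_pages * 2;  /* otherwise double */

          /* Grow only if that actually covers the request. */
          if (new_max_pages > cur_max_pages && new_max_pages >= used_pages)
                  return new_max_pages;
          return 0; /* caller fails the allocation with -ENOSPC */
  }

  int main(void)
  {
          unsigned long ram_pages = (4UL << 30) / PAGE_SIZE; /* 4 GiB guest */
          uint32_t limit = 65536; /* 256 MiB worth of 4 kB MOB pages */
          uint32_t next;

          /* Simulate repeated overflows until the policy refuses to grow. */
          while ((next = grow_mob_limit(limit, limit + 1, ram_pages)) != 0) {
                  printf("mob limit: %u -> %u pages (%lu MiB)\n", limit, next,
                         (unsigned long)next * PAGE_SIZE >> 20);
                  limit = next;
          }
          return 0;
  }

For a 4 GiB guest this grows 65536 -> 131072 -> 262144 -> 524288
pages and then stops, because half of RAM has been reached.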

With the addition of some extra logging, this should make the
"guest has been configured with not enough graphics memory"
errors a lot easier to diagnose in cases where the automatic
expansion of MOB space fails.
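
For reference, with the drm_info() calls added below, the boot log
of a guest gains lines roughly like these (the device prefix and all
values are made up for illustration):

  vmwgfx 0000:00:0f.0: [drm] Legacy memory limits: VRAM = 262144 kB, FIFO = 8192 kB, surface = 0 kB
  vmwgfx 0000:00:0f.0: [drm] MOB limits: max mob size = 262144 kB, max mob pages = 589824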

Signed-off-by: Zack Rusin <zackr@vmware.com>
Reviewed-by: Martin Krastev <krastevm@vmware.com>
---
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c           |  9 +++++
 drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c | 36 +++++++++++++++++--
 2 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index 049ea3b8229b..ab9a1750e1df 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -942,6 +942,15 @@ static int vmw_driver_load(struct vmw_private *dev_priv, u32 pci_id)
 		dev_priv->texture_max_height = 8192;
 		dev_priv->max_primary_mem = dev_priv->vram_size;
 	}
+	drm_info(&dev_priv->drm,
+		 "Legacy memory limits: VRAM = %llu kB, FIFO = %llu kB, surface = %u kB\n",
+		 (u64)dev_priv->vram_size / 1024,
+		 (u64)dev_priv->fifo_mem_size / 1024,
+		 dev_priv->memory_size / 1024);
+
+	drm_info(&dev_priv->drm,
+		 "MOB limits: max mob size = %u kB, max mob pages = %u\n",
+		 dev_priv->max_mob_size / 1024, dev_priv->max_mob_pages);
 
 	vmw_print_bitmap(&dev_priv->drm, "Capabilities",
 			 dev_priv->capabilities,
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
index 28ceb749a733..b2c4af331c9d 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
@@ -71,8 +71,40 @@ static int vmw_gmrid_man_get_node(struct ttm_resource_manager *man,
 
 	if (gman->max_gmr_pages > 0) {
 		gman->used_gmr_pages += (*res)->num_pages;
-		if (unlikely(gman->used_gmr_pages > gman->max_gmr_pages))
-			goto nospace;
+		/*
+		 * Because the graphics memory is a soft limit we can try to
+		 * expand it instead of letting the userspace apps crash.
+		 * We're just going to have a sane limit (half of RAM)
+		 * on the number of MOBs that we create and will try to keep
+		 * the system running until we reach that.
+		 */
+		if (unlikely(gman->used_gmr_pages > gman->max_gmr_pages)) {
+			const unsigned long max_graphics_pages = totalram_pages() / 2;
+			uint32_t new_max_pages = 0;
+
+			DRM_WARN("vmwgfx: mob memory overflow. Consider increasing guest RAM and graphicsMemory.\n");
+			vmw_host_printf("vmwgfx, warning: mob memory overflow. Consider increasing guest RAM and graphicsMemory.\n");
+
+			if (gman->max_gmr_pages > (max_graphics_pages / 2)) {
+				DRM_WARN("vmwgfx: guest requires more than half of RAM for graphics.\n");
+				new_max_pages = max_graphics_pages;
+			} else
+				new_max_pages = gman->max_gmr_pages * 2;
+			if (new_max_pages > gman->max_gmr_pages && new_max_pages >= gman->used_gmr_pages) {
+				DRM_WARN("vmwgfx: increasing guest mob limits to %u kB.\n",
+					 ((new_max_pages) << (PAGE_SHIFT - 10)));
+
+				gman->max_gmr_pages = new_max_pages;
+			} else {
+				char buf[256];
+				snprintf(buf, sizeof(buf),
+					 "vmwgfx, error: guest graphics is out of memory (mob limit at: %u kB).\n",
+					 ((gman->max_gmr_pages) << (PAGE_SHIFT - 10)));
+				vmw_host_printf("%s", buf);
+				DRM_WARN("%s", buf);
+				goto nospace;
+			}
+		}
 	}
 
 	(*res)->start = id;
-- 
2.30.2

