* [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory
@ 2020-08-06  8:52 Thomas Zimmermann
  2020-08-06  8:52 ` [PATCH v2 1/4] drm/vboxvideo: Use drm_gem_vram_vmap() interfaces Thomas Zimmermann
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Thomas Zimmermann @ 2020-08-06  8:52 UTC (permalink / raw)
  To: airlied, daniel, kraxel, maarten.lankhorst, mripard, hdegoede,
	sean, sam, emil.l.velikov, lyude, noralf, zou_wei
  Cc: Thomas Zimmermann, dri-devel

DRM's fbdev console uses regular load and store operations to update
framebuffer memory. The bochs driver on sparc64 requires the use of
I/O-specific load and store operations. We have a workaround, but need
a long-term solution to the problem. Previous attempts to resolve the
issue returned an extra is_iomem flag from vmap(), or added a separate
vmap_iomem() callback to GEM objects.

This patchset is yet another iteration with a different idea. Instead
of a raw pointer, vmap() interfaces now return a structure that contains
the buffer address in system or I/O memory, plus a flag that signals
which location the address is in.
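The idea can be illustrated with a self-contained userspace sketch (the struct and helper names below are illustrative stand-ins, not the actual struct drm_gem_membuf interface added later in this series): the mapping call hands back an address tagged with its memory type, and callers dispatch on the tag instead of assuming system memory.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the structure that the reworked vmap() interfaces
 * return: a union of a system-memory pointer and an I/O-memory
 * pointer, plus a flag naming which union member is valid. In the
 * kernel, the I/O member would carry the __iomem annotation. */
struct membuf {
	union {
		void *vaddr;       /* valid when !is_iomem */
		void *vaddr_iomem; /* valid when is_iomem */
	};
	bool is_iomem;
};

/* Callers branch on the flag; returns 1 for I/O memory, 0 for
 * system memory. */
static int membuf_kind(const struct membuf *buf)
{
	return buf->is_iomem ? 1 : 0;
}
```

This is the same tagged-pointer pattern the series applies to DRM's vmap() interfaces, just stripped of kernel infrastructure.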

Patch #1 updates the vboxvideo driver to match the latest VRAM helpers.
This simplifies the other patches and should be merged in any case.

Patch #2 adds struct drm_gem_membuf, which contains the pointer and flag,
and converts the generic GEM interfaces to use it.

Patch #3 converts vmap/vunmap in GEM object functions and updates most
GEM backends. A few drivers are still missing, but the patch should be
acceptable for an RFC.

Patch #4 changes fbdev helpers to access framebuffer memory either with
system or I/O memcpy functions.
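A minimal sketch of that dispatch (userspace C; the names are hypothetical, not the fbdev helper API, and plain memcpy() stands in for the kernel's memcpy_toio() on the I/O branch so the sketch runs outside the kernel):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Illustrative destination descriptor: an address plus a flag that
 * says whether it lives in I/O memory. */
struct fb_dst {
	void *vaddr;
	bool is_iomem;
};

/* Pick the copy routine from the flag. In the kernel, the I/O
 * branch would call memcpy_toio() on an __iomem pointer; in this
 * userspace sketch both branches degrade to memcpy(). */
static void fb_copy(struct fb_dst *dst, const void *src, size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr, src, len); /* kernel: memcpy_toio() */
	else
		memcpy(dst->vaddr, src, len);
}
```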

Thomas Zimmermann (4):
  drm/vboxvideo: Use drm_gem_vram_vmap() interfaces
  drm/gem: Update client API to use struct drm_gem_membuf
  drm/gem: Use struct drm_gem_membuf in vmap op and convert GEM backends
  drm/fb_helper: Access framebuffer as I/O memory, if required

 drivers/gpu/drm/ast/ast_cursor.c       |  29 ++-
 drivers/gpu/drm/ast/ast_drv.h          |   2 +-
 drivers/gpu/drm/bochs/bochs_kms.c      |   1 -
 drivers/gpu/drm/drm_client.c           |  25 ++-
 drivers/gpu/drm/drm_fb_helper.c        | 246 ++++++++++++++++++++++---
 drivers/gpu/drm/drm_gem.c              |  28 +--
 drivers/gpu/drm/drm_gem_cma_helper.c   |  15 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c |  31 ++--
 drivers/gpu/drm/drm_gem_vram_helper.c  | 142 +++++---------
 drivers/gpu/drm/drm_internal.h         |   5 +-
 drivers/gpu/drm/drm_prime.c            |  16 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c |  11 +-
 drivers/gpu/drm/qxl/qxl_display.c      |  12 +-
 drivers/gpu/drm/qxl/qxl_draw.c         |  14 +-
 drivers/gpu/drm/qxl/qxl_drv.h          |   6 +-
 drivers/gpu/drm/qxl/qxl_object.c       |  19 +-
 drivers/gpu/drm/qxl/qxl_object.h       |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c        |  12 +-
 drivers/gpu/drm/tiny/cirrus.c          |  15 +-
 drivers/gpu/drm/tiny/gm12u320.c        |  12 +-
 drivers/gpu/drm/udl/udl_modeset.c      |  10 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c  |  17 +-
 include/drm/drm_client.h               |   7 +-
 include/drm/drm_device.h               |  26 +++
 include/drm/drm_gem.h                  |   5 +-
 include/drm/drm_gem_cma_helper.h       |   4 +-
 include/drm/drm_gem_shmem_helper.h     |   4 +-
 include/drm/drm_gem_vram_helper.h      |   9 +-
 include/drm/drm_mode_config.h          |  12 --
 29 files changed, 464 insertions(+), 273 deletions(-)

--
2.28.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* [PATCH v2 1/4] drm/vboxvideo: Use drm_gem_vram_vmap() interfaces
  2020-08-06  8:52 [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory Thomas Zimmermann
@ 2020-08-06  8:52 ` Thomas Zimmermann
  2020-08-06  8:52 ` [PATCH v2 2/4] drm/gem: Update client API to use struct drm_gem_membuf Thomas Zimmermann
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Thomas Zimmermann @ 2020-08-06  8:52 UTC (permalink / raw)
  To: airlied, daniel, kraxel, maarten.lankhorst, mripard, hdegoede,
	sean, sam, emil.l.velikov, lyude, noralf, zou_wei
  Cc: Thomas Zimmermann, dri-devel

VRAM helpers support reference counting for pin and vmap operations, so
there is no need to avoid these operations via the internal kmap
interface. Also unexport the kmap interfaces from VRAM helpers.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_vram_helper.c | 56 +--------------------------
 drivers/gpu/drm/vboxvideo/vbox_mode.c | 10 +++--
 include/drm/drm_gem_vram_helper.h     |  3 --
 3 files changed, 8 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 5f03c6137ef9..a6af198e26ea 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -97,8 +97,8 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
  * hardware's draing engine.
  *
  * To access a buffer object's memory from the DRM driver, call
- * drm_gem_vram_kmap(). It (optionally) maps the buffer into kernel address
- * space and returns the memory address. Use drm_gem_vram_kunmap() to
+ * drm_gem_vram_vmap(). It maps the buffer into kernel address
+ * space and returns the memory address. Use drm_gem_vram_vunmap() to
  * release the mapping.
  */
 
@@ -436,39 +436,6 @@ static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
 	return kmap->virtual;
 }
 
-/**
- * drm_gem_vram_kmap() - Maps a GEM VRAM object into kernel address space
- * @gbo:	the GEM VRAM object
- * @map:	establish a mapping if necessary
- * @is_iomem:	returns true if the mapped memory is I/O memory, or false \
-	otherwise; can be NULL
- *
- * This function maps the buffer object into the kernel's address space
- * or returns the current mapping. If the parameter map is false, the
- * function only queries the current mapping, but does not establish a
- * new one.
- *
- * Returns:
- * The buffers virtual address if mapped, or
- * NULL if not mapped, or
- * an ERR_PTR()-encoded error code otherwise.
- */
-void *drm_gem_vram_kmap(struct drm_gem_vram_object *gbo, bool map,
-			bool *is_iomem)
-{
-	int ret;
-	void *virtual;
-
-	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
-	if (ret)
-		return ERR_PTR(ret);
-	virtual = drm_gem_vram_kmap_locked(gbo, map, is_iomem);
-	ttm_bo_unreserve(&gbo->bo);
-
-	return virtual;
-}
-EXPORT_SYMBOL(drm_gem_vram_kmap);
-
 static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
 {
 	if (WARN_ON_ONCE(!gbo->kmap_use_count))
@@ -484,22 +451,6 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
 	 */
 }
 
-/**
- * drm_gem_vram_kunmap() - Unmaps a GEM VRAM object
- * @gbo:	the GEM VRAM object
- */
-void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo)
-{
-	int ret;
-
-	ret = ttm_bo_reserve(&gbo->bo, false, false, NULL);
-	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
-		return;
-	drm_gem_vram_kunmap_locked(gbo);
-	ttm_bo_unreserve(&gbo->bo);
-}
-EXPORT_SYMBOL(drm_gem_vram_kunmap);
-
 /**
  * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
  *                       space
@@ -511,9 +462,6 @@ EXPORT_SYMBOL(drm_gem_vram_kunmap);
  * permanently. Call drm_gem_vram_vunmap() with the returned address to
  * unmap and unpin the GEM VRAM object.
  *
- * If you have special requirements for the pinning or mapping operations,
- * call drm_gem_vram_pin() and drm_gem_vram_kmap() directly.
- *
  * Returns:
  * The buffer's virtual address on success, or
  * an ERR_PTR()-encoded error code otherwise.
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index d9a5af62af89..4fcc0a542b8a 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -397,11 +397,13 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 
 	vbox_crtc->cursor_enabled = true;
 
-	/* pinning is done in prepare/cleanup framebuffer */
-	src = drm_gem_vram_kmap(gbo, true, NULL);
+	src = drm_gem_vram_vmap(gbo);
 	if (IS_ERR(src)) {
+		/*
+		 * BUG: we should have pinned the BO in prepare_fb().
+		 */
 		mutex_unlock(&vbox->hw_mutex);
-		DRM_WARN("Could not kmap cursor bo, skipping update\n");
+		DRM_WARN("Could not map cursor bo, skipping update\n");
 		return;
 	}
 
@@ -414,7 +416,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	data_size = width * height * 4 + mask_size;
 
 	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
-	drm_gem_vram_kunmap(gbo);
+	drm_gem_vram_vunmap(gbo, src);
 
 	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
 		VBOX_MOUSE_POINTER_ALPHA;
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 035332f3723f..b34f1da89cc7 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -101,9 +101,6 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
 s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
 int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
 int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_kmap(struct drm_gem_vram_object *gbo, bool map,
-			bool *is_iomem);
-void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo);
 void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
 void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
 
-- 
2.28.0


* [PATCH v2 2/4] drm/gem: Update client API to use struct drm_gem_membuf
  2020-08-06  8:52 [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory Thomas Zimmermann
  2020-08-06  8:52 ` [PATCH v2 1/4] drm/vboxvideo: Use drm_gem_vram_vmap() interfaces Thomas Zimmermann
@ 2020-08-06  8:52 ` Thomas Zimmermann
  2020-08-13 10:26   ` Daniel Vetter
  2020-08-06  8:52 ` [PATCH v2 3/4] drm/gem: Use struct drm_gem_membuf in vmap op and convert GEM backends Thomas Zimmermann
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Thomas Zimmermann @ 2020-08-06  8:52 UTC (permalink / raw)
  To: airlied, daniel, kraxel, maarten.lankhorst, mripard, hdegoede,
	sean, sam, emil.l.velikov, lyude, noralf, zou_wei
  Cc: Thomas Zimmermann, dri-devel

GEM's vmap interface now wraps memory pointers in struct drm_gem_membuf.
The structure represents a pointer into the framebuffer, which is either
in I/O memory or in system memory. The structure contains a flag that
distinguishes these cases.
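The resulting calling convention, an errno-style int return with the buffer description filled in through an out-parameter rather than an ERR_PTR()-encoded pointer, can be mocked like this (a sketch with hypothetical names, not the kernel API):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-in for struct drm_gem_membuf: pointer plus location flag. */
struct membuf {
	void *vaddr;
	bool is_iomem;
};

static char backing[64]; /* pretend backing store for the mapping */

/* Mock vmap: returns 0 and fills *buf on success, or a negative
 * errno on failure, mirroring the int-returning drm_gem_vmap()
 * signature this patch introduces. */
static int mock_vmap(struct membuf *buf, bool can_map)
{
	if (!can_map)
		return -ENOMEM;
	buf->vaddr = backing;
	buf->is_iomem = false;
	return 0;
}
```

Compared with returning a raw pointer, this shape lets the success path carry both the address and its memory type without overloading the pointer value with error codes.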

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_client.c    | 25 ++++++++++++-------------
 drivers/gpu/drm/drm_fb_helper.c | 18 +++++++++---------
 drivers/gpu/drm/drm_gem.c       | 19 +++++++++++--------
 drivers/gpu/drm/drm_internal.h  |  5 +++--
 drivers/gpu/drm/drm_prime.c     | 16 ++++++++++------
 include/drm/drm_client.h        |  7 ++++---
 include/drm/drm_device.h        | 26 ++++++++++++++++++++++++++
 7 files changed, 75 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 495f47d23d87..0359b82928c1 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -234,7 +234,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 {
 	struct drm_device *dev = buffer->client->dev;
 
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	drm_gem_vunmap(buffer->gem, &buffer->membuf);
 
 	if (buffer->gem)
 		drm_gem_object_put(buffer->gem);
@@ -302,12 +302,13 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
  * Returns:
  *	The mapped memory's address
  */
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
+const struct drm_gem_membuf *
+drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 {
-	void *vaddr;
+	int ret;
 
-	if (buffer->vaddr)
-		return buffer->vaddr;
+	if (buffer->membuf.vaddr)
+		return &buffer->membuf;
 
 	/*
 	 * FIXME: The dependency on GEM here isn't required, we could
@@ -317,13 +318,11 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	vaddr = drm_gem_vmap(buffer->gem);
-	if (IS_ERR(vaddr))
-		return vaddr;
-
-	buffer->vaddr = vaddr;
+	ret = drm_gem_vmap(buffer->gem, &buffer->membuf);
+	if (ret)
+		return ERR_PTR(ret);
 
-	return vaddr;
+	return &buffer->membuf;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -337,8 +336,8 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
-	buffer->vaddr = NULL;
+	drm_gem_vunmap(buffer->gem, &buffer->membuf);
+	buffer->membuf.vaddr = NULL;
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
 
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 8697554ccd41..da24874247e7 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -394,7 +394,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->vaddr + offset;
+	void *dst = fb_helper->buffer->membuf.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
@@ -416,7 +416,7 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 	struct drm_clip_rect *clip = &helper->dirty_clip;
 	struct drm_clip_rect clip_copy;
 	unsigned long flags;
-	void *vaddr;
+	const struct drm_gem_membuf *buf;
 
 	spin_lock_irqsave(&helper->dirty_lock, flags);
 	clip_copy = *clip;
@@ -429,8 +429,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 
 		/* Generic fbdev uses a shadow buffer */
 		if (helper->buffer) {
-			vaddr = drm_client_buffer_vmap(helper->buffer);
-			if (IS_ERR(vaddr))
+			buf = drm_client_buffer_vmap(helper->buffer);
+			if (IS_ERR(buf))
 				return;
 			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
 		}
@@ -2076,7 +2076,7 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 	struct drm_framebuffer *fb;
 	struct fb_info *fbi;
 	u32 format;
-	void *vaddr;
+	const struct drm_gem_membuf *membuf;
 
 	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
 		    sizes->surface_width, sizes->surface_height,
@@ -2112,11 +2112,11 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 		fb_deferred_io_init(fbi);
 	} else {
 		/* buffer is mapped for HW framebuffer */
-		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
-		if (IS_ERR(vaddr))
-			return PTR_ERR(vaddr);
+		membuf = drm_client_buffer_vmap(fb_helper->buffer);
+		if (IS_ERR(membuf))
+			return PTR_ERR(membuf);
 
-		fbi->screen_buffer = vaddr;
+		fbi->screen_buffer = membuf->vaddr;
 		/* Shamelessly leak the physical address to user-space */
 #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
 		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 19d73868490e..36ded6a56fb2 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1220,7 +1220,7 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 		obj->dev->driver->gem_prime_unpin(obj);
 }
 
-void *drm_gem_vmap(struct drm_gem_object *obj)
+int drm_gem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
 {
 	void *vaddr;
 
@@ -1229,23 +1229,26 @@ void *drm_gem_vmap(struct drm_gem_object *obj)
 	else if (obj->dev->driver->gem_prime_vmap)
 		vaddr = obj->dev->driver->gem_prime_vmap(obj);
 	else
-		vaddr = ERR_PTR(-EOPNOTSUPP);
+		return -EOPNOTSUPP;
 
 	if (!vaddr)
-		vaddr = ERR_PTR(-ENOMEM);
+		return -ENOMEM;
+
+	buf->vaddr = vaddr;
+	buf->is_iomem = false;
 
-	return vaddr;
+	return 0;
 }
 
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_vunmap(struct drm_gem_object *obj, const struct drm_gem_membuf *buf)
 {
-	if (!vaddr)
+	if (!buf || !buf->vaddr)
 		return;
 
 	if (obj->funcs && obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, vaddr);
+		obj->funcs->vunmap(obj, buf->vaddr);
 	else if (obj->dev->driver->gem_prime_vunmap)
-		obj->dev->driver->gem_prime_vunmap(obj, vaddr);
+		obj->dev->driver->gem_prime_vunmap(obj, buf->vaddr);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index 8e01caaf95cc..201d71249954 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -36,6 +36,7 @@ struct dma_buf;
 struct drm_connector;
 struct drm_crtc;
 struct drm_framebuffer;
+struct drm_gem_membuf;
 struct drm_gem_object;
 struct drm_master;
 struct drm_minor;
@@ -186,8 +187,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 
 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
-void *drm_gem_vmap(struct drm_gem_object *obj);
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
+void drm_gem_vunmap(struct drm_gem_object *obj, const struct drm_gem_membuf *buf);
 
 /* drm_debugfs.c drm_debugfs_crc.c */
 #if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 1693aa7c14b5..d95a39030a93 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -671,13 +671,14 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
 void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
-	void *vaddr;
+	struct drm_gem_membuf buf;
+	int ret;
 
-	vaddr = drm_gem_vmap(obj);
-	if (IS_ERR(vaddr))
-		vaddr = NULL;
+	ret = drm_gem_vmap(obj, &buf);
+	if (ret)
+		buf.vaddr = NULL;
 
-	return vaddr;
+	return buf.vaddr;
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
@@ -692,8 +693,11 @@ EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
+	struct drm_gem_membuf buf;
 
-	drm_gem_vunmap(obj, vaddr);
+	buf.vaddr = vaddr;
+	buf.is_iomem = false;
+	drm_gem_vunmap(obj, &buf);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
 
diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
index 7aaea665bfc2..5ed73c390619 100644
--- a/include/drm/drm_client.h
+++ b/include/drm/drm_client.h
@@ -14,6 +14,7 @@ struct drm_client_dev;
 struct drm_device;
 struct drm_file;
 struct drm_framebuffer;
+struct drm_gem_membuf;
 struct drm_gem_object;
 struct drm_minor;
 struct module;
@@ -141,9 +142,9 @@ struct drm_client_buffer {
 	struct drm_gem_object *gem;
 
 	/**
-	 * @vaddr: Virtual address for the buffer
+	 * @membuf: Virtual address for the buffer
 	 */
-	void *vaddr;
+	struct drm_gem_membuf membuf;
 
 	/**
 	 * @fb: DRM framebuffer
@@ -155,7 +156,7 @@ struct drm_client_buffer *
 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
 void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
 int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
+const struct drm_gem_membuf *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
 
 int drm_client_modeset_create(struct drm_client_dev *client);
diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index 0988351d743c..6ecf03601c36 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -381,4 +381,30 @@ struct drm_device {
 #endif
 };
 
+/**
+ * struct drm_gem_membuf - GEM memory buffer
+ */
+struct drm_gem_membuf {
+	union {
+		/**
+		 * @vaddr:
+		 *
+		 * The virtual address for the buffer in system memory.
+		 */
+		void *vaddr;
+		/**
+		 * @vaddr_iomem:
+		 *
+		 * The virtual address for the buffer in I/O memory.
+		 */
+		void __iomem *vaddr_iomem;
+	};
+	/**
+	 * @is_iomem:
+	 *
+	 * True if the memory is located in I/O memory, false otherwise.
+	 */
+	bool is_iomem;
+};
+
 #endif
-- 
2.28.0


* [PATCH v2 3/4] drm/gem: Use struct drm_gem_membuf in vmap op and convert GEM backends
  2020-08-06  8:52 [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory Thomas Zimmermann
  2020-08-06  8:52 ` [PATCH v2 1/4] drm/vboxvideo: Use drm_gem_vram_vmap() interfaces Thomas Zimmermann
  2020-08-06  8:52 ` [PATCH v2 2/4] drm/gem: Update client API to use struct drm_gem_membuf Thomas Zimmermann
@ 2020-08-06  8:52 ` Thomas Zimmermann
  2020-08-06 10:50   ` kernel test robot
  2020-08-06 11:36   ` kernel test robot
  2020-08-06  8:52 ` [PATCH v2 4/4] drm/fb_helper: Access framebuffer as I/O memory, if required Thomas Zimmermann
  2020-08-14  6:51 ` [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory Thomas Zimmermann
  4 siblings, 2 replies; 13+ messages in thread
From: Thomas Zimmermann @ 2020-08-06  8:52 UTC (permalink / raw)
  To: airlied, daniel, kraxel, maarten.lankhorst, mripard, hdegoede,
	sean, sam, emil.l.velikov, lyude, noralf, zou_wei
  Cc: Thomas Zimmermann, dri-devel

This patch replaces vmap's use of raw pointers in GEM object
functions with instances of struct drm_gem_membuf. GEM backends are
converted as well.

For most GEM backends, this simply changes the returned type. GEM VRAM
helpers are also updated to indicate whether the returned framebuffer
address is in system or I/O memory.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/ast/ast_cursor.c       | 29 ++++-----
 drivers/gpu/drm/ast/ast_drv.h          |  2 +-
 drivers/gpu/drm/drm_gem.c              | 17 ++---
 drivers/gpu/drm/drm_gem_cma_helper.c   | 15 +++--
 drivers/gpu/drm/drm_gem_shmem_helper.c | 31 +++++----
 drivers/gpu/drm/drm_gem_vram_helper.c  | 90 ++++++++++++++------------
 drivers/gpu/drm/drm_internal.h         |  2 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c | 11 ++--
 drivers/gpu/drm/qxl/qxl_display.c      | 12 ++--
 drivers/gpu/drm/qxl/qxl_draw.c         | 14 ++--
 drivers/gpu/drm/qxl/qxl_drv.h          |  6 +-
 drivers/gpu/drm/qxl/qxl_object.c       | 19 +++---
 drivers/gpu/drm/qxl/qxl_object.h       |  2 +-
 drivers/gpu/drm/qxl/qxl_prime.c        | 12 ++--
 drivers/gpu/drm/tiny/cirrus.c          | 15 ++---
 drivers/gpu/drm/tiny/gm12u320.c        | 12 ++--
 drivers/gpu/drm/udl/udl_modeset.c      | 10 +--
 drivers/gpu/drm/vboxvideo/vbox_mode.c  | 11 ++--
 include/drm/drm_gem.h                  |  5 +-
 include/drm/drm_gem_cma_helper.h       |  4 +-
 include/drm/drm_gem_shmem_helper.h     |  4 +-
 include/drm/drm_gem_vram_helper.h      |  6 +-
 22 files changed, 175 insertions(+), 154 deletions(-)

diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index 6c96f74cdb9e..bdcc3d619e0e 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
 
 	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.buf[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
 	struct drm_device *dev = &ast->base;
 	size_t size, i;
 	struct drm_gem_vram_object *gbo;
-	void __iomem *vaddr;
+	struct drm_gem_membuf buf;
 	int ret;
 
 	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
-		vaddr = drm_gem_vram_vmap(gbo);
-		if (IS_ERR(vaddr)) {
-			ret = PTR_ERR(vaddr);
+		ret = drm_gem_vram_vmap(gbo, &buf);
+		if (ret) {
 			drm_gem_vram_unpin(gbo);
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
 
 		ast->cursor.gbo[i] = gbo;
-		ast->cursor.vaddr[i] = vaddr;
+		ast->cursor.buf[i] = buf;
 	}
 
 	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
 	while (i) {
 		--i;
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.buf[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -170,8 +169,8 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 {
 	struct drm_device *dev = &ast->base;
 	struct drm_gem_vram_object *gbo;
+	struct drm_gem_membuf buf;
 	int ret;
-	void *src;
 	void __iomem *dst;
 
 	if (drm_WARN_ON_ONCE(dev, fb->width > AST_MAX_HWC_WIDTH) ||
@@ -183,18 +182,16 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 	ret = drm_gem_vram_pin(gbo, 0);
 	if (ret)
 		return ret;
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
-		ret = PTR_ERR(src);
+	ret = drm_gem_vram_vmap(gbo, &buf);
+	if (ret)
 		goto err_drm_gem_vram_unpin;
-	}
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.buf[ast->cursor.next_index].vaddr_iomem;
 
 	/* do data transfer to cursor BO */
-	update_cursor_image(dst, src, fb->width, fb->height);
+	update_cursor_image(dst, buf.vaddr, fb->width, fb->height);
 
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &buf);
 	drm_gem_vram_unpin(gbo);
 
 	return 0;
@@ -256,7 +253,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
 	u8 __iomem *dst, __iomem *sig;
 	u8 jreg;
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.buf[ast->cursor.next_index].vaddr;
 
 	sig = dst + AST_HWC_SIZE;
 	writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index c1af6b725933..fcedc6896c24 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -131,7 +131,7 @@ struct ast_private {
 
 	struct {
 		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
-		void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+		struct drm_gem_membuf buf[AST_DEFAULT_HWC_NUM];
 		unsigned int next_index;
 	} cursor;
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 36ded6a56fb2..e3235ec46733 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1224,29 +1224,30 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
 {
 	void *vaddr;
 
-	if (obj->funcs && obj->funcs->vmap)
-		vaddr = obj->funcs->vmap(obj);
-	else if (obj->dev->driver->gem_prime_vmap)
+	if (obj->funcs && obj->funcs->vmap) {
+		return obj->funcs->vmap(obj, buf);
+	} else if (obj->dev->driver->gem_prime_vmap) {
 		vaddr = obj->dev->driver->gem_prime_vmap(obj);
-	else
+		buf->vaddr = vaddr;
+		buf->is_iomem = false;
+	} else {
 		return -EOPNOTSUPP;
+	}
 
 	if (!vaddr)
 		return -ENOMEM;
 
-	buf->vaddr = vaddr;
-	buf->is_iomem = false;
 
 	return 0;
 }
 
-void drm_gem_vunmap(struct drm_gem_object *obj, const struct drm_gem_membuf *buf)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
 {
 	if (!buf || !buf->vaddr)
 		return;
 
 	if (obj->funcs && obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, buf->vaddr);
+		obj->funcs->vunmap(obj, buf);
 	else if (obj->dev->driver->gem_prime_vunmap)
 		obj->dev->driver->gem_prime_vunmap(obj, buf->vaddr);
 }
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 822edeadbab3..dc9c526ea200 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,6 +537,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
  * @obj: GEM object
+ * @buf: Returns the kernel virtual address of the CMA GEM object's backing
+ *       store.
  *
  * This function maps a buffer exported via DRM PRIME into the kernel's
  * virtual address space. Since the CMA buffers are already mapped into the
@@ -545,13 +547,16 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * driver's &drm_driver.gem_prime_vmap callback.
  *
  * Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 
-	return cma_obj->vaddr;
+	buf->vaddr = cma_obj->vaddr;
+	buf->is_iomem = false; /* TODO: should we declare this as I/O memory? */
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
 
@@ -559,14 +564,14 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
  * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
  *     address space
  * @obj: GEM object
- * @vaddr: kernel virtual address where the CMA GEM object was mapped
+ * @buf: Kernel virtual address where the CMA GEM object was mapped
  *
  * This function removes a buffer exported via DRM PRIME from the kernel's
  * virtual address space. This is a no-op because CMA buffers cannot be
  * unmapped from kernel space. Drivers using the CMA helpers should set this
  * as their &drm_driver.gem_prime_vunmap callback.
  */
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
 {
 	/* Nothing to do */
 }
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4b7cfbac4daa..6115663b5dcf 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,13 +258,13 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct drm_gem_membuf *buf)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret;
 
 	if (shmem->vmap_use_count++ > 0)
-		return shmem->vaddr;
+		goto out;
 
 	if (obj->import_attach) {
 		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
@@ -287,7 +287,11 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 		goto err_put_pages;
 	}
 
-	return shmem->vaddr;
+out:
+	buf->vaddr = shmem->vaddr;
+	buf->is_iomem = false;
+
+	return 0;
 
 err_put_pages:
 	if (!obj->import_attach)
@@ -295,12 +299,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
-	return ERR_PTR(ret);
+	return ret;
 }
 
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
+ * @buf: Returns the kernel virtual address of the SHMEM GEM object's backing
+ *       store.
  *
  * This function makes sure that a contiguous kernel virtual address mapping
  * exists for the buffer backing the shmem GEM object.
@@ -314,23 +320,23 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	void *vaddr;
 	int ret;
 
 	ret = mutex_lock_interruptible(&shmem->vmap_lock);
 	if (ret)
-		return ERR_PTR(ret);
-	vaddr = drm_gem_shmem_vmap_locked(shmem);
+		return ret;
+	ret = drm_gem_shmem_vmap_locked(shmem, buf);
 	mutex_unlock(&shmem->vmap_lock);
 
-	return vaddr;
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_vmap);
 
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+					struct drm_gem_membuf *buf)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
@@ -352,6 +358,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 /*
  * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
  * @shmem: shmem GEM object
+ * @buf: Kernel virtual address where the SHMEM GEM object was mapped
  *
  * This function cleans up a kernel virtual address mapping acquired by
  * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -361,12 +368,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
  * also be called by drivers directly, in which case it will hide the
  * differences between dma-buf imported and natively allocated objects.
  */
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
 	mutex_lock(&shmem->vmap_lock);
-	drm_gem_shmem_vunmap_locked(shmem);
+	drm_gem_shmem_vunmap_locked(shmem, buf);
 	mutex_unlock(&shmem->vmap_lock);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index a6af198e26ea..b5bd44e3d61a 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -408,8 +408,8 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
-				      bool map, bool *is_iomem)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, bool map,
+				    struct drm_gem_membuf *buf)
 {
 	int ret;
 	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
@@ -422,22 +422,28 @@ static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
 
 	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 out:
 	if (!kmap->virtual) {
-		if (is_iomem)
-			*is_iomem = false;
-		return NULL; /* not mapped; don't increment ref */
+		buf->vaddr = NULL;
+		buf->is_iomem = false;
+		return 0; /* not mapped; don't increment ref */
 	}
 	++gbo->kmap_use_count;
-	if (is_iomem)
-		return ttm_kmap_obj_virtual(kmap, is_iomem);
-	return kmap->virtual;
+	buf->vaddr = ttm_kmap_obj_virtual(kmap, &buf->is_iomem);
+	return 0;
 }
 
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+				       struct drm_gem_membuf *buf)
 {
+	struct drm_device *dev = gbo->bo.base.dev;
+	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+
+	if (drm_WARN_ON_ONCE(dev, kmap->virtual != buf->vaddr))
+		return; /* BUG: buf not mapped from this BO */
+
 	if (WARN_ON_ONCE(!gbo->kmap_use_count))
 		return;
 	if (--gbo->kmap_use_count > 0)
@@ -454,7 +460,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
 /**
  * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
  *                       space
- * @gbo:	The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @buf: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * The vmap function pins a GEM VRAM object to its current location, either
  * system or video memory, and maps its buffer into kernel address space.
@@ -463,48 +471,46 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
  * unmap and unpin the GEM VRAM object.
  *
  * Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo,
+			struct drm_gem_membuf *buf)
 {
 	int ret;
-	void *base;
 
 	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo, true, NULL);
-	if (IS_ERR(base)) {
-		ret = PTR_ERR(base);
+	ret = drm_gem_vram_kmap_locked(gbo, true, buf);
+	if (ret)
 		goto err_drm_gem_vram_unpin_locked;
-	}
 
 	ttm_bo_unreserve(&gbo->bo);
 
-	return base;
+	return 0;
 
 err_drm_gem_vram_unpin_locked:
 	drm_gem_vram_unpin_locked(gbo);
 err_ttm_bo_unreserve:
 	ttm_bo_unreserve(&gbo->bo);
-	return ERR_PTR(ret);
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_vram_vmap);
 
 /**
  * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo:	The GEM VRAM object to unmap
- * @vaddr:	The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @buf: Kernel virtual address where the VRAM GEM object was mapped
  *
  * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
  * the documentation for drm_gem_vram_vmap() for more information.
  */
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo,
+			 struct drm_gem_membuf *buf)
 {
 	int ret;
 
@@ -512,7 +518,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
 	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
 		return;
 
-	drm_gem_vram_kunmap_locked(gbo);
+	drm_gem_vram_kunmap_locked(gbo, buf);
 	drm_gem_vram_unpin_locked(gbo);
 
 	ttm_bo_unreserve(&gbo->bo);
@@ -860,37 +866,35 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
 }
 
 /**
- * drm_gem_vram_object_vmap() - \
-	Implements &struct drm_gem_object_funcs.vmap
- * @gem:	The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ *	Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @buf: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem,
+				    struct drm_gem_membuf *buf)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
-	void *base;
 
-	base = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(base))
-		return NULL;
-	return base;
+	return drm_gem_vram_vmap(gbo, buf);
 }
 
 /**
- * drm_gem_vram_object_vunmap() - \
-	Implements &struct drm_gem_object_funcs.vunmap
- * @gem:	The GEM object to unmap
- * @vaddr:	The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ *	Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @buf: Kernel virtual address where the VRAM GEM object was mapped
  */
 static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
-				       void *vaddr)
+				       struct drm_gem_membuf *buf)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 
-	drm_gem_vram_vunmap(gbo, vaddr);
+	drm_gem_vram_vunmap(gbo, buf);
 }
 
 /*
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index 201d71249954..15582bab6d7a 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -188,7 +188,7 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
 int drm_gem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
-void drm_gem_vunmap(struct drm_gem_object *obj, const struct drm_gem_membuf *buf);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
 
 /* drm_debugfs.c drm_debugfs_crc.c */
 #if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..5d23a4a4c340 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -1556,15 +1556,16 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
 		      struct drm_rect *clip)
 {
 	struct drm_device *dev = &mdev->base;
-	void *vmap;
+	struct drm_gem_membuf buf;
+	int ret;
 
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (drm_WARN_ON(dev, !vmap))
+	ret = drm_gem_shmem_vmap(fb->obj[0], &buf);
+	if (drm_WARN_ON(dev, ret))
 		return; /* BUG: SHMEM BO should always be vmapped */
 
-	drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
+	drm_fb_memcpy_dstclip(mdev->vram, buf.vaddr, fb, clip);
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &buf);
 
 	/* Always scanout image at VRAM offset 0 */
 	mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 099dca48b0ff..95c224b275ca 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -579,7 +579,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 	struct drm_gem_object *obj;
 	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
 	int ret;
-	void *user_ptr;
+	struct drm_gem_membuf user_buf;
+	struct drm_gem_membuf cursor_buf;
 	int size = 64*64*4;
 
 	ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
@@ -593,7 +594,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		user_bo = gem_to_qxl_bo(obj);
 
 		/* pinning is done in the prepare/cleanup framebuffer */
-		ret = qxl_bo_kmap(user_bo, &user_ptr);
+		ret = qxl_bo_kmap(user_bo, &user_buf);
 		if (ret)
 			goto out_free_release;
 
@@ -611,7 +612,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		if (ret)
 			goto out_unpin;
 
-		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+		ret = qxl_bo_kmap(cursor_bo, &cursor_buf);
 		if (ret)
 			goto out_backoff;
 
@@ -625,7 +626,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		cursor->chunk.next_chunk = 0;
 		cursor->chunk.prev_chunk = 0;
 		cursor->chunk.data_size = size;
-		memcpy(cursor->chunk.data, user_ptr, size);
+		memcpy(cursor->chunk.data, user_buf.vaddr, size);
 		qxl_bo_kunmap(cursor_bo);
 		qxl_bo_kunmap(user_bo);
 
@@ -1136,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 {
 	int ret;
 	struct drm_gem_object *gobj;
+	struct drm_gem_membuf buf;
 	int monitors_config_size = sizeof(struct qxl_monitors_config) +
 		qxl_num_crtc * sizeof(struct qxl_head);
 
@@ -1152,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 	if (ret)
 		return ret;
 
-	qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+	qxl_bo_kmap(qdev->monitors_config_bo, &buf);
 
 	qdev->monitors_config = qdev->monitors_config_bo->kptr;
 	qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..406bfd8fe6f5 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -42,13 +42,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
 					      unsigned int num_clips,
 					      struct qxl_bo *clips_bo)
 {
+	struct drm_gem_membuf buf;
 	struct qxl_clip_rects *dev_clips;
 	int ret;
 
-	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
-	if (ret) {
+	ret = qxl_bo_kmap(clips_bo, &buf);
+	if (ret)
 		return NULL;
-	}
+
+	dev_clips = buf.vaddr;
 	dev_clips->num_rects = num_clips;
 	dev_clips->chunk.next_chunk = 0;
 	dev_clips->chunk.prev_chunk = 0;
@@ -142,7 +144,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	int stride = fb->pitches[0];
 	/* depth is not actually interesting, we don't mask with it */
 	int depth = fb->format->cpp[0] * 8;
-	uint8_t *surface_base;
+	struct drm_gem_membuf surface_buf;
 	struct qxl_release *release;
 	struct qxl_bo *clips_bo;
 	struct qxl_drm_image *dimage;
@@ -197,11 +199,11 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	if (ret)
 		goto out_release_backoff;
 
-	ret = qxl_bo_kmap(bo, (void **)&surface_base);
+	ret = qxl_bo_kmap(bo, &surface_buf);
 	if (ret)
 		goto out_release_backoff;
 
-	ret = qxl_image_init(qdev, release, dimage, surface_base,
+	ret = qxl_image_init(qdev, release, dimage, surface_buf.vaddr,
 			     left - dumb_shadow_offset,
 			     top, width, height, depth, stride);
 	qxl_bo_kunmap(bo);
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 9691449aefdb..fdb7548b12ee 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -336,7 +336,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
 void qxl_gem_object_close(struct drm_gem_object *obj,
 			  struct drm_file *file_priv);
 void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
 
 /* qxl_dumb.c */
 int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -446,8 +445,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct drm_gem_membuf *buf);
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
 				struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 80e7a17aaddd..679bab3dae9f 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -143,24 +143,22 @@ int qxl_bo_create(struct qxl_device *qdev,
 	return 0;
 }
 
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct drm_gem_membuf *buf)
 {
-	bool is_iomem;
 	int r;
 
 	if (bo->kptr) {
-		if (ptr)
-			*ptr = bo->kptr;
 		bo->map_count++;
-		return 0;
+		goto out;
 	}
 	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
 	if (r)
 		return r;
-	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
-	if (ptr)
-		*ptr = bo->kptr;
 	bo->map_count = 1;
+	bo->kptr = bo->kmap.virtual;
+
+out:
+	buf->vaddr = ttm_kmap_obj_virtual(&bo->kmap, &buf->is_iomem);
 	return 0;
 }
 
@@ -170,6 +168,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 	void *rptr;
 	int ret;
 	struct io_mapping *map;
+	struct drm_gem_membuf buf;
 
 	if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
 		map = qdev->vram_mapping;
@@ -187,11 +186,11 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 		return rptr;
 	}
 
-	ret = qxl_bo_kmap(bo, &rptr);
+	ret = qxl_bo_kmap(bo, &buf);
 	if (ret)
 		return NULL;
 
-	rptr += page_offset * PAGE_SIZE;
+	rptr = buf.vaddr + page_offset * PAGE_SIZE;
 	return rptr;
 }
 
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 21fa81048f4f..a0bbff992047 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -86,7 +86,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 			 bool kernel, bool pinned, u32 domain,
 			 struct qxl_surface *surf,
 			 struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct drm_gem_membuf *buf);
 extern void qxl_bo_kunmap(struct qxl_bo *bo);
 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..ab299d4803ec 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	return ERR_PTR(-ENOSYS);
 }
 
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
-	void *ptr;
 	int ret;
 
-	ret = qxl_bo_kmap(bo, &ptr);
+	ret = qxl_bo_kmap(bo, buf);
 	if (ret < 0)
-		return ERR_PTR(ret);
+		return ret;
 
-	return ptr;
+	return 0;
 }
 
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct drm_gem_membuf *buf)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..17ad923972c9 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -314,36 +314,35 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 			       struct drm_rect *rect)
 {
 	struct cirrus_device *cirrus = to_cirrus(fb->dev);
-	void *vmap;
+	struct drm_gem_membuf buf;
 	int idx, ret;
 
 	ret = -ENODEV;
 	if (!drm_dev_enter(&cirrus->dev, &idx))
 		goto out;
 
-	ret = -ENOMEM;
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (!vmap)
+	ret = drm_gem_shmem_vmap(fb->obj[0], &buf);
+	if (ret)
 		goto out_dev_exit;
 
 	if (cirrus->cpp == fb->format->cpp[0])
 		drm_fb_memcpy_dstclip(cirrus->vram,
-				      vmap, fb, rect);
+				      buf.vaddr, fb, rect);
 
 	else if (fb->format->cpp[0] == 4 && cirrus->cpp == 2)
 		drm_fb_xrgb8888_to_rgb565_dstclip(cirrus->vram,
 						  cirrus->pitch,
-						  vmap, fb, rect, false);
+						  buf.vaddr, fb, rect, false);
 
 	else if (fb->format->cpp[0] == 4 && cirrus->cpp == 3)
 		drm_fb_xrgb8888_to_rgb888_dstclip(cirrus->vram,
 						  cirrus->pitch,
-						  vmap, fb, rect);
+						  buf.vaddr, fb, rect);
 
 	else
 		WARN_ON_ONCE("cpp mismatch");
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &buf);
 	ret = 0;
 
 out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..52409e63733e 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,7 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 {
 	int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
 	struct drm_framebuffer *fb;
-	void *vaddr;
+	struct drm_gem_membuf buf;
 	u8 *src;
 
 	mutex_lock(&gm12u320->fb_update.lock);
@@ -262,9 +262,9 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 	y1 = gm12u320->fb_update.rect.y1;
 	y2 = gm12u320->fb_update.rect.y2;
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
-		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+	ret = drm_gem_shmem_vmap(fb->obj[0], &buf);
+	if (ret) {
+		GM12U320_ERR("failed to vmap fb: %d\n", ret);
 		goto put_fb;
 	}
 
@@ -277,7 +277,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 		}
 	}
 
-	src = vaddr + y1 * fb->pitches[0] + x1 * 4;
+	src = buf.vaddr + y1 * fb->pitches[0] + x1 * 4;
 
 	x1 += (GM12U320_REAL_WIDTH - GM12U320_USER_WIDTH) / 2;
 	x2 += (GM12U320_REAL_WIDTH - GM12U320_USER_WIDTH) / 2;
@@ -318,7 +318,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 			GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
 	}
 vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &buf);
 put_fb:
 	drm_framebuffer_put(fb);
 	gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..87a0467b8752 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,7 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	struct urb *urb;
 	struct drm_rect clip;
 	int log_bpp;
-	void *vaddr;
+	struct drm_gem_membuf buf;
 
 	ret = udl_log_cpp(fb->format->cpp[0]);
 	if (ret < 0)
@@ -296,8 +296,8 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 			return ret;
 	}
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
+	ret = drm_gem_shmem_vmap(fb->obj[0], &buf);
+	if (ret) {
 		DRM_ERROR("failed to vmap fb\n");
 		goto out_dma_buf_end_cpu_access;
 	}
@@ -312,7 +312,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 		const int byte_offset = line_offset + (clip.x1 << log_bpp);
 		const int dev_byte_offset = (fb->width * i + clip.x1) << log_bpp;
 		const int byte_width = (clip.x2 - clip.x1) << log_bpp;
-		ret = udl_render_hline(dev, log_bpp, &urb, (char *)vaddr,
+		ret = udl_render_hline(dev, log_bpp, &urb, buf.vaddr,
 				       &cmd, byte_offset, dev_byte_offset,
 				       byte_width);
 		if (ret)
@@ -333,7 +333,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	ret = 0;
 
 out_drm_gem_shmem_vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &buf);
 out_dma_buf_end_cpu_access:
 	if (import_attach) {
 		tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 4fcc0a542b8a..a35d4d6ddaf3 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -384,7 +384,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	u32 height = plane->state->crtc_h;
 	size_t data_size, mask_size;
 	u32 flags;
-	u8 *src;
+	struct drm_gem_membuf buf;
+	int ret;
 
 	/*
 	 * VirtualBox uses the host windowing system to draw the cursor so
@@ -397,8 +398,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 
 	vbox_crtc->cursor_enabled = true;
 
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
+	ret = drm_gem_vram_vmap(gbo, &buf);
+	if (ret) {
 		/*
 		 * BUG: we should have pinned the BO in prepare_fb().
 		 */
@@ -415,8 +416,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	mask_size = ((width + 7) / 8 * height + 3) & ~3;
 	data_size = width * height * 4 + mask_size;
 
-	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
-	drm_gem_vram_vunmap(gbo, src);
+	copy_cursor_image(buf.vaddr, vbox->cursor_data, width, height, mask_size);
+	drm_gem_vram_vunmap(gbo, &buf);
 
 	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
 		VBOX_MOUSE_POINTER_ALPHA;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 337a48321705..a2a291116859 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
 
 #include <drm/drm_vma_manager.h>
 
+struct drm_gem_membuf;
 struct drm_gem_object;
 
 /**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void *(*vmap)(struct drm_gem_object *obj);
+	int (*vmap)(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
 
 	/**
 	 * @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+	void (*vunmap)(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
 
 	/**
 	 * @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 2bfa2502607a..b8ceee4c765e 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,8 +103,8 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
+void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..650f8848ee6a 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_object *obj);
 void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
 
 int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
 
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index b34f1da89cc7..0872c223a786 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -101,8 +101,10 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
 s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
 int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
 int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo,
+		      struct drm_gem_membuf *buf);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo,
+			 struct drm_gem_membuf *buf);
 
 int drm_gem_vram_fill_create_dumb(struct drm_file *file,
 				  struct drm_device *dev,
-- 
2.28.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* [PATCH v2 4/4] drm/fb_helper: Access framebuffer as I/O memory, if required
  2020-08-06  8:52 [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory Thomas Zimmermann
                   ` (2 preceding siblings ...)
  2020-08-06  8:52 ` [PATCH v2 3/4] drm/gem: Use struct drm_gem_membuf in vmap op and convert GEM backends Thomas Zimmermann
@ 2020-08-06  8:52 ` Thomas Zimmermann
  2020-08-06 19:23   ` kernel test robot
  2020-08-14  6:51 ` [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory Thomas Zimmermann
  4 siblings, 1 reply; 13+ messages in thread
From: Thomas Zimmermann @ 2020-08-06  8:52 UTC (permalink / raw)
  To: airlied, daniel, kraxel, maarten.lankhorst, mripard, hdegoede,
	sean, sam, emil.l.velikov, lyude, noralf, zou_wei
  Cc: Thomas Zimmermann, dri-devel

At least sparc64 requires I/O-specific access to framebuffers. This
patch updates the fbdev console accordingly.

For drivers with direct access to the framebuffer memory, the callback
functions test for the type of memory and call the respective fb_sys_ or fb_cfb_
functions. For drivers that employ a shadow buffer, fbdev's blit function
maps the framebuffer as either I/O or system memory, and uses the correct
memcpy function.
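
The dispatch described above can be sketched as a small userspace simulation. This is only an illustration, not kernel code: struct membuf, blit_line() and the two copy helpers are hypothetical stand-ins for struct drm_gem_membuf, the fbdev blit path and memcpy_toio()/memcpy().

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for struct drm_gem_membuf: the mapping's
 * address plus a flag saying whether it points into I/O memory. */
struct membuf {
	char *vaddr;
	bool is_iomem;
};

/* Records which copy routine the dispatch picked; in the kernel these
 * would be memcpy_toio() and memcpy() respectively. */
static const char *last_copy_path;

static void copy_toio_sim(char *dst, const char *src, size_t len)
{
	last_copy_path = "memcpy_toio";
	memcpy(dst, src, len);	/* userspace stand-in for memcpy_toio() */
}

static void copy_sys_sim(char *dst, const char *src, size_t len)
{
	last_copy_path = "memcpy";
	memcpy(dst, src, len);
}

/* Mirrors the shape of the fbdev blit: test the mapping's memory type,
 * then route the copy through the matching routine. */
static void blit_line(struct membuf *fb, size_t offset,
		      const char *src, size_t len)
{
	if (fb->is_iomem)
		copy_toio_sim(fb->vaddr + offset, src, len);
	else
		copy_sys_sim(fb->vaddr + offset, src, len);
}
```

The point of the structure-plus-flag design is exactly this: one vmap call returns both pieces of information, so callers such as the fbdev blit can choose the correct access path without a separate vmap_iomem() callback.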

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/bochs/bochs_kms.c |   1 -
 drivers/gpu/drm/drm_fb_helper.c   | 240 +++++++++++++++++++++++++++---
 include/drm/drm_mode_config.h     |  12 --
 3 files changed, 221 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
index 13d0d04c4457..853081d186d5 100644
--- a/drivers/gpu/drm/bochs/bochs_kms.c
+++ b/drivers/gpu/drm/bochs/bochs_kms.c
@@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
 	bochs->dev->mode_config.preferred_depth = 24;
 	bochs->dev->mode_config.prefer_shadow = 0;
 	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
-	bochs->dev->mode_config.fbdev_use_iomem = true;
 	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
 
 	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index da24874247e7..a832672e83c5 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -394,18 +394,31 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->membuf.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
+	const struct drm_gem_membuf *buf;
 	unsigned int y;
 
-	for (y = clip->y1; y < clip->y2; y++) {
-		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
-			memcpy(dst, src, len);
-		else
+	buf = drm_client_buffer_vmap(fb_helper->buffer);
+	if (IS_ERR(buf))
+		return;
+
+	if (buf->is_iomem) {
+		void __iomem *dst =
+			(void __iomem *)buf->vaddr + offset;
+
+		for (y = clip->y1; y < clip->y2; y++) {
 			memcpy_toio(dst, src, len);
+			src += fb->pitches[0];
+			dst += fb->pitches[0];
+		}
+	} else {
+		void *dst = buf->vaddr + offset;
 
-		src += fb->pitches[0];
-		dst += fb->pitches[0];
+		for (y = clip->y1; y < clip->y2; y++) {
+			memcpy(dst, src, len);
+			src += fb->pitches[0];
+			dst += fb->pitches[0];
+		}
 	}
 }
 
@@ -416,7 +429,6 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 	struct drm_clip_rect *clip = &helper->dirty_clip;
 	struct drm_clip_rect clip_copy;
 	unsigned long flags;
-	const struct drm_gem_membuf *buf;
 
 	spin_lock_irqsave(&helper->dirty_lock, flags);
 	clip_copy = *clip;
@@ -428,12 +440,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 	if (clip_copy.x1 < clip_copy.x2 && clip_copy.y1 < clip_copy.y2) {
 
 		/* Generic fbdev uses a shadow buffer */
-		if (helper->buffer) {
-			buf = drm_client_buffer_vmap(helper->buffer);
-			if (IS_ERR(buf))
-				return;
+		if (helper->buffer)
 			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
-		}
+
 		if (helper->fb->funcs->dirty)
 			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
 						 &clip_copy, 1);
@@ -770,6 +779,134 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
 
+static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
+				      size_t count, loff_t *ppos)
+{
+	unsigned long p = *ppos;
+	u8 *dst;
+	u8 __iomem *src;
+	int c, cnt = 0, err = 0;
+	unsigned long total_size;
+	unsigned long alloc_size;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	total_size = info->screen_size;
+
+	if (total_size == 0)
+		total_size = info->fix.smem_len;
+
+	if (p >= total_size)
+		return 0;
+
+	if (count >= total_size)
+		count = total_size;
+
+	if (count + p > total_size)
+		count = total_size - p;
+
+	src = (u8 __iomem *)(info->screen_base + p);
+
+	alloc_size = min(count, PAGE_SIZE);
+
+	dst = kmalloc(alloc_size, GFP_KERNEL);
+	if (!dst)
+		return -ENOMEM;
+
+	while (count) {
+		c = min(count, alloc_size);
+
+		memcpy_fromio(dst, src, c);
+		if (copy_to_user(buf, dst, c)) {
+			err = -EFAULT;
+			break;
+		}
+
+		src += c;
+		*ppos += c;
+		buf += c;
+		cnt += c;
+		count -= c;
+	}
+
+	kfree(dst);
+
+	if (err)
+		return err;
+
+	return cnt;
+}
+
+static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
+				       size_t count, loff_t *ppos)
+{
+	unsigned long p = *ppos;
+	u8 *src;
+	u8 __iomem *dst;
+	int c, cnt = 0, err = 0;
+	unsigned long total_size;
+	unsigned long alloc_size;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	total_size = info->screen_size;
+
+	if (total_size == 0)
+		total_size = info->fix.smem_len;
+
+	if (p > total_size)
+		return -EFBIG;
+
+	if (count > total_size) {
+		err = -EFBIG;
+		count = total_size;
+	}
+
+	if (count + p > total_size) {
+		/*
+		 * The framebuffer is too small. We do the
+		 * copy operation, but return an error code
+		 * afterwards. Taken from fbdev.
+		 */
+		if (!err)
+			err = -ENOSPC;
+		count = total_size - p;
+	}
+
+	alloc_size = min(count, PAGE_SIZE);
+
+	src = kmalloc(alloc_size, GFP_KERNEL);
+	if (!src)
+		return -ENOMEM;
+
+	dst = (u8 __iomem *)(info->screen_base + p);
+
+	while (count) {
+		c = min(count, alloc_size);
+
+		if (copy_from_user(src, buf, c)) {
+			err = -EFAULT;
+			break;
+		}
+		memcpy_toio(dst, src, c);
+
+		dst += c;
+		*ppos += c;
+		buf += c;
+		cnt += c;
+		count -= c;
+	}
+
+	kfree(src);
+
+	if (err)
+		return err;
+
+	return cnt;
+}
+
 /**
  * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
  * @info: fbdev registered by the helper
@@ -2042,6 +2179,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 		return -ENODEV;
 }
 
+static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
+				 size_t count, loff_t *ppos)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (buffer->membuf.vaddr_iomem)
+		return drm_fb_helper_cfb_read(info, buf, count, ppos);
+	else
+		return drm_fb_helper_sys_read(info, buf, count, ppos);
+}
+
+static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
+				  size_t count, loff_t *ppos)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (buffer->membuf.vaddr_iomem)
+		return drm_fb_helper_cfb_write(info, buf, count, ppos);
+	else
+		return drm_fb_helper_sys_write(info, buf, count, ppos);
+}
+
+static void drm_fbdev_fb_fillrect(struct fb_info *info,
+				  const struct fb_fillrect *rect)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (buffer->membuf.vaddr_iomem)
+		drm_fb_helper_cfb_fillrect(info, rect);
+	else
+		drm_fb_helper_sys_fillrect(info, rect);
+}
+
+static void drm_fbdev_fb_copyarea(struct fb_info *info,
+				  const struct fb_copyarea *area)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (buffer->membuf.vaddr_iomem)
+		drm_fb_helper_cfb_copyarea(info, area);
+	else
+		drm_fb_helper_sys_copyarea(info, area);
+}
+
+static void drm_fbdev_fb_imageblit(struct fb_info *info,
+				   const struct fb_image *image)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (buffer->membuf.vaddr_iomem)
+		drm_fb_helper_cfb_imageblit(info, image);
+	else
+		drm_fb_helper_sys_imageblit(info, image);
+}
+
 static const struct fb_ops drm_fbdev_fb_ops = {
 	.owner		= THIS_MODULE,
 	DRM_FB_HELPER_DEFAULT_OPS,
@@ -2049,11 +2246,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
 	.fb_release	= drm_fbdev_fb_release,
 	.fb_destroy	= drm_fbdev_fb_destroy,
 	.fb_mmap	= drm_fbdev_fb_mmap,
-	.fb_read	= drm_fb_helper_sys_read,
-	.fb_write	= drm_fb_helper_sys_write,
-	.fb_fillrect	= drm_fb_helper_sys_fillrect,
-	.fb_copyarea	= drm_fb_helper_sys_copyarea,
-	.fb_imageblit	= drm_fb_helper_sys_imageblit,
+	.fb_read	= drm_fbdev_fb_read,
+	.fb_write	= drm_fbdev_fb_write,
+	.fb_fillrect	= drm_fbdev_fb_fillrect,
+	.fb_copyarea	= drm_fbdev_fb_copyarea,
+	.fb_imageblit	= drm_fbdev_fb_imageblit,
 };
 
 static struct fb_deferred_io drm_fbdev_defio = {
@@ -2077,6 +2274,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 	struct fb_info *fbi;
 	u32 format;
 	const struct drm_gem_membuf *membuf;
+	void __iomem *vaddr_iomem;
+	void *vaddr;
 
 	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
 		    sizes->surface_width, sizes->surface_height,
@@ -2115,8 +2314,11 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 		membuf = drm_client_buffer_vmap(fb_helper->buffer);
 		if (IS_ERR(membuf))
 			return PTR_ERR(membuf);
+		if (membuf->is_iomem)
+			fbi->screen_buffer = vaddr_iomem;
+		else
+			fbi->screen_buffer = vaddr;
 
-		fbi->screen_buffer = membuf->vaddr;
 		/* Shamelessly leak the physical address to user-space */
 #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
 		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index c2d3d71d133c..ffb9852a0638 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -865,18 +865,6 @@ struct drm_mode_config {
 	 */
 	bool prefer_shadow_fbdev;
 
-	/**
-	 * @fbdev_use_iomem:
-	 *
-	 * Set to true if framebuffer reside in iomem.
-	 * When set to true memcpy_toio() is used when copying the framebuffer in
-	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
-	 *
-	 * FIXME: This should be replaced with a per-mapping is_iomem
-	 * flag (like ttm does), and then used everywhere in fbdev code.
-	 */
-	bool fbdev_use_iomem;
-
 	/**
 	 * @quirk_addfb_prefer_xbgr_30bpp:
 	 *
-- 
2.28.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 3/4] drm/gem: Use struct drm_gem_membuf in vmap op and convert GEM backends
  2020-08-06  8:52 ` [PATCH v2 3/4] drm/gem: Use struct drm_gem_membuf in vmap op and convert GEM backends Thomas Zimmermann
@ 2020-08-06 10:50   ` kernel test robot
  2020-08-06 11:36   ` kernel test robot
  1 sibling, 0 replies; 13+ messages in thread
From: kernel test robot @ 2020-08-06 10:50 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 9751 bytes --]

Hi Thomas,

I love your patch! Yet something to improve:

[auto build test ERROR on drm-tip/drm-tip]
[also build test ERROR on linus/master next-20200806]
[cannot apply to drm-intel/for-linux-next tegra-drm/drm/tegra/for-next drm/drm-next drm-exynos/exynos-drm-next v5.8]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thomas-Zimmermann/Support-GEM-object-mappings-from-I-O-memory/20200806-165413
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # save the attached .config to linux build tree
        make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   drivers/gpu/drm/panfrost/panfrost_perfcnt.c: In function 'panfrost_perfcnt_enable_locked':
>> drivers/gpu/drm/panfrost/panfrost_perfcnt.c:104:17: error: too few arguments to function 'drm_gem_shmem_vmap'
     104 |  perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
         |                 ^~~~~~~~~~~~~~~~~~
   In file included from drivers/gpu/drm/panfrost/panfrost_perfcnt.c:5:
   include/drm/drm_gem_shmem_helper.h:116:5: note: declared here
     116 | int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
         |     ^~~~~~~~~~~~~~~~~~

vim +/drm_gem_shmem_vmap +104 drivers/gpu/drm/panfrost/panfrost_perfcnt.c

7786fd1087774c Boris Brezillon 2019-06-18   68  
7786fd1087774c Boris Brezillon 2019-06-18   69  static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
0a5239985a3bc0 Boris Brezillon 2019-11-29   70  					  struct drm_file *file_priv,
7786fd1087774c Boris Brezillon 2019-06-18   71  					  unsigned int counterset)
7786fd1087774c Boris Brezillon 2019-06-18   72  {
0a5239985a3bc0 Boris Brezillon 2019-11-29   73  	struct panfrost_file_priv *user = file_priv->driver_priv;
7786fd1087774c Boris Brezillon 2019-06-18   74  	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
7786fd1087774c Boris Brezillon 2019-06-18   75  	struct drm_gem_shmem_object *bo;
dde2bb2da01e96 Boris Brezillon 2020-02-06   76  	u32 cfg, as;
7786fd1087774c Boris Brezillon 2019-06-18   77  	int ret;
7786fd1087774c Boris Brezillon 2019-06-18   78  
7786fd1087774c Boris Brezillon 2019-06-18   79  	if (user == perfcnt->user)
7786fd1087774c Boris Brezillon 2019-06-18   80  		return 0;
7786fd1087774c Boris Brezillon 2019-06-18   81  	else if (perfcnt->user)
7786fd1087774c Boris Brezillon 2019-06-18   82  		return -EBUSY;
7786fd1087774c Boris Brezillon 2019-06-18   83  
7786fd1087774c Boris Brezillon 2019-06-18   84  	ret = pm_runtime_get_sync(pfdev->dev);
7786fd1087774c Boris Brezillon 2019-06-18   85  	if (ret < 0)
7786fd1087774c Boris Brezillon 2019-06-18   86  		return ret;
7786fd1087774c Boris Brezillon 2019-06-18   87  
7786fd1087774c Boris Brezillon 2019-06-18   88  	bo = drm_gem_shmem_create(pfdev->ddev, perfcnt->bosize);
7786fd1087774c Boris Brezillon 2019-06-18   89  	if (IS_ERR(bo))
7786fd1087774c Boris Brezillon 2019-06-18   90  		return PTR_ERR(bo);
7786fd1087774c Boris Brezillon 2019-06-18   91  
7786fd1087774c Boris Brezillon 2019-06-18   92  	/* Map the perfcnt buf in the address space attached to file_priv. */
bdefca2d8dc0f8 Boris Brezillon 2020-01-15   93  	ret = panfrost_gem_open(&bo->base, file_priv);
7786fd1087774c Boris Brezillon 2019-06-18   94  	if (ret)
7786fd1087774c Boris Brezillon 2019-06-18   95  		goto err_put_bo;
7786fd1087774c Boris Brezillon 2019-06-18   96  
bdefca2d8dc0f8 Boris Brezillon 2020-01-15   97  	perfcnt->mapping = panfrost_gem_mapping_get(to_panfrost_bo(&bo->base),
bdefca2d8dc0f8 Boris Brezillon 2020-01-15   98  						    user);
bdefca2d8dc0f8 Boris Brezillon 2020-01-15   99  	if (!perfcnt->mapping) {
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  100  		ret = -EINVAL;
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  101  		goto err_close_bo;
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  102  	}
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  103  
7786fd1087774c Boris Brezillon 2019-06-18 @104  	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
7786fd1087774c Boris Brezillon 2019-06-18  105  	if (IS_ERR(perfcnt->buf)) {
7786fd1087774c Boris Brezillon 2019-06-18  106  		ret = PTR_ERR(perfcnt->buf);
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  107  		goto err_put_mapping;
7786fd1087774c Boris Brezillon 2019-06-18  108  	}
7786fd1087774c Boris Brezillon 2019-06-18  109  
7786fd1087774c Boris Brezillon 2019-06-18  110  	/*
7786fd1087774c Boris Brezillon 2019-06-18  111  	 * Invalidate the cache and clear the counters to start from a fresh
7786fd1087774c Boris Brezillon 2019-06-18  112  	 * state.
7786fd1087774c Boris Brezillon 2019-06-18  113  	 */
7786fd1087774c Boris Brezillon 2019-06-18  114  	reinit_completion(&pfdev->perfcnt->dump_comp);
7786fd1087774c Boris Brezillon 2019-06-18  115  	gpu_write(pfdev, GPU_INT_CLEAR,
7786fd1087774c Boris Brezillon 2019-06-18  116  		  GPU_IRQ_CLEAN_CACHES_COMPLETED |
7786fd1087774c Boris Brezillon 2019-06-18  117  		  GPU_IRQ_PERFCNT_SAMPLE_COMPLETED);
7786fd1087774c Boris Brezillon 2019-06-18  118  	gpu_write(pfdev, GPU_CMD, GPU_CMD_PERFCNT_CLEAR);
7786fd1087774c Boris Brezillon 2019-06-18  119  	gpu_write(pfdev, GPU_CMD, GPU_CMD_CLEAN_INV_CACHES);
7786fd1087774c Boris Brezillon 2019-06-18  120  	ret = wait_for_completion_timeout(&pfdev->perfcnt->dump_comp,
7786fd1087774c Boris Brezillon 2019-06-18  121  					  msecs_to_jiffies(1000));
7786fd1087774c Boris Brezillon 2019-06-18  122  	if (!ret) {
7786fd1087774c Boris Brezillon 2019-06-18  123  		ret = -ETIMEDOUT;
7786fd1087774c Boris Brezillon 2019-06-18  124  		goto err_vunmap;
7786fd1087774c Boris Brezillon 2019-06-18  125  	}
7786fd1087774c Boris Brezillon 2019-06-18  126  
7786fd1087774c Boris Brezillon 2019-06-18  127  	perfcnt->user = user;
7786fd1087774c Boris Brezillon 2019-06-18  128  
dde2bb2da01e96 Boris Brezillon 2020-02-06  129  	as = panfrost_mmu_as_get(pfdev, perfcnt->mapping->mmu);
dde2bb2da01e96 Boris Brezillon 2020-02-06  130  	cfg = GPU_PERFCNT_CFG_AS(as) |
7786fd1087774c Boris Brezillon 2019-06-18  131  	      GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_MANUAL);
7786fd1087774c Boris Brezillon 2019-06-18  132  
7786fd1087774c Boris Brezillon 2019-06-18  133  	/*
7786fd1087774c Boris Brezillon 2019-06-18  134  	 * Bifrost GPUs have 2 set of counters, but we're only interested by
7786fd1087774c Boris Brezillon 2019-06-18  135  	 * the first one for now.
7786fd1087774c Boris Brezillon 2019-06-18  136  	 */
7786fd1087774c Boris Brezillon 2019-06-18  137  	if (panfrost_model_is_bifrost(pfdev))
7786fd1087774c Boris Brezillon 2019-06-18  138  		cfg |= GPU_PERFCNT_CFG_SETSEL(counterset);
7786fd1087774c Boris Brezillon 2019-06-18  139  
7786fd1087774c Boris Brezillon 2019-06-18  140  	gpu_write(pfdev, GPU_PRFCNT_JM_EN, 0xffffffff);
7786fd1087774c Boris Brezillon 2019-06-18  141  	gpu_write(pfdev, GPU_PRFCNT_SHADER_EN, 0xffffffff);
7786fd1087774c Boris Brezillon 2019-06-18  142  	gpu_write(pfdev, GPU_PRFCNT_MMU_L2_EN, 0xffffffff);
7786fd1087774c Boris Brezillon 2019-06-18  143  
7786fd1087774c Boris Brezillon 2019-06-18  144  	/*
7786fd1087774c Boris Brezillon 2019-06-18  145  	 * Due to PRLAM-8186 we need to disable the Tiler before we enable HW
7786fd1087774c Boris Brezillon 2019-06-18  146  	 * counters.
7786fd1087774c Boris Brezillon 2019-06-18  147  	 */
7786fd1087774c Boris Brezillon 2019-06-18  148  	if (panfrost_has_hw_issue(pfdev, HW_ISSUE_8186))
7786fd1087774c Boris Brezillon 2019-06-18  149  		gpu_write(pfdev, GPU_PRFCNT_TILER_EN, 0);
7786fd1087774c Boris Brezillon 2019-06-18  150  	else
7786fd1087774c Boris Brezillon 2019-06-18  151  		gpu_write(pfdev, GPU_PRFCNT_TILER_EN, 0xffffffff);
7786fd1087774c Boris Brezillon 2019-06-18  152  
7786fd1087774c Boris Brezillon 2019-06-18  153  	gpu_write(pfdev, GPU_PERFCNT_CFG, cfg);
7786fd1087774c Boris Brezillon 2019-06-18  154  
7786fd1087774c Boris Brezillon 2019-06-18  155  	if (panfrost_has_hw_issue(pfdev, HW_ISSUE_8186))
7786fd1087774c Boris Brezillon 2019-06-18  156  		gpu_write(pfdev, GPU_PRFCNT_TILER_EN, 0xffffffff);
7786fd1087774c Boris Brezillon 2019-06-18  157  
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  158  	/* The BO ref is retained by the mapping. */
496d0cc63f9c07 Emil Velikov    2020-05-15  159  	drm_gem_object_put(&bo->base);
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  160  
7786fd1087774c Boris Brezillon 2019-06-18  161  	return 0;
7786fd1087774c Boris Brezillon 2019-06-18  162  
7786fd1087774c Boris Brezillon 2019-06-18  163  err_vunmap:
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  164  	drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  165  err_put_mapping:
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  166  	panfrost_gem_mapping_put(perfcnt->mapping);
0a5239985a3bc0 Boris Brezillon 2019-11-29  167  err_close_bo:
bdefca2d8dc0f8 Boris Brezillon 2020-01-15  168  	panfrost_gem_close(&bo->base, file_priv);
7786fd1087774c Boris Brezillon 2019-06-18  169  err_put_bo:
496d0cc63f9c07 Emil Velikov    2020-05-15  170  	drm_gem_object_put(&bo->base);
7786fd1087774c Boris Brezillon 2019-06-18  171  	return ret;
7786fd1087774c Boris Brezillon 2019-06-18  172  }
7786fd1087774c Boris Brezillon 2019-06-18  173  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 75561 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 3/4] drm/gem: Use struct drm_gem_membuf in vmap op and convert GEM backends
  2020-08-06  8:52 ` [PATCH v2 3/4] drm/gem: Use struct drm_gem_membuf in vmap op and convert GEM backends Thomas Zimmermann
  2020-08-06 10:50   ` kernel test robot
@ 2020-08-06 11:36   ` kernel test robot
  1 sibling, 0 replies; 13+ messages in thread
From: kernel test robot @ 2020-08-06 11:36 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 7136 bytes --]

Hi Thomas,

I love your patch! Yet something to improve:

[auto build test ERROR on drm-tip/drm-tip]
[also build test ERROR on linus/master next-20200806]
[cannot apply to drm-intel/for-linux-next tegra-drm/drm/tegra/for-next drm/drm-next drm-exynos/exynos-drm-next v5.8]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thomas-Zimmermann/Support-GEM-object-mappings-from-I-O-memory/20200806-165413
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: xtensa-allyesconfig (attached as .config)
compiler: xtensa-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=xtensa 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All error/warnings (new ones prefixed by >>):

   drivers/gpu/drm/vc4/vc4_bo.c: In function 'vc4_prime_vmap':
>> drivers/gpu/drm/vc4/vc4_bo.c:779:9: error: too few arguments to function 'drm_gem_cma_prime_vmap'
     779 |  return drm_gem_cma_prime_vmap(obj);
         |         ^~~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/gpu/drm/vc4/vc4_drv.h:16,
                    from drivers/gpu/drm/vc4/vc4_bo.c:21:
   include/drm/drm_gem_cma_helper.h:106:5: note: declared here
     106 | int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
         |     ^~~~~~~~~~~~~~~~~~~~~~
>> drivers/gpu/drm/vc4/vc4_bo.c:780:1: warning: control reaches end of non-void function [-Wreturn-type]
     780 | }
         | ^
--
>> drivers/gpu/drm/vc4/vc4_drv.c:207:22: error: initialization of 'void (*)(struct drm_gem_object *, void *)' from incompatible pointer type 'void (*)(struct drm_gem_object *, struct drm_gem_membuf *)' [-Werror=incompatible-pointer-types]
     207 |  .gem_prime_vunmap = drm_gem_cma_prime_vunmap,
         |                      ^~~~~~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/vc4/vc4_drv.c:207:22: note: (near initialization for 'vc4_drm_driver.gem_prime_vunmap')
   cc1: some warnings being treated as errors
--
>> drivers/gpu/drm/pl111/pl111_drv.c:235:20: error: initialization of 'void * (*)(struct drm_gem_object *)' from incompatible pointer type 'int (*)(struct drm_gem_object *, struct drm_gem_membuf *)' [-Werror=incompatible-pointer-types]
     235 |  .gem_prime_vmap = drm_gem_cma_prime_vmap,
         |                    ^~~~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/pl111/pl111_drv.c:235:20: note: (near initialization for 'pl111_drm_driver.gem_prime_vmap')
   cc1: some warnings being treated as errors
--
   drivers/gpu/drm/lima/lima_gem.c: In function 'lima_gem_vmap':
>> drivers/gpu/drm/lima/lima_gem.c:187:9: error: too few arguments to function 'drm_gem_shmem_vmap'
     187 |  return drm_gem_shmem_vmap(obj);
         |         ^~~~~~~~~~~~~~~~~~
   In file included from drivers/gpu/drm/lima/lima_gem.h:7,
                    from drivers/gpu/drm/lima/lima_gem.c:17:
   include/drm/drm_gem_shmem_helper.h:116:5: note: declared here
     116 | int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
         |     ^~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/lima/lima_gem.c: At top level:
>> drivers/gpu/drm/lima/lima_gem.c:208:10: error: initialization of 'int (*)(struct drm_gem_object *, struct drm_gem_membuf *)' from incompatible pointer type 'void * (*)(struct drm_gem_object *)' [-Werror=incompatible-pointer-types]
     208 |  .vmap = lima_gem_vmap,
         |          ^~~~~~~~~~~~~
   drivers/gpu/drm/lima/lima_gem.c:208:10: note: (near initialization for 'lima_gem_funcs.vmap')
   drivers/gpu/drm/lima/lima_gem.c: In function 'lima_gem_vmap':
>> drivers/gpu/drm/lima/lima_gem.c:188:1: warning: control reaches end of non-void function [-Wreturn-type]
     188 | }
         | ^
   cc1: some warnings being treated as errors
--
   drivers/gpu/drm/lima/lima_sched.c: In function 'lima_sched_run_job':
   drivers/gpu/drm/lima/lima_sched.c:226:20: warning: variable 'ret' set but not used [-Wunused-but-set-variable]
     226 |  struct dma_fence *ret;
         |                    ^~~
   drivers/gpu/drm/lima/lima_sched.c: In function 'lima_sched_build_error_task_list':
>> drivers/gpu/drm/lima/lima_sched.c:391:11: error: too few arguments to function 'drm_gem_shmem_vmap'
     391 |    data = drm_gem_shmem_vmap(&bo->base.base);
         |           ^~~~~~~~~~~~~~~~~~
   In file included from drivers/gpu/drm/lima/lima_gem.h:7,
                    from drivers/gpu/drm/lima/lima_sched.c:15:
   include/drm/drm_gem_shmem_helper.h:116:5: note: declared here
     116 | int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
         |     ^~~~~~~~~~~~~~~~~~
--
>> drivers/gpu/drm/xlnx/zynqmp_dpsub.c:89:22: error: initialization of 'void * (*)(struct drm_gem_object *)' from incompatible pointer type 'int (*)(struct drm_gem_object *, struct drm_gem_membuf *)' [-Werror=incompatible-pointer-types]
      89 |  .gem_prime_vmap   = drm_gem_cma_prime_vmap,
         |                      ^~~~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/xlnx/zynqmp_dpsub.c:89:22: note: (near initialization for 'zynqmp_dpsub_drm_driver.gem_prime_vmap')
>> drivers/gpu/drm/xlnx/zynqmp_dpsub.c:90:23: error: initialization of 'void (*)(struct drm_gem_object *, void *)' from incompatible pointer type 'void (*)(struct drm_gem_object *, struct drm_gem_membuf *)' [-Werror=incompatible-pointer-types]
      90 |  .gem_prime_vunmap  = drm_gem_cma_prime_vunmap,
         |                       ^~~~~~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/xlnx/zynqmp_dpsub.c:90:23: note: (near initialization for 'zynqmp_dpsub_drm_driver.gem_prime_vunmap')
   cc1: some warnings being treated as errors

vim +/drm_gem_cma_prime_vmap +779 drivers/gpu/drm/vc4/vc4_bo.c

463873d5701427 Eric Anholt 2015-11-30  769  
463873d5701427 Eric Anholt 2015-11-30  770  void *vc4_prime_vmap(struct drm_gem_object *obj)
463873d5701427 Eric Anholt 2015-11-30  771  {
463873d5701427 Eric Anholt 2015-11-30  772  	struct vc4_bo *bo = to_vc4_bo(obj);
463873d5701427 Eric Anholt 2015-11-30  773  
463873d5701427 Eric Anholt 2015-11-30  774  	if (bo->validated_shader) {
fb95992af1d779 Eric Anholt 2017-07-25  775  		DRM_DEBUG("mmaping of shader BOs not allowed.\n");
463873d5701427 Eric Anholt 2015-11-30  776  		return ERR_PTR(-EINVAL);
463873d5701427 Eric Anholt 2015-11-30  777  	}
463873d5701427 Eric Anholt 2015-11-30  778  
463873d5701427 Eric Anholt 2015-11-30 @779  	return drm_gem_cma_prime_vmap(obj);
463873d5701427 Eric Anholt 2015-11-30 @780  }
463873d5701427 Eric Anholt 2015-11-30  781  



[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 64567 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 4/4] drm/fb_helper: Access framebuffer as I/O memory, if required
  2020-08-06  8:52 ` [PATCH v2 4/4] drm/fb_helper: Access framebuffer as I/O memory, if required Thomas Zimmermann
@ 2020-08-06 19:23   ` kernel test robot
  0 siblings, 0 replies; 13+ messages in thread
From: kernel test robot @ 2020-08-06 19:23 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 4354 bytes --]

Hi Thomas,

I love your patch! Perhaps something to improve:

[auto build test WARNING on drm-tip/drm-tip]
[also build test WARNING on linus/master next-20200806]
[cannot apply to drm-intel/for-linux-next tegra-drm/drm/tegra/for-next drm/drm-next drm-exynos/exynos-drm-next v5.8]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thomas-Zimmermann/Support-GEM-object-mappings-from-I-O-memory/20200806-165413
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: x86_64-randconfig-s021-20200806 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.2-117-g8c7aee71-dirty
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)

>> drivers/gpu/drm/drm_fb_helper.c:2318:44: sparse: sparse: incorrect type in assignment (different address spaces) @@     expected char *screen_buffer @@     got void [noderef] __iomem *vaddr_iomem @@
>> drivers/gpu/drm/drm_fb_helper.c:2318:44: sparse:     expected char *screen_buffer
>> drivers/gpu/drm/drm_fb_helper.c:2318:44: sparse:     got void [noderef] __iomem *vaddr_iomem

vim +2318 drivers/gpu/drm/drm_fb_helper.c

  2260	
  2261	/*
  2262	 * This function uses the client API to create a framebuffer backed by a dumb buffer.
  2263	 *
  2264	 * The _sys_ versions are used for &fb_ops.fb_read, fb_write, fb_fillrect,
  2265	 * fb_copyarea, fb_imageblit.
  2266	 */
  2267	static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
  2268					       struct drm_fb_helper_surface_size *sizes)
  2269	{
  2270		struct drm_client_dev *client = &fb_helper->client;
  2271		struct drm_device *dev = fb_helper->dev;
  2272		struct drm_client_buffer *buffer;
  2273		struct drm_framebuffer *fb;
  2274		struct fb_info *fbi;
  2275		u32 format;
  2276		const struct drm_gem_membuf *membuf;
  2277		void __iomem *vaddr_iomem;
  2278		void *vaddr;
  2279	
  2280		drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
  2281			    sizes->surface_width, sizes->surface_height,
  2282			    sizes->surface_bpp);
  2283	
  2284		format = drm_mode_legacy_fb_format(sizes->surface_bpp, sizes->surface_depth);
  2285		buffer = drm_client_framebuffer_create(client, sizes->surface_width,
  2286						       sizes->surface_height, format);
  2287		if (IS_ERR(buffer))
  2288			return PTR_ERR(buffer);
  2289	
  2290		fb_helper->buffer = buffer;
  2291		fb_helper->fb = buffer->fb;
  2292		fb = buffer->fb;
  2293	
  2294		fbi = drm_fb_helper_alloc_fbi(fb_helper);
  2295		if (IS_ERR(fbi))
  2296			return PTR_ERR(fbi);
  2297	
  2298		fbi->fbops = &drm_fbdev_fb_ops;
  2299		fbi->screen_size = fb->height * fb->pitches[0];
  2300		fbi->fix.smem_len = fbi->screen_size;
  2301	
  2302		drm_fb_helper_fill_info(fbi, fb_helper, sizes);
  2303	
  2304		if (drm_fbdev_use_shadow_fb(fb_helper)) {
  2305			fbi->screen_buffer = vzalloc(fbi->screen_size);
  2306			if (!fbi->screen_buffer)
  2307				return -ENOMEM;
  2308	
  2309			fbi->fbdefio = &drm_fbdev_defio;
  2310	
  2311			fb_deferred_io_init(fbi);
  2312		} else {
  2313			/* buffer is mapped for HW framebuffer */
  2314			membuf = drm_client_buffer_vmap(fb_helper->buffer);
  2315			if (IS_ERR(membuf))
  2316				return PTR_ERR(membuf);
  2317			if (membuf->is_iomem)
> 2318				fbi->screen_buffer = vaddr_iomem;
  2319			else
  2320				fbi->screen_buffer = vaddr;
  2321	
  2322			/* Shamelessly leak the physical address to user-space */
  2323	#if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
  2324			if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
  2325				fbi->fix.smem_start =
  2326					page_to_phys(virt_to_page(fbi->screen_buffer));
  2327	#endif
  2328		}
  2329	
  2330		return 0;
  2331	}
  2332	


[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 42966 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 2/4] drm/gem: Update client API to use struct drm_gem_membuf
  2020-08-06  8:52 ` [PATCH v2 2/4] drm/gem: Update client API to use struct drm_gem_membuf Thomas Zimmermann
@ 2020-08-13 10:26   ` Daniel Vetter
  2020-08-13 10:46     ` Thomas Zimmermann
  2020-08-14  6:47     ` Thomas Zimmermann
  0 siblings, 2 replies; 13+ messages in thread
From: Daniel Vetter @ 2020-08-13 10:26 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: emil.l.velikov, sam, dri-devel, hdegoede, kraxel, airlied, zou_wei, sean

On Thu, Aug 06, 2020 at 10:52:37AM +0200, Thomas Zimmermann wrote:
> GEM's vmap interface now wraps memory pointers in struct drm_gem_membuf.
> The structure represents a pointer into the framebuffer, which is either
> in I/O memory or in system memory. The structure contains a flag that
> distinguishes these cases.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/drm_client.c    | 25 ++++++++++++-------------
>  drivers/gpu/drm/drm_fb_helper.c | 18 +++++++++---------
>  drivers/gpu/drm/drm_gem.c       | 19 +++++++++++--------
>  drivers/gpu/drm/drm_internal.h  |  5 +++--
>  drivers/gpu/drm/drm_prime.c     | 16 ++++++++++------
>  include/drm/drm_client.h        |  7 ++++---
>  include/drm/drm_device.h        | 26 ++++++++++++++++++++++++++
>  7 files changed, 75 insertions(+), 41 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> index 495f47d23d87..0359b82928c1 100644
> --- a/drivers/gpu/drm/drm_client.c
> +++ b/drivers/gpu/drm/drm_client.c
> @@ -234,7 +234,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
>  {
>  	struct drm_device *dev = buffer->client->dev;
>  
> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
> +	drm_gem_vunmap(buffer->gem, &buffer->membuf);
>  
>  	if (buffer->gem)
>  		drm_gem_object_put(buffer->gem);
> @@ -302,12 +302,13 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
>   * Returns:
>   *	The mapped memory's address
>   */
> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> +const struct drm_gem_membuf *
> +drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>  {
> -	void *vaddr;
> +	int ret;
>  
> -	if (buffer->vaddr)
> -		return buffer->vaddr;
> +	if (buffer->membuf.vaddr)
> +		return &buffer->membuf;
>  
>  	/*
>  	 * FIXME: The dependency on GEM here isn't required, we could
> @@ -317,13 +318,11 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>  	 * fd_install step out of the driver backend hooks, to make that
>  	 * final step optional for internal users.
>  	 */
> -	vaddr = drm_gem_vmap(buffer->gem);
> -	if (IS_ERR(vaddr))
> -		return vaddr;
> -
> -	buffer->vaddr = vaddr;
> +	ret = drm_gem_vmap(buffer->gem, &buffer->membuf);
> +	if (ret)
> +		return ERR_PTR(ret);
>  
> -	return vaddr;
> +	return &buffer->membuf;
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vmap);
>  
> @@ -337,8 +336,8 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
>   */
>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
>  {
> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
> -	buffer->vaddr = NULL;
> +	drm_gem_vunmap(buffer->gem, &buffer->membuf);
> +	buffer->membuf.vaddr = NULL;
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
>  
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 8697554ccd41..da24874247e7 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -394,7 +394,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
>  	unsigned int cpp = fb->format->cpp[0];
>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> -	void *dst = fb_helper->buffer->vaddr + offset;
> +	void *dst = fb_helper->buffer->membuf.vaddr + offset;
>  	size_t len = (clip->x2 - clip->x1) * cpp;
>  	unsigned int y;
>  
> @@ -416,7 +416,7 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  	struct drm_clip_rect *clip = &helper->dirty_clip;
>  	struct drm_clip_rect clip_copy;
>  	unsigned long flags;
> -	void *vaddr;
> +	const struct drm_gem_membuf *buf;
>  
>  	spin_lock_irqsave(&helper->dirty_lock, flags);
>  	clip_copy = *clip;
> @@ -429,8 +429,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  
>  		/* Generic fbdev uses a shadow buffer */
>  		if (helper->buffer) {
> -			vaddr = drm_client_buffer_vmap(helper->buffer);
> -			if (IS_ERR(vaddr))
> +			buf = drm_client_buffer_vmap(helper->buffer);
> +			if (IS_ERR(buf))
>  				return;
>  			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
>  		}
> @@ -2076,7 +2076,7 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>  	struct drm_framebuffer *fb;
>  	struct fb_info *fbi;
>  	u32 format;
> -	void *vaddr;
> +	const struct drm_gem_membuf *membuf;
>  
>  	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
>  		    sizes->surface_width, sizes->surface_height,
> @@ -2112,11 +2112,11 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>  		fb_deferred_io_init(fbi);
>  	} else {
>  		/* buffer is mapped for HW framebuffer */
> -		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
> -		if (IS_ERR(vaddr))
> -			return PTR_ERR(vaddr);
> +		membuf = drm_client_buffer_vmap(fb_helper->buffer);
> +		if (IS_ERR(membuf))
> +			return PTR_ERR(membuf);
>  
> -		fbi->screen_buffer = vaddr;
> +		fbi->screen_buffer = membuf->vaddr;
>  		/* Shamelessly leak the physical address to user-space */
>  #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
>  		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 19d73868490e..36ded6a56fb2 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1220,7 +1220,7 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>  		obj->dev->driver->gem_prime_unpin(obj);
>  }
>  
> -void *drm_gem_vmap(struct drm_gem_object *obj)
> +int drm_gem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
>  {
>  	void *vaddr;
>  
> @@ -1229,23 +1229,26 @@ void *drm_gem_vmap(struct drm_gem_object *obj)
>  	else if (obj->dev->driver->gem_prime_vmap)
>  		vaddr = obj->dev->driver->gem_prime_vmap(obj);
>  	else
> -		vaddr = ERR_PTR(-EOPNOTSUPP);
> +		return -EOPNOTSUPP;
>  
>  	if (!vaddr)
> -		vaddr = ERR_PTR(-ENOMEM);
> +		return -ENOMEM;
> +
> +	buf->vaddr = vaddr;
> +	buf->is_iomem = false;
>  
> -	return vaddr;
> +	return 0;
>  }
>  
> -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_vunmap(struct drm_gem_object *obj, const struct drm_gem_membuf *buf)
>  {
> -	if (!vaddr)
> +	if (!buf || !buf->vaddr)
>  		return;
>  
>  	if (obj->funcs && obj->funcs->vunmap)
> -		obj->funcs->vunmap(obj, vaddr);
> +		obj->funcs->vunmap(obj, buf->vaddr);
>  	else if (obj->dev->driver->gem_prime_vunmap)
> -		obj->dev->driver->gem_prime_vunmap(obj, vaddr);
> +		obj->dev->driver->gem_prime_vunmap(obj, buf->vaddr);
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
> index 8e01caaf95cc..201d71249954 100644
> --- a/drivers/gpu/drm/drm_internal.h
> +++ b/drivers/gpu/drm/drm_internal.h
> @@ -36,6 +36,7 @@ struct dma_buf;
>  struct drm_connector;
>  struct drm_crtc;
>  struct drm_framebuffer;
> +struct drm_gem_membuf;
>  struct drm_gem_object;
>  struct drm_master;
>  struct drm_minor;
> @@ -186,8 +187,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
>  
>  int drm_gem_pin(struct drm_gem_object *obj);
>  void drm_gem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_vmap(struct drm_gem_object *obj);
> -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
> +void drm_gem_vunmap(struct drm_gem_object *obj, const struct drm_gem_membuf *buf);
>  
>  /* drm_debugfs.c drm_debugfs_crc.c */
>  #if defined(CONFIG_DEBUG_FS)
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 1693aa7c14b5..d95a39030a93 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -671,13 +671,14 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
>  void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
>  {
>  	struct drm_gem_object *obj = dma_buf->priv;
> -	void *vaddr;
> +	struct drm_gem_membuf buf;
> +	int ret;
>  
> -	vaddr = drm_gem_vmap(obj);
> -	if (IS_ERR(vaddr))
> -		vaddr = NULL;
> +	ret = drm_gem_vmap(obj, &buf);
> +	if (ret)
> +		buf.vaddr = NULL;
>  
> -	return vaddr;
> +	return buf.vaddr;
>  }
>  EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
>  
> @@ -692,8 +693,11 @@ EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
>  void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
>  {
>  	struct drm_gem_object *obj = dma_buf->priv;
> +	struct drm_gem_membuf buf;
>  
> -	drm_gem_vunmap(obj, vaddr);
> +	buf.vaddr = vaddr;
> +	buf.is_iomem = false;
> +	drm_gem_vunmap(obj, &buf);
>  }
>  EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
>  
> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
> index 7aaea665bfc2..5ed73c390619 100644
> --- a/include/drm/drm_client.h
> +++ b/include/drm/drm_client.h
> @@ -14,6 +14,7 @@ struct drm_client_dev;
>  struct drm_device;
>  struct drm_file;
>  struct drm_framebuffer;
> +struct drm_gem_membuf;
>  struct drm_gem_object;
>  struct drm_minor;
>  struct module;
> @@ -141,9 +142,9 @@ struct drm_client_buffer {
>  	struct drm_gem_object *gem;
>  
>  	/**
> -	 * @vaddr: Virtual address for the buffer
> +	 * @membuf: Virtual address for the buffer
>  	 */
> -	void *vaddr;
> +	struct drm_gem_membuf membuf;
>  
>  	/**
>  	 * @fb: DRM framebuffer
> @@ -155,7 +156,7 @@ struct drm_client_buffer *
>  drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
>  void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
>  int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
> +const struct drm_gem_membuf *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
>  
>  int drm_client_modeset_create(struct drm_client_dev *client);
> diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
> index 0988351d743c..6ecf03601c36 100644
> --- a/include/drm/drm_device.h
> +++ b/include/drm/drm_device.h
> @@ -381,4 +381,30 @@ struct drm_device {
>  #endif
>  };
>  
> +/**
> + * struct drm_gem_membuf - GEM memory buffer

This isn't GEM-specific, and membuf feels a bit strange - it's not a
memory buffer, but more a buffer mapping. I'd call these struct
drm_bufmap.

> + */
> +struct drm_gem_membuf {
> +	union {
> +		/**
> +		 * @vaddr:
> +		 *
> +		 * The virtual address for the buffer in system memory.
> +		 */
> +		void *vaddr;
> +		/**
> +		 * @vaddr_iomem:
> +		 *
> +		 * The virtual address for the buffer in I/O memory.
> +		 */
> +		void __iomem *vaddr_iomem;
> +	};
> +	/**
> +	 * @is_iomem:
> +	 *
> +	 * True if the memory is located in I/O memory, false otherwise.
> +	 */
> +	bool is_iomem;
> +};

I think if we do this we should go all in, i.e. create a new header, and
then start to add all kinds of helper functions to it, like:

static inline void drm_memcpy_to_bufmap(struct drm_bufmap *bufmap, ...)
{
	if (bufmap->is_iomem)
		memcpy_toio(bufmap->vaddr_iomem, ...);
	else
		memcpy(bufmap->vaddr, ...);
}

Rough pseudocode of course, but I think you get the idea. If we add
helpers like this I think a new header file is also warranted, so maybe
put it all in drm_bufmap.h and then wire up the kerneldoc.

Also I think cc'ing the TTM people for an ack would be good; maybe long-term we
want to roll this out through TTM too.

Cheers, Daniel


> +
>  #endif
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH v2 2/4] drm/gem: Update client API to use struct drm_gem_membuf
  2020-08-13 10:26   ` Daniel Vetter
@ 2020-08-13 10:46     ` Thomas Zimmermann
  2020-08-13 14:03       ` Daniel Vetter
  2020-08-14  6:47     ` Thomas Zimmermann
  1 sibling, 1 reply; 13+ messages in thread
From: Thomas Zimmermann @ 2020-08-13 10:46 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: emil.l.velikov, sam, dri-devel, hdegoede, kraxel, airlied, zou_wei, sean



Hi

Am 13.08.20 um 12:26 schrieb Daniel Vetter:
> On Thu, Aug 06, 2020 at 10:52:37AM +0200, Thomas Zimmermann wrote:
>> GEM's vmap interface now wraps memory pointers in struct drm_gem_membuf.
>> The structure represents a pointer into the framebuffer, which is either
>> in I/O memory or in system memory. The structure contains a flag that
>> distinguishes these cases.
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> ---
>>  drivers/gpu/drm/drm_client.c    | 25 ++++++++++++-------------
>>  drivers/gpu/drm/drm_fb_helper.c | 18 +++++++++---------
>>  drivers/gpu/drm/drm_gem.c       | 19 +++++++++++--------
>>  drivers/gpu/drm/drm_internal.h  |  5 +++--
>>  drivers/gpu/drm/drm_prime.c     | 16 ++++++++++------
>>  include/drm/drm_client.h        |  7 ++++---
>>  include/drm/drm_device.h        | 26 ++++++++++++++++++++++++++
>>  7 files changed, 75 insertions(+), 41 deletions(-)
>>
>> [...]
>> diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
>> index 0988351d743c..6ecf03601c36 100644
>> --- a/include/drm/drm_device.h
>> +++ b/include/drm/drm_device.h
>> @@ -381,4 +381,30 @@ struct drm_device {
>>  #endif
>>  };
>>  
>> +/**
>> + * struct drm_gem_membuf - GEM memory buffer
> 
> This isn't GEM-specific, and membuf feels a bit strange - it's not a
> memory buffer, but more a buffer mapping. I'd call these struct
> drm_bufmap.

Yep, I didn't like the name either. struct drm_bufmap sounds good.

As I mentioned on IRC, I'd also want to propose this as an update to
dma-buf's vmap. It's not that much more work and should fix the issue
once and for all. If they are interested, struct dma_buf_addr might be a
candidate.

> 
>> + */
>> +struct drm_gem_membuf {
>> +	union {
>> +		/**
>> +		 * @vaddr:
>> +		 *
>> +		 * The virtual address for the buffer in system memory.
>> +		 */
>> +		void *vaddr;
>> +		/**
>> +		 * @vaddr_iomem:
>> +		 *
>> +		 * The virtual address for the buffer in I/O memory.
>> +		 */
>> +		void __iomem *vaddr_iomem;
>> +	};
>> +	/**
>> +	 * @is_iomem:
>> +	 *
>> +	 * True if the memory is located in I/O memory, false otherwise.
>> +	 */
>> +	bool is_iomem;
>> +};
> 
> I think if we do this we should go all in, i.e. create a new header, and
> then start to add all kinds of helper functions to it, like:
> 
> static inline void drm_memcpy_to_bufmap(struct drm_bufmap *bufmap, ...)
> {
> 	if (bufmap->is_iomem)
> 		memcpy_toio(bufmap->vaddr_iomem, ...);
> 	else
> 		memcpy(bufmap->vaddr, ...);
> }
> 
> Rough pseudocode of course, but I think you get the idea. If we add
> helpers like this I think a new header file is also warranted, so maybe
> put it all in drm_bufmap.h and then wire up the kerneldoc.
> 
> Also I think cc'ing the TTM people for an ack would be good; maybe long-term we
> want to roll this out through TTM too.

Sure.

Best regards
Thomas

> 
> Cheers, Daniel
> 
> 
>> +
>>  #endif
>> -- 
>> 2.28.0
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



* Re: [PATCH v2 2/4] drm/gem: Update client API to use struct drm_gem_membuf
  2020-08-13 10:46     ` Thomas Zimmermann
@ 2020-08-13 14:03       ` Daniel Vetter
  0 siblings, 0 replies; 13+ messages in thread
From: Daniel Vetter @ 2020-08-13 14:03 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Emil Velikov, Sam Ravnborg, dri-devel, Hans de Goede,
	Gerd Hoffmann, Dave Airlie, zou_wei, Sean Paul

On Thu, Aug 13, 2020 at 12:46 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> Hi
>
> Am 13.08.20 um 12:26 schrieb Daniel Vetter:
> > On Thu, Aug 06, 2020 at 10:52:37AM +0200, Thomas Zimmermann wrote:
> >> GEM's vmap interface now wraps memory pointers in struct drm_gem_membuf.
> >> The structure represents a pointer into the framebuffer, which is either
> >> in I/O memory or in system memory. The structure contains a flag that
> >> distinguishes these cases.
> >>
> >> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >> ---
> >>  drivers/gpu/drm/drm_client.c    | 25 ++++++++++++-------------
> >>  drivers/gpu/drm/drm_fb_helper.c | 18 +++++++++---------
> >>  drivers/gpu/drm/drm_gem.c       | 19 +++++++++++--------
> >>  drivers/gpu/drm/drm_internal.h  |  5 +++--
> >>  drivers/gpu/drm/drm_prime.c     | 16 ++++++++++------
> >>  include/drm/drm_client.h        |  7 ++++---
> >>  include/drm/drm_device.h        | 26 ++++++++++++++++++++++++++
> >>  7 files changed, 75 insertions(+), 41 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> >> index 495f47d23d87..0359b82928c1 100644
> >> --- a/drivers/gpu/drm/drm_client.c
> >> +++ b/drivers/gpu/drm/drm_client.c
> >> @@ -234,7 +234,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
> >>  {
> >>      struct drm_device *dev = buffer->client->dev;
> >>
> >> -    drm_gem_vunmap(buffer->gem, buffer->vaddr);
> >> +    drm_gem_vunmap(buffer->gem, &buffer->membuf);
> >>
> >>      if (buffer->gem)
> >>              drm_gem_object_put(buffer->gem);
> >> @@ -302,12 +302,13 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
> >>   * Returns:
> >>   *  The mapped memory's address
> >>   */
> >> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> >> +const struct drm_gem_membuf *
> >> +drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> >>  {
> >> -    void *vaddr;
> >> +    int ret;
> >>
> >> -    if (buffer->vaddr)
> >> -            return buffer->vaddr;
> >> +    if (buffer->membuf.vaddr)
> >> +            return &buffer->membuf;
> >>
> >>      /*
> >>       * FIXME: The dependency on GEM here isn't required, we could
> >> @@ -317,13 +318,11 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> >>       * fd_install step out of the driver backend hooks, to make that
> >>       * final step optional for internal users.
> >>       */
> >> -    vaddr = drm_gem_vmap(buffer->gem);
> >> -    if (IS_ERR(vaddr))
> >> -            return vaddr;
> >> -
> >> -    buffer->vaddr = vaddr;
> >> +    ret = drm_gem_vmap(buffer->gem, &buffer->membuf);
> >> +    if (ret)
> >> +            return ERR_PTR(ret);
> >>
> >> -    return vaddr;
> >> +    return &buffer->membuf;
> >>  }
> >>  EXPORT_SYMBOL(drm_client_buffer_vmap);
> >>
> >> @@ -337,8 +336,8 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
> >>   */
> >>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
> >>  {
> >> -    drm_gem_vunmap(buffer->gem, buffer->vaddr);
> >> -    buffer->vaddr = NULL;
> >> +    drm_gem_vunmap(buffer->gem, &buffer->membuf);
> >> +    buffer->membuf.vaddr = NULL;
> >>  }
> >>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
> >>
> >> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> >> index 8697554ccd41..da24874247e7 100644
> >> --- a/drivers/gpu/drm/drm_fb_helper.c
> >> +++ b/drivers/gpu/drm/drm_fb_helper.c
> >> @@ -394,7 +394,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> >>      unsigned int cpp = fb->format->cpp[0];
> >>      size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> >>      void *src = fb_helper->fbdev->screen_buffer + offset;
> >> -    void *dst = fb_helper->buffer->vaddr + offset;
> >> +    void *dst = fb_helper->buffer->membuf.vaddr + offset;
> >>      size_t len = (clip->x2 - clip->x1) * cpp;
> >>      unsigned int y;
> >>
> >> @@ -416,7 +416,7 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> >>      struct drm_clip_rect *clip = &helper->dirty_clip;
> >>      struct drm_clip_rect clip_copy;
> >>      unsigned long flags;
> >> -    void *vaddr;
> >> +    const struct drm_gem_membuf *buf;
> >>
> >>      spin_lock_irqsave(&helper->dirty_lock, flags);
> >>      clip_copy = *clip;
> >> @@ -429,8 +429,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> >>
> >>              /* Generic fbdev uses a shadow buffer */
> >>              if (helper->buffer) {
> >> -                    vaddr = drm_client_buffer_vmap(helper->buffer);
> >> -                    if (IS_ERR(vaddr))
> >> +                    buf = drm_client_buffer_vmap(helper->buffer);
> >> +                    if (IS_ERR(buf))
> >>                              return;
> >>                      drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> >>              }
> >> @@ -2076,7 +2076,7 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
> >>      struct drm_framebuffer *fb;
> >>      struct fb_info *fbi;
> >>      u32 format;
> >> -    void *vaddr;
> >> +    const struct drm_gem_membuf *membuf;
> >>
> >>      drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
> >>                  sizes->surface_width, sizes->surface_height,
> >> @@ -2112,11 +2112,11 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
> >>              fb_deferred_io_init(fbi);
> >>      } else {
> >>              /* buffer is mapped for HW framebuffer */
> >> -            vaddr = drm_client_buffer_vmap(fb_helper->buffer);
> >> -            if (IS_ERR(vaddr))
> >> -                    return PTR_ERR(vaddr);
> >> +            membuf = drm_client_buffer_vmap(fb_helper->buffer);
> >> +            if (IS_ERR(membuf))
> >> +                    return PTR_ERR(membuf);
> >>
> >> -            fbi->screen_buffer = vaddr;
> >> +            fbi->screen_buffer = membuf->vaddr;
> >>              /* Shamelessly leak the physical address to user-space */
> >>  #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
> >>              if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
> >> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> >> index 19d73868490e..36ded6a56fb2 100644
> >> --- a/drivers/gpu/drm/drm_gem.c
> >> +++ b/drivers/gpu/drm/drm_gem.c
> >> @@ -1220,7 +1220,7 @@ void drm_gem_unpin(struct drm_gem_object *obj)
> >>              obj->dev->driver->gem_prime_unpin(obj);
> >>  }
> >>
> >> -void *drm_gem_vmap(struct drm_gem_object *obj)
> >> +int drm_gem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
> >>  {
> >>      void *vaddr;
> >>
> >> @@ -1229,23 +1229,26 @@ void *drm_gem_vmap(struct drm_gem_object *obj)
> >>      else if (obj->dev->driver->gem_prime_vmap)
> >>              vaddr = obj->dev->driver->gem_prime_vmap(obj);
> >>      else
> >> -            vaddr = ERR_PTR(-EOPNOTSUPP);
> >> +            return -EOPNOTSUPP;
> >>
> >>      if (!vaddr)
> >> -            vaddr = ERR_PTR(-ENOMEM);
> >> +            return -ENOMEM;
> >> +
> >> +    buf->vaddr = vaddr;
> >> +    buf->is_iomem = false;
> >>
> >> -    return vaddr;
> >> +    return 0;
> >>  }
> >>
> >> -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
> >> +void drm_gem_vunmap(struct drm_gem_object *obj, const struct drm_gem_membuf *buf)
> >>  {
> >> -    if (!vaddr)
> >> +    if (!buf || !buf->vaddr)
> >>              return;
> >>
> >>      if (obj->funcs && obj->funcs->vunmap)
> >> -            obj->funcs->vunmap(obj, vaddr);
> >> +            obj->funcs->vunmap(obj, buf->vaddr);
> >>      else if (obj->dev->driver->gem_prime_vunmap)
> >> -            obj->dev->driver->gem_prime_vunmap(obj, vaddr);
> >> +            obj->dev->driver->gem_prime_vunmap(obj, buf->vaddr);
> >>  }
> >>
> >>  /**
> >> diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
> >> index 8e01caaf95cc..201d71249954 100644
> >> --- a/drivers/gpu/drm/drm_internal.h
> >> +++ b/drivers/gpu/drm/drm_internal.h
> >> @@ -36,6 +36,7 @@ struct dma_buf;
> >>  struct drm_connector;
> >>  struct drm_crtc;
> >>  struct drm_framebuffer;
> >> +struct drm_gem_membuf;
> >>  struct drm_gem_object;
> >>  struct drm_master;
> >>  struct drm_minor;
> >> @@ -186,8 +187,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
> >>
> >>  int drm_gem_pin(struct drm_gem_object *obj);
> >>  void drm_gem_unpin(struct drm_gem_object *obj);
> >> -void *drm_gem_vmap(struct drm_gem_object *obj);
> >> -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
> >> +int drm_gem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
> >> +void drm_gem_vunmap(struct drm_gem_object *obj, const struct drm_gem_membuf *buf);
> >>
> >>  /* drm_debugfs.c drm_debugfs_crc.c */
> >>  #if defined(CONFIG_DEBUG_FS)
> >> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> >> index 1693aa7c14b5..d95a39030a93 100644
> >> --- a/drivers/gpu/drm/drm_prime.c
> >> +++ b/drivers/gpu/drm/drm_prime.c
> >> @@ -671,13 +671,14 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
> >>  void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
> >>  {
> >>      struct drm_gem_object *obj = dma_buf->priv;
> >> -    void *vaddr;
> >> +    struct drm_gem_membuf buf;
> >> +    int ret;
> >>
> >> -    vaddr = drm_gem_vmap(obj);
> >> -    if (IS_ERR(vaddr))
> >> -            vaddr = NULL;
> >> +    ret = drm_gem_vmap(obj, &buf);
> >> +    if (ret)
> >> +            buf.vaddr = NULL;
> >>
> >> -    return vaddr;
> >> +    return buf.vaddr;
> >>  }
> >>  EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
> >>
> >> @@ -692,8 +693,11 @@ EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
> >>  void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
> >>  {
> >>      struct drm_gem_object *obj = dma_buf->priv;
> >> +    struct drm_gem_membuf buf;
> >>
> >> -    drm_gem_vunmap(obj, vaddr);
> >> +    buf.vaddr = vaddr;
> >> +    buf.is_iomem = false;
> >> +    drm_gem_vunmap(obj, &buf);
> >>  }
> >>  EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
> >>
> >> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
> >> index 7aaea665bfc2..5ed73c390619 100644
> >> --- a/include/drm/drm_client.h
> >> +++ b/include/drm/drm_client.h
> >> @@ -14,6 +14,7 @@ struct drm_client_dev;
> >>  struct drm_device;
> >>  struct drm_file;
> >>  struct drm_framebuffer;
> >> +struct drm_gem_membuf;
> >>  struct drm_gem_object;
> >>  struct drm_minor;
> >>  struct module;
> >> @@ -141,9 +142,9 @@ struct drm_client_buffer {
> >>      struct drm_gem_object *gem;
> >>
> >>      /**
> >> -     * @vaddr: Virtual address for the buffer
> >> +     * @membuf: Virtual address and memory location of the buffer
> >>       */
> >> -    void *vaddr;
> >> +    struct drm_gem_membuf membuf;
> >>
> >>      /**
> >>       * @fb: DRM framebuffer
> >> @@ -155,7 +156,7 @@ struct drm_client_buffer *
> >>  drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
> >>  void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
> >>  int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
> >> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
> >> +const struct drm_gem_membuf *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
> >>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
> >>
> >>  int drm_client_modeset_create(struct drm_client_dev *client);
> >> diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
> >> index 0988351d743c..6ecf03601c36 100644
> >> --- a/include/drm/drm_device.h
> >> +++ b/include/drm/drm_device.h
> >> @@ -381,4 +381,30 @@ struct drm_device {
> >>  #endif
> >>  };
> >>
> >> +/**
> >> + * struct drm_gem_membuf - GEM memory buffer
> >
> > This isn't gem specific, and membuf feels a bit strange - it's not a
> > memory buffer, but more a buffer mapping. I'd call these struct
> > drm_bufmap.
>
> Yep, I didn't like the name either. struct drm_bufmap sounds good.
>
> As I mentioned on IRC, I'd also want to propose this as an update to
> dma-buf's vmap. It's not that much more work and should fix the issue
> once and for all. If they are interested, struct dma_buf_addr might be a
> candidate.

Ah if that's the goal, I'd say call it dma_buf_map from the start and
have it under the dma-buf documentation section.
-Daniel

>
> >
> >> + */
> >> +struct drm_gem_membuf {
> >> +    union {
> >> +            /**
> >> +             * @vaddr:
> >> +             *
> >> +             * The virtual address for the buffer in system memory.
> >> +             */
> >> +            void *vaddr;
> >> +            /**
> >> +             * @vaddr_iomem:
> >> +             *
> >> +             * The virtual address for the buffer in I/O memory.
> >> +             */
> >> +            void __iomem *vaddr_iomem;
> >> +    };
> >> +    /**
> >> +     * @is_iomem:
> >> +     *
> >> +     * True if the memory is located in I/O memory, false otherwise.
> >> +     */
> >> +    bool is_iomem;
> >> +};
> >
> > I think if we do this we should go all in, i.e. create a new header, and
> > then start to add all kinds of helper functions to it, like:
> >
> > static inline void drm_memcpy_to_bufmap(bufmap, ...)
> > {
> >       if (bufmap->is_iomem)
> >               memcpy_toio(bufmap->vaddr_iomem, ...);
> >       else
> >               memcpy(bufmap->vaddr, ...);
> > }
> >
> > totally wrong code of course, but I think you get the idea. If we add
> > helpers like this I think a new header file is also warranted, so maybe
> > put it all in drm_bufmap.h and then wire up the kerneldoc.
> >
> > Also I think cc: ttm people for an ack would be good, maybe long-term we
> > want to roll this out through ttm too.
>
> Sure.
>
> Best regards
> Thomas
>
> >
> > Cheers, Daniel
> >
> >
> >> +
> >>  #endif
> >> --
> >> 2.28.0
> >>
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH v2 2/4] drm/gem: Update client API to use struct drm_gem_membuf
  2020-08-13 10:26   ` Daniel Vetter
  2020-08-13 10:46     ` Thomas Zimmermann
@ 2020-08-14  6:47     ` Thomas Zimmermann
  1 sibling, 0 replies; 13+ messages in thread
From: Thomas Zimmermann @ 2020-08-14  6:47 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: sean, emil.l.velikov, dri-devel, hdegoede, kraxel, airlied, zou_wei, sam



Hi

On 13.08.20 at 12:26, Daniel Vetter wrote:
> On Thu, Aug 06, 2020 at 10:52:37AM +0200, Thomas Zimmermann wrote:
>> GEM's vmap interface now wraps memory pointers in struct drm_gem_membuf.
>> The structure represents a pointer into the framebuffer, which is either
>> in I/O memory or in system memory. The structure contains a flag that
>> distinguishes these cases.
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> ---
>>  drivers/gpu/drm/drm_client.c    | 25 ++++++++++++-------------
>>  drivers/gpu/drm/drm_fb_helper.c | 18 +++++++++---------
>>  drivers/gpu/drm/drm_gem.c       | 19 +++++++++++--------
>>  drivers/gpu/drm/drm_internal.h  |  5 +++--
>>  drivers/gpu/drm/drm_prime.c     | 16 ++++++++++------
>>  include/drm/drm_client.h        |  7 ++++---
>>  include/drm/drm_device.h        | 26 ++++++++++++++++++++++++++
>>  7 files changed, 75 insertions(+), 41 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
>> index 495f47d23d87..0359b82928c1 100644
>> --- a/drivers/gpu/drm/drm_client.c
>> +++ b/drivers/gpu/drm/drm_client.c
>> @@ -234,7 +234,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
>>  {
>>  	struct drm_device *dev = buffer->client->dev;
>>  
>> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
>> +	drm_gem_vunmap(buffer->gem, &buffer->membuf);
>>  
>>  	if (buffer->gem)
>>  		drm_gem_object_put(buffer->gem);
>> @@ -302,12 +302,13 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
>>   * Returns:
>>   *	The mapped memory's address
>>   */
>> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>> +const struct drm_gem_membuf *
>> +drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>>  {
>> -	void *vaddr;
>> +	int ret;
>>  
>> -	if (buffer->vaddr)
>> -		return buffer->vaddr;
>> +	if (buffer->membuf.vaddr)
>> +		return &buffer->membuf;
>>  
>>  	/*
>>  	 * FIXME: The dependency on GEM here isn't required, we could
>> @@ -317,13 +318,11 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>>  	 * fd_install step out of the driver backend hooks, to make that
>>  	 * final step optional for internal users.
>>  	 */
>> -	vaddr = drm_gem_vmap(buffer->gem);
>> -	if (IS_ERR(vaddr))
>> -		return vaddr;
>> -
>> -	buffer->vaddr = vaddr;
>> +	ret = drm_gem_vmap(buffer->gem, &buffer->membuf);
>> +	if (ret)
>> +		return ERR_PTR(ret);
>>  
>> -	return vaddr;
>> +	return &buffer->membuf;
>>  }
>>  EXPORT_SYMBOL(drm_client_buffer_vmap);
>>  
>> @@ -337,8 +336,8 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
>>   */
>>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
>>  {
>> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
>> -	buffer->vaddr = NULL;
>> +	drm_gem_vunmap(buffer->gem, &buffer->membuf);
>> +	buffer->membuf.vaddr = NULL;
>>  }
>>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
>>  
>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
>> index 8697554ccd41..da24874247e7 100644
>> --- a/drivers/gpu/drm/drm_fb_helper.c
>> +++ b/drivers/gpu/drm/drm_fb_helper.c
>> @@ -394,7 +394,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
>>  	unsigned int cpp = fb->format->cpp[0];
>>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>>  	void *src = fb_helper->fbdev->screen_buffer + offset;
>> -	void *dst = fb_helper->buffer->vaddr + offset;
>> +	void *dst = fb_helper->buffer->membuf.vaddr + offset;
>>  	size_t len = (clip->x2 - clip->x1) * cpp;
>>  	unsigned int y;
>>  
>> @@ -416,7 +416,7 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>>  	struct drm_clip_rect *clip = &helper->dirty_clip;
>>  	struct drm_clip_rect clip_copy;
>>  	unsigned long flags;
>> -	void *vaddr;
>> +	const struct drm_gem_membuf *buf;
>>  
>>  	spin_lock_irqsave(&helper->dirty_lock, flags);
>>  	clip_copy = *clip;
>> @@ -429,8 +429,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>>  
>>  		/* Generic fbdev uses a shadow buffer */
>>  		if (helper->buffer) {
>> -			vaddr = drm_client_buffer_vmap(helper->buffer);
>> -			if (IS_ERR(vaddr))
>> +			buf = drm_client_buffer_vmap(helper->buffer);
>> +			if (IS_ERR(buf))
>>  				return;
>>  			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
>>  		}
>> @@ -2076,7 +2076,7 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>>  	struct drm_framebuffer *fb;
>>  	struct fb_info *fbi;
>>  	u32 format;
>> -	void *vaddr;
>> +	const struct drm_gem_membuf *membuf;
>>  
>>  	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
>>  		    sizes->surface_width, sizes->surface_height,
>> @@ -2112,11 +2112,11 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>>  		fb_deferred_io_init(fbi);
>>  	} else {
>>  		/* buffer is mapped for HW framebuffer */
>> -		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
>> -		if (IS_ERR(vaddr))
>> -			return PTR_ERR(vaddr);
>> +		membuf = drm_client_buffer_vmap(fb_helper->buffer);
>> +		if (IS_ERR(membuf))
>> +			return PTR_ERR(membuf);
>>  
>> -		fbi->screen_buffer = vaddr;
>> +		fbi->screen_buffer = membuf->vaddr;
>>  		/* Shamelessly leak the physical address to user-space */
>>  #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
>>  		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>> index 19d73868490e..36ded6a56fb2 100644
>> --- a/drivers/gpu/drm/drm_gem.c
>> +++ b/drivers/gpu/drm/drm_gem.c
>> @@ -1220,7 +1220,7 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>>  		obj->dev->driver->gem_prime_unpin(obj);
>>  }
>>  
>> -void *drm_gem_vmap(struct drm_gem_object *obj)
>> +int drm_gem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf)
>>  {
>>  	void *vaddr;
>>  
>> @@ -1229,23 +1229,26 @@ void *drm_gem_vmap(struct drm_gem_object *obj)
>>  	else if (obj->dev->driver->gem_prime_vmap)
>>  		vaddr = obj->dev->driver->gem_prime_vmap(obj);
>>  	else
>> -		vaddr = ERR_PTR(-EOPNOTSUPP);
>> +		return -EOPNOTSUPP;
>>  
>>  	if (!vaddr)
>> -		vaddr = ERR_PTR(-ENOMEM);
>> +		return -ENOMEM;
>> +
>> +	buf->vaddr = vaddr;
>> +	buf->is_iomem = false;
>>  
>> -	return vaddr;
>> +	return 0;
>>  }
>>  
>> -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
>> +void drm_gem_vunmap(struct drm_gem_object *obj, const struct drm_gem_membuf *buf)
>>  {
>> -	if (!vaddr)
>> +	if (!buf || !buf->vaddr)
>>  		return;
>>  
>>  	if (obj->funcs && obj->funcs->vunmap)
>> -		obj->funcs->vunmap(obj, vaddr);
>> +		obj->funcs->vunmap(obj, buf->vaddr);
>>  	else if (obj->dev->driver->gem_prime_vunmap)
>> -		obj->dev->driver->gem_prime_vunmap(obj, vaddr);
>> +		obj->dev->driver->gem_prime_vunmap(obj, buf->vaddr);
>>  }
>>  
>>  /**
>> diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
>> index 8e01caaf95cc..201d71249954 100644
>> --- a/drivers/gpu/drm/drm_internal.h
>> +++ b/drivers/gpu/drm/drm_internal.h
>> @@ -36,6 +36,7 @@ struct dma_buf;
>>  struct drm_connector;
>>  struct drm_crtc;
>>  struct drm_framebuffer;
>> +struct drm_gem_membuf;
>>  struct drm_gem_object;
>>  struct drm_master;
>>  struct drm_minor;
>> @@ -186,8 +187,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
>>  
>>  int drm_gem_pin(struct drm_gem_object *obj);
>>  void drm_gem_unpin(struct drm_gem_object *obj);
>> -void *drm_gem_vmap(struct drm_gem_object *obj);
>> -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
>> +int drm_gem_vmap(struct drm_gem_object *obj, struct drm_gem_membuf *buf);
>> +void drm_gem_vunmap(struct drm_gem_object *obj, const struct drm_gem_membuf *buf);
>>  
>>  /* drm_debugfs.c drm_debugfs_crc.c */
>>  #if defined(CONFIG_DEBUG_FS)
>> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
>> index 1693aa7c14b5..d95a39030a93 100644
>> --- a/drivers/gpu/drm/drm_prime.c
>> +++ b/drivers/gpu/drm/drm_prime.c
>> @@ -671,13 +671,14 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
>>  void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
>>  {
>>  	struct drm_gem_object *obj = dma_buf->priv;
>> -	void *vaddr;
>> +	struct drm_gem_membuf buf;
>> +	int ret;
>>  
>> -	vaddr = drm_gem_vmap(obj);
>> -	if (IS_ERR(vaddr))
>> -		vaddr = NULL;
>> +	ret = drm_gem_vmap(obj, &buf);
>> +	if (ret)
>> +		buf.vaddr = NULL;
>>  
>> -	return vaddr;
>> +	return buf.vaddr;
>>  }
>>  EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
>>  
>> @@ -692,8 +693,11 @@ EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
>>  void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
>>  {
>>  	struct drm_gem_object *obj = dma_buf->priv;
>> +	struct drm_gem_membuf buf;
>>  
>> -	drm_gem_vunmap(obj, vaddr);
>> +	buf.vaddr = vaddr;
>> +	buf.is_iomem = false;
>> +	drm_gem_vunmap(obj, &buf);
>>  }
>>  EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
>>  
>> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
>> index 7aaea665bfc2..5ed73c390619 100644
>> --- a/include/drm/drm_client.h
>> +++ b/include/drm/drm_client.h
>> @@ -14,6 +14,7 @@ struct drm_client_dev;
>>  struct drm_device;
>>  struct drm_file;
>>  struct drm_framebuffer;
>> +struct drm_gem_membuf;
>>  struct drm_gem_object;
>>  struct drm_minor;
>>  struct module;
>> @@ -141,9 +142,9 @@ struct drm_client_buffer {
>>  	struct drm_gem_object *gem;
>>  
>>  	/**
>> -	 * @vaddr: Virtual address for the buffer
>> +	 * @membuf: Virtual address for the buffer
>>  	 */
>> -	void *vaddr;
>> +	struct drm_gem_membuf membuf;
>>  
>>  	/**
>>  	 * @fb: DRM framebuffer
>> @@ -155,7 +156,7 @@ struct drm_client_buffer *
>>  drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
>>  void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
>>  int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
>> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
>> +const struct drm_gem_membuf *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
>>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
>>  
>>  int drm_client_modeset_create(struct drm_client_dev *client);
>> diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
>> index 0988351d743c..6ecf03601c36 100644
>> --- a/include/drm/drm_device.h
>> +++ b/include/drm/drm_device.h
>> @@ -381,4 +381,30 @@ struct drm_device {
>>  #endif
>>  };
>>  
>> +/**
>> + * struct drm_gem_membuf - GEM memory buffer
> 
> This isn't gem specific, and membuf feels a bit strange - it's not a
> memory buffer, but more a buffer mapping. I'd call these struct
> drm_bufmap.
> 
>> + */
>> +struct drm_gem_membuf {
>> +	union {
>> +		/**
>> +		 * @vaddr:
>> +		 *
>> +		 * The virtual address for the buffer in system memory.
>> +		 */
>> +		void *vaddr;
>> +		/**
>> +		 * @vaddr_iomem:
>> +		 *
>> +		 * The virtual address for the buffer in I/O memory.
>> +		 */
>> +		void __iomem *vaddr_iomem;
>> +	};
>> +	/**
>> +	 * @is_iomem:
>> +	 *
>> +	 * True if the memory is located in I/O memory, false otherwise.
>> +	 */
>> +	bool is_iomem;
>> +};
> 
> I think if we do this we should go all in, i.e. create a new header, and
> then start to add all kinds of helper functions to it, like:
> 
> static inline void drm_memcpy_to_bufmap(bufmap, ...)
> {
> 	if (bufmap->is_iomem)
> 		memcpy_toio(bufmap->vaddr_iomem, ...);
> 	else
> 		memcpy(bufmap->vaddr, ...);
> }
> 
> totally wrong code of course, but I think you get the idea. If we add
> helpers like this I think a new header file is also warranted, so maybe
> put it all in drm_bufmap.h and then wire up the kerneldoc.

Just a note: while converting the fbdev blit code, I found it easier to
open-code the test for is_iomem and have two separate loops. Doing the
same with drm_memcpy_to_bufmap would also require helpers for pointer
arithmetic and array indexing to make it comfortable to use.
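For illustration, the open-coded two-loop variant described above might look like the sketch below. The helper name (blit_lines) and its signature are hypothetical, and the kernel's __iomem annotation and memcpy_toio() are stubbed with userspace stand-ins so the example is self-contained:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Userspace stand-ins for kernel primitives; for illustration only. */
#define __iomem
static void memcpy_toio(void __iomem *dst, const void *src, size_t len)
{
	memcpy(dst, src, len); /* the kernel would use I/O accessors here */
}

/* Mirrors the proposed struct: one address, tagged by its location. */
struct drm_gem_membuf {
	union {
		void *vaddr;               /* system memory */
		void __iomem *vaddr_iomem; /* I/O memory */
	};
	bool is_iomem;
};

/*
 * Hypothetical blit helper: test is_iomem once, then run a dedicated
 * loop per memory type instead of dispatching on every copy.
 */
static void blit_lines(struct drm_gem_membuf *dst, const char *src,
		       size_t pitch, size_t len, unsigned int lines)
{
	unsigned int y;

	if (dst->is_iomem) {
		char __iomem *d = dst->vaddr_iomem;

		for (y = 0; y < lines; y++, d += pitch, src += pitch)
			memcpy_toio(d, src, len);
	} else {
		char *d = dst->vaddr;

		for (y = 0; y < lines; y++, d += pitch, src += pitch)
			memcpy(d, src, len);
	}
}
```

Testing is_iomem once outside the loops keeps the per-line copies branch-free, which is why the open-coded form can be simpler than routing every access through a generic drm_memcpy_to_bufmap()-style helper.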

Best regards
Thomas

> 
> Also I think cc: ttm people for an ack would be good, maybe long-term we
> want to roll this out through ttm too.
> 
> Cheers, Daniel
> 
> 
>> +
>>  #endif
>> -- 
>> 2.28.0
>>
> 






* Re: [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory
  2020-08-06  8:52 [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory Thomas Zimmermann
                   ` (3 preceding siblings ...)
  2020-08-06  8:52 ` [PATCH v2 4/4] drm/fb_helper: Access framebuffer as I/O memory, if required Thomas Zimmermann
@ 2020-08-14  6:51 ` Thomas Zimmermann
  4 siblings, 0 replies; 13+ messages in thread
From: Thomas Zimmermann @ 2020-08-14  6:51 UTC (permalink / raw)
  To: airlied, daniel, kraxel, maarten.lankhorst, mripard, hdegoede,
	sean, sam, emil.l.velikov, lyude, noralf, zou_wei,
	Christian König, Huang Rui
  Cc: dri-devel



cc'ing TTM maintainers for comments. We might want to use the I/O
pointer structure with TTM at some point.

On 06.08.20 at 10:52, Thomas Zimmermann wrote:
> DRM's fbdev console uses regular load and store operations to update
> framebuffer memory. The bochs driver on sparc64 requires the use of
> I/O-specific load and store operations. We have a workaround, but need
> a long-term solution to the problem. Previous attempts to resolve the
> issue returned an extra is_iomem flag from vmap(), or added a separate
> vmap_iomem() callback to GEM objects.
> 
> This patchset is yet another iteration with a different idea. Instead
> of a raw pointer, vmap() interfaces now return a structure that contains
> the buffer address in system or I/O memory, plus a flag that signals
> which location the address is in.
> 
> Patch #1 updates the vboxvideo driver to match the latest VRAM helpers.
> This simplifies the other patches and should be merged in any case.
> 
> Patch #2 adds struct drm_gem_membuf, which contains the pointer and flag,
> and converts the generic GEM interfaces to use it.
> 
> Patch #3 converts vmap/vunmap in GEM object functions and updates most
> GEM backends. A few drivers are still missing, but the patch should be
> acceptable for an RFC.
> 
> Patch #4 changes fbdev helpers to access framebuffer memory either with
> system or I/O memcpy functions.
> 
> Thomas Zimmermann (4):
>   drm/vboxvideo: Use drm_gem_vram_vmap() interfaces
>   drm/gem: Update client API to use struct drm_gem_membuf
>   drm/gem: Use struct drm_gem_membuf in vmap op and convert GEM backends
>   drm/fb_helper: Access framebuffer as I/O memory, if required
> 
>  drivers/gpu/drm/ast/ast_cursor.c       |  29 ++-
>  drivers/gpu/drm/ast/ast_drv.h          |   2 +-
>  drivers/gpu/drm/bochs/bochs_kms.c      |   1 -
>  drivers/gpu/drm/drm_client.c           |  25 ++-
>  drivers/gpu/drm/drm_fb_helper.c        | 246 ++++++++++++++++++++++---
>  drivers/gpu/drm/drm_gem.c              |  28 +--
>  drivers/gpu/drm/drm_gem_cma_helper.c   |  15 +-
>  drivers/gpu/drm/drm_gem_shmem_helper.c |  31 ++--
>  drivers/gpu/drm/drm_gem_vram_helper.c  | 142 +++++---------
>  drivers/gpu/drm/drm_internal.h         |   5 +-
>  drivers/gpu/drm/drm_prime.c            |  16 +-
>  drivers/gpu/drm/mgag200/mgag200_mode.c |  11 +-
>  drivers/gpu/drm/qxl/qxl_display.c      |  12 +-
>  drivers/gpu/drm/qxl/qxl_draw.c         |  14 +-
>  drivers/gpu/drm/qxl/qxl_drv.h          |   6 +-
>  drivers/gpu/drm/qxl/qxl_object.c       |  19 +-
>  drivers/gpu/drm/qxl/qxl_object.h       |   2 +-
>  drivers/gpu/drm/qxl/qxl_prime.c        |  12 +-
>  drivers/gpu/drm/tiny/cirrus.c          |  15 +-
>  drivers/gpu/drm/tiny/gm12u320.c        |  12 +-
>  drivers/gpu/drm/udl/udl_modeset.c      |  10 +-
>  drivers/gpu/drm/vboxvideo/vbox_mode.c  |  17 +-
>  include/drm/drm_client.h               |   7 +-
>  include/drm/drm_device.h               |  26 +++
>  include/drm/drm_gem.h                  |   5 +-
>  include/drm/drm_gem_cma_helper.h       |   4 +-
>  include/drm/drm_gem_shmem_helper.h     |   4 +-
>  include/drm/drm_gem_vram_helper.h      |   9 +-
>  include/drm/drm_mode_config.h          |  12 --
>  29 files changed, 464 insertions(+), 273 deletions(-)
> 
> --
> 2.28.0
> 






end of thread, other threads:[~2020-08-14  6:51 UTC | newest]

Thread overview: 13+ messages
2020-08-06  8:52 [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory Thomas Zimmermann
2020-08-06  8:52 ` [PATCH v2 1/4] drm/vboxvideo: Use drm_gem_vram_vmap() interfaces Thomas Zimmermann
2020-08-06  8:52 ` [PATCH v2 2/4] drm/gem: Update client API to use struct drm_gem_membuf Thomas Zimmermann
2020-08-13 10:26   ` Daniel Vetter
2020-08-13 10:46     ` Thomas Zimmermann
2020-08-13 14:03       ` Daniel Vetter
2020-08-14  6:47     ` Thomas Zimmermann
2020-08-06  8:52 ` [PATCH v2 3/4] drm/gem: Use struct drm_gem_membuf in vmap op and convert GEM backends Thomas Zimmermann
2020-08-06 10:50   ` kernel test robot
2020-08-06 11:36   ` kernel test robot
2020-08-06  8:52 ` [PATCH v2 4/4] drm/fb_helper: Access framebuffer as I/O memory, if required Thomas Zimmermann
2020-08-06 19:23   ` kernel test robot
2020-08-14  6:51 ` [RFC][PATCH v2 0/4] Support GEM object mappings from I/O memory Thomas Zimmermann
