* [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation
@ 2019-11-06  9:31 Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 1/8] drm/vram-helper: Tell caller if vmap() returned I/O memory Thomas Zimmermann
                   ` (8 more replies)
  0 siblings, 9 replies; 10+ messages in thread
From: Thomas Zimmermann @ 2019-11-06  9:31 UTC (permalink / raw)
  To: daniel, christian.koenig, noralf; +Cc: Thomas Zimmermann, dri-devel

We recently had a discussion about if/how fbdev emulation could support
framebuffers in I/O memory on all platforms. [1]

I typed up a patchset that passes information about the memory area
from the memory manager to the client (e.g., fbdev emulation). The client
can take this into account when accessing the framebuffer.

The alternative proposal is to introduce a separate vmap() call that
only returns I/O memory, or NULL if the framebuffer is not in I/O
memory. AFAICS the benefit of this idea is the cleaner interface and
the ability to convert drivers one by one. The drawback is some additional
boilerplate code in drivers and clients.

[1] https://lists.freedesktop.org/archives/dri-devel/2019-November/242464.html
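
The idea can be sketched in plain userspace C (a minimal mock, not the
kernel API: demo_vmap() and demo_flush() are hypothetical stand-ins for a
vmap()-style helper and an fbdev-emulation client):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a vmap()-style call: it returns the mapping
 * and, through the optional is_iomem out-parameter, reports whether the
 * memory would need I/O accessors (memcpy_toio() etc. in the kernel). */
static char framebuffer[64];           /* pretend VRAM */
static const bool framebuffer_is_io = true;
static int used_io_path;               /* records which copy path ran */

static void *demo_vmap(bool *is_iomem)
{
	if (is_iomem)                  /* callers may pass NULL */
		*is_iomem = framebuffer_is_io;
	return framebuffer;
}

/* A client such as the fbdev emulation picks the copy routine based on
 * the flag instead of silently assuming system memory. */
static void demo_flush(const void *src, size_t len)
{
	bool is_iomem;
	void *dst = demo_vmap(&is_iomem);

	if (is_iomem) {
		used_io_path = 1;
		memcpy(dst, src, len); /* kernel code would use memcpy_toio() */
	} else {
		memcpy(dst, src, len);
	}
}
```

The point is only the calling convention: one mapping call, one flag, and
the client branches on the flag at flush time.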

Thomas Zimmermann (8):
  drm/vram-helper: Tell caller if vmap() returned I/O memory
  drm/qxl: Tell caller if kmap() returned I/O memory
  drm: Add is_iomem return parameter to struct drm_gem_object_funcs.vmap
  drm/gem: Return I/O-memory flag from drm_gem_vram()
  drm/client: Return I/O memory flag from drm_client_buffer_vmap()
  fbdev: Export default read and write operations as
    fb_cfb_{read,write}()
  drm/fb-helper: Select between fb_{sys,cfb}_read() and _write()
  drm/fb-helper: Handle I/O memory correctly when flushing shadow fb

 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 +-
 drivers/gpu/drm/ast/ast_mode.c              |   6 +-
 drivers/gpu/drm/cirrus/cirrus.c             |   2 +-
 drivers/gpu/drm/drm_client.c                |  15 ++-
 drivers/gpu/drm/drm_fb_helper.c             | 118 ++++++++++++++++++--
 drivers/gpu/drm/drm_gem.c                   |   9 +-
 drivers/gpu/drm/drm_gem_cma_helper.c        |   7 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  12 +-
 drivers/gpu/drm/drm_gem_vram_helper.c       |  13 ++-
 drivers/gpu/drm/drm_internal.h              |   2 +-
 drivers/gpu/drm/drm_prime.c                 |   2 +-
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |   4 +-
 drivers/gpu/drm/mgag200/mgag200_cursor.c    |   4 +-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 +-
 drivers/gpu/drm/nouveau/nouveau_prime.c     |   4 +-
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |   2 +-
 drivers/gpu/drm/qxl/qxl_display.c           |   6 +-
 drivers/gpu/drm/qxl/qxl_draw.c              |   4 +-
 drivers/gpu/drm/qxl/qxl_drv.h               |   4 +-
 drivers/gpu/drm/qxl/qxl_object.c            |   7 +-
 drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |   4 +-
 drivers/gpu/drm/radeon/radeon_drv.c         |   2 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |   4 +-
 drivers/gpu/drm/tiny/gm12u320.c             |   2 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |   4 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |   5 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |   6 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |   3 +-
 drivers/video/fbdev/core/fbmem.c            |  53 +++++++--
 include/drm/drm_client.h                    |   7 +-
 include/drm/drm_drv.h                       |   2 +-
 include/drm/drm_fb_helper.h                 |  14 +++
 include/drm/drm_gem.h                       |   2 +-
 include/drm/drm_gem_cma_helper.h            |   2 +-
 include/drm/drm_gem_shmem_helper.h          |   2 +-
 include/drm/drm_gem_vram_helper.h           |   2 +-
 include/linux/fb.h                          |   5 +
 41 files changed, 278 insertions(+), 78 deletions(-)

-- 
2.23.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* [PATCH 1/8] drm/vram-helper: Tell caller if vmap() returned I/O memory
  2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
@ 2019-11-06  9:31 ` Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 2/8] drm/qxl: Tell caller if kmap() " Thomas Zimmermann
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Thomas Zimmermann @ 2019-11-06  9:31 UTC (permalink / raw)
  To: daniel, christian.koenig, noralf; +Cc: Thomas Zimmermann, dri-devel

Return a flag from vmap() that tells the caller whether the mapped pages
refer to system or I/O memory. This prepares for a corresponding change
to the generic GEM vmap interface.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/ast/ast_mode.c           | 6 +++---
 drivers/gpu/drm/drm_gem_vram_helper.c    | 8 +++++---
 drivers/gpu/drm/mgag200/mgag200_cursor.c | 4 ++--
 include/drm/drm_gem_vram_helper.h        | 2 +-
 4 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
index b13eaa2619ab..bcfab641c3a9 100644
--- a/drivers/gpu/drm/ast/ast_mode.c
+++ b/drivers/gpu/drm/ast/ast_mode.c
@@ -1165,7 +1165,7 @@ static int ast_show_cursor(struct drm_crtc *crtc, void *src,
 	u8 jreg;
 
 	gbo = ast->cursor.gbo[ast->cursor.next_index];
-	dst = drm_gem_vram_vmap(gbo);
+	dst = drm_gem_vram_vmap(gbo, NULL);
 	if (IS_ERR(dst))
 		return PTR_ERR(dst);
 	off = drm_gem_vram_offset(gbo);
@@ -1231,7 +1231,7 @@ static int ast_cursor_set(struct drm_crtc *crtc,
 		return -ENOENT;
 	}
 	gbo = drm_gem_vram_of_gem(obj);
-	src = drm_gem_vram_vmap(gbo);
+	src = drm_gem_vram_vmap(gbo, NULL);
 	if (IS_ERR(src)) {
 		ret = PTR_ERR(src);
 		goto err_drm_gem_object_put_unlocked;
@@ -1264,7 +1264,7 @@ static int ast_cursor_move(struct drm_crtc *crtc,
 	u8 jreg;
 
 	gbo = ast->cursor.gbo[ast->cursor.next_index];
-	dst = drm_gem_vram_vmap(gbo);
+	dst = drm_gem_vram_vmap(gbo, NULL);
 	if (IS_ERR(dst))
 		return PTR_ERR(dst);
 
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 666cb4c22bb9..05f63f28814d 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -411,6 +411,8 @@ EXPORT_SYMBOL(drm_gem_vram_kunmap);
  * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
  *                       space
  * @gbo:	The GEM VRAM object to map
+ * @is_iomem:	returns true if the mapped memory is I/O memory, or false
+ *              otherwise; can be NULL
  *
  * The vmap function pins a GEM VRAM object to its current location, either
  * system or video memory, and maps its buffer into kernel address space.
@@ -425,7 +427,7 @@ EXPORT_SYMBOL(drm_gem_vram_kunmap);
  * The buffer's virtual address on success, or
  * an ERR_PTR()-encoded error code otherwise.
  */
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, bool *is_iomem)
 {
 	int ret;
 	void *base;
@@ -437,7 +439,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo, true, NULL);
+	base = drm_gem_vram_kmap_locked(gbo, true, is_iomem);
 	if (IS_ERR(base)) {
 		ret = PTR_ERR(base);
 		goto err_drm_gem_vram_unpin_locked;
@@ -826,7 +828,7 @@ static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 	void *base;
 
-	base = drm_gem_vram_vmap(gbo);
+	base = drm_gem_vram_vmap(gbo, NULL);
 	if (IS_ERR(base))
 		return NULL;
 	return base;
diff --git a/drivers/gpu/drm/mgag200/mgag200_cursor.c b/drivers/gpu/drm/mgag200/mgag200_cursor.c
index 79711dbb5b03..765c59e25f3b 100644
--- a/drivers/gpu/drm/mgag200/mgag200_cursor.c
+++ b/drivers/gpu/drm/mgag200/mgag200_cursor.c
@@ -131,7 +131,7 @@ static int mgag200_show_cursor(struct mga_device *mdev, void *src,
 		WREG8(MGA_CURPOSXH, 0);
 		return -ENOTSUPP; /* Didn't allocate space for cursors */
 	}
-	dst = drm_gem_vram_vmap(gbo);
+	dst = drm_gem_vram_vmap(gbo, NULL);
 	if (IS_ERR(dst)) {
 		ret = PTR_ERR(dst);
 		dev_err(&dev->pdev->dev,
@@ -282,7 +282,7 @@ int mgag200_crtc_cursor_set(struct drm_crtc *crtc, struct drm_file *file_priv,
 	if (!obj)
 		return -ENOENT;
 	gbo = drm_gem_vram_of_gem(obj);
-	src = drm_gem_vram_vmap(gbo);
+	src = drm_gem_vram_vmap(gbo, NULL);
 	if (IS_ERR(src)) {
 		ret = PTR_ERR(src);
 		dev_err(&dev->pdev->dev,
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index e040541a105f..ef8f81acff91 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -106,7 +106,7 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
 void *drm_gem_vram_kmap(struct drm_gem_vram_object *gbo, bool map,
 			bool *is_iomem);
 void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
+void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, bool *is_iomem);
 void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
 
 int drm_gem_vram_fill_create_dumb(struct drm_file *file,
-- 
2.23.0



* [PATCH 2/8] drm/qxl: Tell caller if kmap() returned I/O memory
  2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 1/8] drm/vram-helper: Tell caller if vmap() returned I/O memory Thomas Zimmermann
@ 2019-11-06  9:31 ` Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 3/8] drm: Add is_iomem return parameter to struct drm_gem_object_funcs.vmap Thomas Zimmermann
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Thomas Zimmermann @ 2019-11-06  9:31 UTC (permalink / raw)
  To: daniel, christian.koenig, noralf; +Cc: Thomas Zimmermann, dri-devel

Return a flag from kmap() that tells the caller whether the mapped pages
refer to system or I/O memory. This prepares for a corresponding change
to vmap().

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/qxl/qxl_display.c | 6 +++---
 drivers/gpu/drm/qxl/qxl_draw.c    | 4 ++--
 drivers/gpu/drm/qxl/qxl_drv.h     | 2 +-
 drivers/gpu/drm/qxl/qxl_object.c  | 7 +++----
 drivers/gpu/drm/qxl/qxl_object.h  | 2 +-
 drivers/gpu/drm/qxl/qxl_prime.c   | 2 +-
 6 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 16d73b22f3f5..83c8df2f9d64 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -606,7 +606,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		user_bo = gem_to_qxl_bo(obj);
 
 		/* pinning is done in the prepare/cleanup framevbuffer */
-		ret = qxl_bo_kmap(user_bo, &user_ptr);
+		ret = qxl_bo_kmap(user_bo, &user_ptr, NULL);
 		if (ret)
 			goto out_free_release;
 
@@ -624,7 +624,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		if (ret)
 			goto out_unpin;
 
-		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor, NULL);
 		if (ret)
 			goto out_backoff;
 
@@ -1167,7 +1167,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 	if (ret)
 		return ret;
 
-	qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+	qxl_bo_kmap(qdev->monitors_config_bo, NULL, NULL);
 
 	qdev->monitors_config = qdev->monitors_config_bo->kptr;
 	qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 5bebf1ea1c5d..962fc1aa00b7 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -45,7 +45,7 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
 	struct qxl_clip_rects *dev_clips;
 	int ret;
 
-	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
+	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips, NULL);
 	if (ret) {
 		return NULL;
 	}
@@ -197,7 +197,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	if (ret)
 		goto out_release_backoff;
 
-	ret = qxl_bo_kmap(bo, (void **)&surface_base);
+	ret = qxl_bo_kmap(bo, (void **)&surface_base, NULL);
 	if (ret)
 		goto out_release_backoff;
 
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 27e45a2d6b52..e749c0d0e819 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -342,7 +342,7 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
 void qxl_gem_object_close(struct drm_gem_object *obj,
 			  struct drm_file *file_priv);
 void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+int qxl_bo_kmap(struct qxl_bo *bo, void **ptr, bool *is_iomem);
 
 /* qxl_dumb.c */
 int qxl_mode_dumb_create(struct drm_file *file_priv,
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index ab72dc3476e9..8507ac2c7d6a 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -143,9 +143,8 @@ int qxl_bo_create(struct qxl_device *qdev,
 	return 0;
 }
 
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, void **ptr, bool *is_iomem)
 {
-	bool is_iomem;
 	int r;
 
 	if (bo->kptr) {
@@ -157,7 +156,7 @@ int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
 	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
 	if (r)
 		return r;
-	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
+	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, is_iomem);
 	if (ptr)
 		*ptr = bo->kptr;
 	bo->map_count = 1;
@@ -187,7 +186,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 		return rptr;
 	}
 
-	ret = qxl_bo_kmap(bo, &rptr);
+	ret = qxl_bo_kmap(bo, &rptr, NULL);
 	if (ret)
 		return NULL;
 
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 8ae54ba7857c..79cb363b3b8b 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -91,7 +91,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 			 bool kernel, bool pinned, u32 domain,
 			 struct qxl_surface *surf,
 			 struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr, bool *is_iomem);
 extern void qxl_bo_kunmap(struct qxl_bo *bo);
 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..e67ebbdeb7f2 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -60,7 +60,7 @@ void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
 	void *ptr;
 	int ret;
 
-	ret = qxl_bo_kmap(bo, &ptr);
+	ret = qxl_bo_kmap(bo, &ptr, NULL);
 	if (ret < 0)
 		return ERR_PTR(ret);
 
-- 
2.23.0



* [PATCH 3/8] drm: Add is_iomem return parameter to struct drm_gem_object_funcs.vmap
  2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 1/8] drm/vram-helper: Tell caller if vmap() returned I/O memory Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 2/8] drm/qxl: Tell caller if kmap() " Thomas Zimmermann
@ 2019-11-06  9:31 ` Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 4/8] drm/gem: Return I/O-memory flag from drm_gem_vram() Thomas Zimmermann
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Thomas Zimmermann @ 2019-11-06  9:31 UTC (permalink / raw)
  To: daniel, christian.koenig, noralf; +Cc: Thomas Zimmermann, dri-devel

The vmap operation can return system or I/O memory, which the caller may
have to treat differently. The is_iomem parameter returns 'true' if the
returned pointer refers to I/O memory, or 'false' otherwise.

In many cases, such as CMA and SHMEM, the returned value is 'false'. For
TTM-based drivers, the correct value is provided by TTM itself. For DMA
buffers that are shared among devices, we assume system memory as well.
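
The extended calling convention can be sketched as a simplified userspace
mock (struct obj_funcs, sys_vmap() and vram_vmap() are illustrative names,
not the drm_gem_object_funcs API itself):

```c
#include <stdbool.h>
#include <stddef.h>

/* Mock of the extended callback: vmap() gains a bool *is_iomem
 * out-parameter that implementations fill in (callers that do not
 * care pass NULL). */
struct obj_funcs {
	void *(*vmap)(void *obj, bool *is_iomem);
};

static char sys_buf[16];

/* SHMEM/CMA-style implementation: mappings are always system memory. */
static void *sys_vmap(void *obj, bool *is_iomem)
{
	(void)obj;
	if (is_iomem)
		*is_iomem = false;
	return sys_buf;
}

static char vram_buf[16];

/* VRAM/TTM-style implementation: the mapping may be I/O memory. */
static void *vram_vmap(void *obj, bool *is_iomem)
{
	(void)obj;
	if (is_iomem)
		*is_iomem = true;
	return vram_buf;
}

static const struct obj_funcs sys_funcs = { .vmap = sys_vmap };
static const struct obj_funcs vram_funcs = { .vmap = vram_vmap };
```

Each driver fills in its own answer; generic code only ever looks at the
flag it gets back.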

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  6 +++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |  2 +-
 drivers/gpu/drm/cirrus/cirrus.c             |  2 +-
 drivers/gpu/drm/drm_gem.c                   |  4 ++--
 drivers/gpu/drm/drm_gem_cma_helper.c        |  7 ++++++-
 drivers/gpu/drm/drm_gem_shmem_helper.c      | 12 +++++++++---
 drivers/gpu/drm/drm_gem_vram_helper.c       |  7 +++++--
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |  2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  4 +++-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |  2 +-
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  4 +++-
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  2 +-
 drivers/gpu/drm/qxl/qxl_drv.h               |  2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  4 ++--
 drivers/gpu/drm/radeon/radeon_drv.c         |  2 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  4 +++-
 drivers/gpu/drm/tiny/gm12u320.c             |  2 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |  4 ++--
 drivers/gpu/drm/vc4/vc4_drv.h               |  2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  5 ++++-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  6 +++++-
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |  3 ++-
 include/drm/drm_drv.h                       |  2 +-
 include/drm/drm_gem.h                       |  2 +-
 include/drm/drm_gem_cma_helper.h            |  2 +-
 include/drm/drm_gem_shmem_helper.h          |  2 +-
 26 files changed, 64 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 4917b548b7f2..97b77e7e15dc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -57,13 +57,15 @@ struct sg_table *amdgpu_gem_prime_get_sg_table(struct drm_gem_object *obj)
 /**
  * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
  * @obj: GEM BO
+ * @is_iomem: returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * Sets up an in-kernel virtual mapping of the BO's memory.
  *
  * Returns:
  * The virtual address of the mapping or an error pointer.
  */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
+void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
 	int ret;
@@ -73,6 +75,8 @@ void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (is_iomem)
+		return ttm_kmap_obj_virtual(&bo->dma_buf_vmap, is_iomem);
 	return bo->dma_buf_vmap.virtual;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 5012e6ab58f1..910cf2ef345f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -34,7 +34,7 @@ struct dma_buf *amdgpu_gem_prime_export(struct drm_gem_object *gobj,
 					int flags);
 struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
 					    struct dma_buf *dma_buf);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
+void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
 			  struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/cirrus/cirrus.c b/drivers/gpu/drm/cirrus/cirrus.c
index 248c9f765c45..6518e5c31eb4 100644
--- a/drivers/gpu/drm/cirrus/cirrus.c
+++ b/drivers/gpu/drm/cirrus/cirrus.c
@@ -302,7 +302,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	struct cirrus_device *cirrus = fb->dev->dev_private;
 	void *vmap;
 
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
+	vmap = drm_gem_shmem_vmap(fb->obj[0], NULL);
 	if (!vmap)
 		return -ENOMEM;
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 56f42e0f2584..0acfbd134e04 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1251,9 +1251,9 @@ void *drm_gem_vmap(struct drm_gem_object *obj)
 	void *vaddr;
 
 	if (obj->funcs && obj->funcs->vmap)
-		vaddr = obj->funcs->vmap(obj);
+		vaddr = obj->funcs->vmap(obj, NULL);
 	else if (obj->dev->driver->gem_prime_vmap)
-		vaddr = obj->dev->driver->gem_prime_vmap(obj);
+		vaddr = obj->dev->driver->gem_prime_vmap(obj, NULL);
 	else
 		vaddr = ERR_PTR(-EOPNOTSUPP);
 
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 12e98fb28229..b14e88337529 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,6 +537,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
  * @obj: GEM object
+ * @is_iomem: returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * This function maps a buffer exported via DRM PRIME into the kernel's
  * virtual address space. Since the CMA buffers are already mapped into the
@@ -547,10 +549,13 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * Returns:
  * The kernel virtual address of the CMA GEM object's backing store.
  */
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	return cma_obj->vaddr;
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3bc69b1ffa7d..a8a8e1b13a30 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -242,7 +242,8 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+				       bool *is_iomem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret;
@@ -266,6 +267,9 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 		goto err_put_pages;
 	}
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	return shmem->vaddr;
 
 err_put_pages:
@@ -279,6 +283,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
+ * @is_iomem: returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * This function makes sure that a virtual address exists for the buffer backing
  * the shmem GEM object.
@@ -286,7 +292,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+void *drm_gem_shmem_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	void *vaddr;
@@ -295,7 +301,7 @@ void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
 	ret = mutex_lock_interruptible(&shmem->vmap_lock);
 	if (ret)
 		return ERR_PTR(ret);
-	vaddr = drm_gem_shmem_vmap_locked(shmem);
+	vaddr = drm_gem_shmem_vmap_locked(shmem, is_iomem);
 	mutex_unlock(&shmem->vmap_lock);
 
 	return vaddr;
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 05f63f28814d..77658f835774 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -818,17 +818,20 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
  * drm_gem_vram_object_vmap() - \
 	Implements &struct drm_gem_object_funcs.vmap
  * @gem:	The GEM object to map
+ * @is_iomem:	returns true if the mapped memory is I/O memory, or false
+ *              otherwise; can be NULL
  *
  * Returns:
  * The buffers virtual address on success, or
  * NULL otherwise.
  */
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem,
+				      bool *is_iomem)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 	void *base;
 
-	base = drm_gem_vram_vmap(gbo, NULL);
+	base = drm_gem_vram_vmap(gbo, is_iomem);
 	if (IS_ERR(base))
 		return NULL;
 	return base;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 32cfa5a48d42..558b79366bf4 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 vm_fault_t etnaviv_gem_fault(struct vm_fault *vmf);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index f24dd21c2363..c8b09ed7f936 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,8 +22,10 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(etnaviv_obj->pages, npages);
 }
 
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
+	if (is_iomem)
+		*is_iomem = false;
 	return etnaviv_gem_vmap(obj);
 }
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index 978e07591990..46ff11a39f23 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -35,7 +35,7 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
 extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
 extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
 	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
+extern void *nouveau_gem_prime_vmap(struct drm_gem_object *, bool *);
 extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
 
 #endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index bae6a3eccee0..b61376c91d31 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,7 +35,7 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(nvbo->bo.ttm->pages, npages);
 }
 
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
+void *nouveau_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 	int ret;
@@ -45,6 +45,8 @@ void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (is_iomem)
+		return ttm_kmap_obj_virtual(&nvbo->dma_buf_vmap, is_iomem);
 	return nvbo->dma_buf_vmap.virtual;
 }
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index 83c57d325ca8..f833d8376d44 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -94,7 +94,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	if (ret)
 		goto err_put_bo;
 
-	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
+	perfcnt->buf = drm_gem_shmem_vmap(&bo->base, NULL);
 	if (IS_ERR(perfcnt->buf)) {
 		ret = PTR_ERR(perfcnt->buf);
 		goto err_put_bo;
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index e749c0d0e819..3f80b2215f25 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -452,7 +452,7 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
+void *qxl_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
 				struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index e67ebbdeb7f2..9b2d4015e0d6 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,13 +54,13 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	return ERR_PTR(-ENOSYS);
 }
 
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+void *qxl_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 	void *ptr;
 	int ret;
 
-	ret = qxl_bo_kmap(bo, &ptr, NULL);
+	ret = qxl_bo_kmap(bo, &ptr, is_iomem);
 	if (ret < 0)
 		return ERR_PTR(ret);
 
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index 888e0f384c61..7f9cff9cb572 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -153,7 +153,7 @@ struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 							struct sg_table *sg);
 int radeon_gem_prime_pin(struct drm_gem_object *obj);
 void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
+void *radeon_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 /* atpx handler */
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b906e8fbd5f3..2019b54277e4 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,7 +39,7 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(bo->tbo.ttm->pages, npages);
 }
 
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
+void *radeon_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct radeon_bo *bo = gem_to_radeon_bo(obj);
 	int ret;
@@ -49,6 +49,8 @@ void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (is_iomem)
+		return ttm_kmap_obj_virtual(&bo->dma_buf_vmap, is_iomem);
 	return bo->dma_buf_vmap.virtual;
 }
 
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index 94fb1f593564..4c4b1904e046 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -278,7 +278,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 	y1 = gm12u320->fb_update.rect.y1;
 	y2 = gm12u320->fb_update.rect.y2;
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
+	vaddr = drm_gem_shmem_vmap(fb->obj[0], NULL);
 	if (IS_ERR(vaddr)) {
 		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
 		goto put_fb;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 72d30d90b856..c03462cef01c 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -767,7 +767,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	return drm_gem_cma_prime_mmap(obj, vma);
 }
 
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+void *vc4_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct vc4_bo *bo = to_vc4_bo(obj);
 
@@ -776,7 +776,7 @@ void *vc4_prime_vmap(struct drm_gem_object *obj)
 		return ERR_PTR(-EINVAL);
 	}
 
-	return drm_gem_cma_prime_vmap(obj);
+	return drm_gem_cma_prime_vmap(obj, is_iomem);
 }
 
 struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index 6627b20c99e9..c84a7eaf1f3e 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -733,7 +733,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
 						 struct dma_buf_attachment *attach,
 						 struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+void *vc4_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 int vc4_bo_cache_init(struct drm_device *dev);
 void vc4_bo_cache_destroy(struct drm_device *dev);
 int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index 5bd60ded3d81..b991cfce3d91 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -379,7 +379,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
 	return &obj->base;
 }
 
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static void *vgem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 	long n_pages = obj->size >> PAGE_SHIFT;
@@ -389,6 +389,9 @@ static void *vgem_prime_vmap(struct drm_gem_object *obj)
 	if (IS_ERR(pages))
 		return NULL;
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
 }
 
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f0b85e094111..b3c3ba661f38 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -272,13 +272,17 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	return gem_mmap_obj(xen_obj, vma);
 }
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				   bool *is_iomem)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
 
 	if (!xen_obj->pages)
 		return NULL;
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	/* Please see comment in gem_mmap_obj on mapping and attributes. */
 	return vmap(xen_obj->pages, xen_obj->num_pages,
 		    VM_MAP, PAGE_KERNEL);
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..adcf3d809c75 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -34,7 +34,8 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
 int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				   bool *is_iomem);
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
 				    void *vaddr);
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index cf13470810a5..662c5d5dfd05 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -631,7 +631,7 @@ struct drm_driver {
 	 * Deprecated vmap hook for GEM drivers. Please use
 	 * &drm_gem_object_funcs.vmap instead.
 	 */
-	void *(*gem_prime_vmap)(struct drm_gem_object *obj);
+	void *(*gem_prime_vmap)(struct drm_gem_object *obj, bool *is_iomem);
 
 	/**
 	 * @gem_prime_vunmap:
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index e71f75a2ab57..edc73b686c60 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -138,7 +138,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void *(*vmap)(struct drm_gem_object *obj);
+	void *(*vmap)(struct drm_gem_object *obj, bool *is_iomem);
 
 	/**
 	 * @vunmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 947ac95eb24a..69fdd18dc7b2 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 struct drm_gem_object *
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 6748379a0b44..ddb54aa1ac1a 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -95,7 +95,7 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_object *obj);
 void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
+void *drm_gem_shmem_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
-- 
2.23.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 4/8] drm/gem: Return I/O-memory flag from drm_gem_vmap()
  2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
                   ` (2 preceding siblings ...)
  2019-11-06  9:31 ` [PATCH 3/8] drm: Add is_iomem return parameter to struct drm_gem_object_funcs.vmap Thomas Zimmermann
@ 2019-11-06  9:31 ` Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 5/8] drm/client: Return I/O memory flag from drm_client_buffer_vmap() Thomas Zimmermann
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Thomas Zimmermann @ 2019-11-06  9:31 UTC (permalink / raw)
  To: daniel, christian.koenig, noralf; +Cc: Thomas Zimmermann, dri-devel

With this patch, drm_gem_vmap() forwards the is_iomem parameter
from the vmap implementation to its caller. The flag defaults to
false, which matches the old behaviour.
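
The calling convention can be sketched in plain userspace C — hypothetical
names, a system-memory-only stand-in for the DRM code, not the actual
kernel implementation:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical backend vmap: maps a buffer and reports whether the
 * mapping is I/O memory. Stands in for a driver's vmap implementation. */
void *backend_vmap(bool *is_iomem)
{
	static char framebuffer[64]; /* system-memory stand-in */

	if (is_iomem)
		*is_iomem = false; /* this backend maps system memory */
	return framebuffer;
}

/* Forwarding wrapper in the style of drm_gem_vmap(): is_iomem defaults
 * to false, and NULL is accepted so legacy callers keep the old
 * behaviour unchanged. */
void *generic_vmap(bool *is_iomem)
{
	if (is_iomem)
		*is_iomem = false; /* default matches the old semantics */
	return backend_vmap(is_iomem);
}
```

A caller that passes NULL keeps the old behaviour; a caller that passes a
pointer gets the flag filled in by the backend.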

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_client.c   | 2 +-
 drivers/gpu/drm/drm_gem.c      | 9 ++++++---
 drivers/gpu/drm/drm_internal.h | 2 +-
 drivers/gpu/drm/drm_prime.c    | 2 +-
 4 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index d9a2e3695525..0ecb588778c5 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -317,7 +317,7 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	vaddr = drm_gem_vmap(buffer->gem);
+	vaddr = drm_gem_vmap(buffer->gem, NULL);
 	if (IS_ERR(vaddr))
 		return vaddr;
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 0acfbd134e04..6b1ae482dfa9 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1246,14 +1246,17 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 		obj->dev->driver->gem_prime_unpin(obj);
 }
 
-void *drm_gem_vmap(struct drm_gem_object *obj)
+void *drm_gem_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	void *vaddr;
 
+	if (is_iomem)
+		*is_iomem = false; /* default value matches return type */
+
 	if (obj->funcs && obj->funcs->vmap)
-		vaddr = obj->funcs->vmap(obj, NULL);
+		vaddr = obj->funcs->vmap(obj, is_iomem);
 	else if (obj->dev->driver->gem_prime_vmap)
-		vaddr = obj->dev->driver->gem_prime_vmap(obj, NULL);
+		vaddr = obj->dev->driver->gem_prime_vmap(obj, is_iomem);
 	else
 		vaddr = ERR_PTR(-EOPNOTSUPP);
 
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index 51a2055c8f18..78578e6e1197 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -135,7 +135,7 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 
 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
-void *drm_gem_vmap(struct drm_gem_object *obj);
+void *drm_gem_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 /* drm_debugfs.c drm_debugfs_crc.c */
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 0814211b0f3f..68492ca418ec 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -671,7 +671,7 @@ void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
 	struct drm_gem_object *obj = dma_buf->priv;
 	void *vaddr;
 
-	vaddr = drm_gem_vmap(obj);
+	vaddr = drm_gem_vmap(obj, NULL);
 	if (IS_ERR(vaddr))
 		vaddr = NULL;
 
-- 
2.23.0


* [PATCH 5/8] drm/client: Return I/O memory flag from drm_client_buffer_vmap()
  2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
                   ` (3 preceding siblings ...)
  2019-11-06  9:31 ` [PATCH 4/8] drm/gem: Return I/O-memory flag from drm_gem_vmap() Thomas Zimmermann
@ 2019-11-06  9:31 ` Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 6/8] fbdev: Export default read and write operations as fb_cfb_{read,write}() Thomas Zimmermann
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Thomas Zimmermann @ 2019-11-06  9:31 UTC (permalink / raw)
  To: daniel, christian.koenig, noralf; +Cc: Thomas Zimmermann, dri-devel

With this patch, drm_client_buffer_vmap() forwards the is_iomem
parameter from the vmap implementation to its caller. The flag defaults
to false, which matches the old behaviour.
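
Since the client buffer caches the mapping, the memory kind has to be
cached alongside it. Roughly, with invented userspace stand-in names:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical client buffer caching both the mapping and its kind,
 * mirroring the new vaddr_is_iomem field on struct drm_client_buffer. */
struct client_buffer_stub {
	void *vaddr;
	bool vaddr_is_iomem;
};

/* Stand-in for the underlying GEM vmap. */
void *gem_vmap_stub(bool *is_iomem)
{
	static char fb[16];

	if (is_iomem)
		*is_iomem = false; /* this stub maps system memory */
	return fb;
}

int vmap_calls; /* counts how often the backend is actually hit */

/* drm_client_buffer_vmap()-style wrapper: a repeated call returns the
 * cached mapping and replays the cached is_iomem flag instead of
 * mapping again. */
void *client_buffer_vmap(struct client_buffer_stub *b, bool *is_iomem)
{
	if (!b->vaddr) {
		vmap_calls++;
		b->vaddr = gem_vmap_stub(&b->vaddr_is_iomem);
	}
	if (is_iomem)
		*is_iomem = b->vaddr_is_iomem;
	return b->vaddr;
}
```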

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_client.c    | 15 ++++++++++++---
 drivers/gpu/drm/drm_fb_helper.c |  4 ++--
 include/drm/drm_client.h        |  7 ++++++-
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 0ecb588778c5..44af56fc4b4d 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -290,6 +290,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
 /**
  * drm_client_buffer_vmap - Map DRM client buffer into address space
  * @buffer: DRM client buffer
+ * @is_iomem: Returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * This function maps a client buffer into kernel address space. If the
  * buffer is already mapped, it returns the mapping's address.
@@ -302,12 +304,16 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
  * Returns:
  *	The mapped memory's address
  */
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
+void *drm_client_buffer_vmap(struct drm_client_buffer *buffer, bool *is_iomem)
 {
 	void *vaddr;
+	bool vaddr_is_iomem;
 
-	if (buffer->vaddr)
+	if (buffer->vaddr) {
+		if (is_iomem)
+			*is_iomem = buffer->vaddr_is_iomem;
 		return buffer->vaddr;
+	}
 
 	/*
 	 * FIXME: The dependency on GEM here isn't required, we could
@@ -317,12 +323,15 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	vaddr = drm_gem_vmap(buffer->gem, NULL);
+	vaddr = drm_gem_vmap(buffer->gem, &vaddr_is_iomem);
 	if (IS_ERR(vaddr))
 		return vaddr;
 
 	buffer->vaddr = vaddr;
+	buffer->vaddr_is_iomem = vaddr_is_iomem;
 
+	if (is_iomem)
+		*is_iomem = vaddr_is_iomem;
 	return vaddr;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 8ebeccdeed23..eff75fad7cab 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -422,7 +422,7 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 
 		/* Generic fbdev uses a shadow buffer */
 		if (helper->buffer) {
-			vaddr = drm_client_buffer_vmap(helper->buffer);
+			vaddr = drm_client_buffer_vmap(helper->buffer, NULL);
 			if (IS_ERR(vaddr))
 				return;
 			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
@@ -2212,7 +2212,7 @@ int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 		fb_deferred_io_init(fbi);
 	} else {
 		/* buffer is mapped for HW framebuffer */
-		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
+		vaddr = drm_client_buffer_vmap(fb_helper->buffer, NULL);
 		if (IS_ERR(vaddr))
 			return PTR_ERR(vaddr);
 
diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
index 5cf2c5dd8b1e..053d58215be7 100644
--- a/include/drm/drm_client.h
+++ b/include/drm/drm_client.h
@@ -140,6 +140,11 @@ struct drm_client_buffer {
 	 */
 	void *vaddr;
 
+	/**
+	 * @vaddr_is_iomem: True if vaddr points to I/O memory, false otherwise
+	 */
+	bool vaddr_is_iomem;
+
 	/**
 	 * @fb: DRM framebuffer
 	 */
@@ -149,7 +154,7 @@ struct drm_client_buffer {
 struct drm_client_buffer *
 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
 void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
+void *drm_client_buffer_vmap(struct drm_client_buffer *buffer, bool *is_iomem);
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
 
 int drm_client_modeset_create(struct drm_client_dev *client);
-- 
2.23.0


* [PATCH 6/8] fbdev: Export default read and write operations as fb_cfb_{read,write}()
  2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
                   ` (4 preceding siblings ...)
  2019-11-06  9:31 ` [PATCH 5/8] drm/client: Return I/O memory flag from drm_client_buffer_vmap() Thomas Zimmermann
@ 2019-11-06  9:31 ` Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 7/8] drm/fb-helper: Select between fb_{sys,cfb}_read() and _write() Thomas Zimmermann
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Thomas Zimmermann @ 2019-11-06  9:31 UTC (permalink / raw)
  To: daniel, christian.koenig, noralf; +Cc: Thomas Zimmermann, dri-devel

The default read and write operations on the fbdev framebuffer are now
exported as fb_cfb_read() and fb_cfb_write(), so they can be called by
in-kernel users. This is required by DRM's fbdev helpers.
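
The refactoring follows a common pattern: the default implementation gets
a name and an export, and the file operation falls back to it when the
driver provides no hook of its own. A minimal userspace sketch (all names
invented):

```c
#include <stddef.h>

/* Hypothetical fb_info with an optional driver-provided read hook. */
struct fb_info_stub {
	long (*fb_read)(struct fb_info_stub *info, char *buf, size_t count);
};

/* The former default implementation, now a named, callable function
 * (the role fb_cfb_read() takes on in this patch). */
long fb_cfb_read_stub(struct fb_info_stub *info, char *buf, size_t count)
{
	(void)info;
	(void)buf;
	return (long)count; /* pretend we copied count bytes */
}

/* The file operation keeps its old behaviour: prefer the driver hook,
 * otherwise fall back to the now-exported default. */
long fb_read_stub(struct fb_info_stub *info, char *buf, size_t count)
{
	if (info->fb_read)
		return info->fb_read(info, buf, count);
	return fb_cfb_read_stub(info, buf, count);
}
```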

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/video/fbdev/core/fbmem.c | 53 ++++++++++++++++++++++++--------
 include/linux/fb.h               |  5 +++
 2 files changed, 46 insertions(+), 12 deletions(-)

diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
index 95c32952fa8a..e49cf2988001 100644
--- a/drivers/video/fbdev/core/fbmem.c
+++ b/drivers/video/fbdev/core/fbmem.c
@@ -754,11 +754,10 @@ static struct fb_info *file_fb_info(struct file *file)
 	return info;
 }
 
-static ssize_t
-fb_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+ssize_t
+fb_cfb_read(struct fb_info *info, char __user *buf, size_t count, loff_t *ppos)
 {
 	unsigned long p = *ppos;
-	struct fb_info *info = file_fb_info(file);
 	u8 *buffer, *dst;
 	u8 __iomem *src;
 	int c, cnt = 0, err = 0;
@@ -770,9 +769,6 @@ fb_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 	if (info->state != FBINFO_STATE_RUNNING)
 		return -EPERM;
 
-	if (info->fbops->fb_read)
-		return info->fbops->fb_read(info, buf, count, ppos);
-	
 	total_size = info->screen_size;
 
 	if (total_size == 0)
@@ -818,12 +814,13 @@ fb_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 
 	return (err) ? err : cnt;
 }
+EXPORT_SYMBOL(fb_cfb_read);
 
-static ssize_t
-fb_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
+ssize_t
+fb_cfb_write(struct fb_info *info, const char __user *buf, size_t count,
+	     loff_t *ppos)
 {
 	unsigned long p = *ppos;
-	struct fb_info *info = file_fb_info(file);
 	u8 *buffer, *src;
 	u8 __iomem *dst;
 	int c, cnt = 0, err = 0;
@@ -835,9 +832,6 @@ fb_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
 	if (info->state != FBINFO_STATE_RUNNING)
 		return -EPERM;
 
-	if (info->fbops->fb_write)
-		return info->fbops->fb_write(info, buf, count, ppos);
-	
 	total_size = info->screen_size;
 
 	if (total_size == 0)
@@ -890,6 +884,41 @@ fb_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
 
 	return (cnt) ? cnt : err;
 }
+EXPORT_SYMBOL(fb_cfb_write);
+
+static ssize_t
+fb_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+{
+	struct fb_info *info = file_fb_info(file);
+
+	if (!info || !info->screen_base)
+		return -ENODEV;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	if (info->fbops->fb_read)
+		return info->fbops->fb_read(info, buf, count, ppos);
+
+	return fb_cfb_read(info, buf, count, ppos);
+}
+
+static ssize_t
+fb_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
+{
+	struct fb_info *info = file_fb_info(file);
+
+	if (!info || !info->screen_base)
+		return -ENODEV;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	if (info->fbops->fb_write)
+		return info->fbops->fb_write(info, buf, count, ppos);
+
+	return fb_cfb_write(info, buf, count, ppos);
+}
 
 int
 fb_pan_display(struct fb_info *info, struct fb_var_screeninfo *var)
diff --git a/include/linux/fb.h b/include/linux/fb.h
index 41e0069eca0a..c69e098e6dc5 100644
--- a/include/linux/fb.h
+++ b/include/linux/fb.h
@@ -592,6 +592,11 @@ extern int fb_blank(struct fb_info *info, int blank);
 extern void cfb_fillrect(struct fb_info *info, const struct fb_fillrect *rect); 
 extern void cfb_copyarea(struct fb_info *info, const struct fb_copyarea *area); 
 extern void cfb_imageblit(struct fb_info *info, const struct fb_image *image);
+extern ssize_t fb_cfb_read(struct fb_info *info, char __user *buf,
+			   size_t count, loff_t *ppos);
+extern ssize_t fb_cfb_write(struct fb_info *info, const char __user *buf,
+			    size_t count, loff_t *ppos);
+
 /*
  * Drawing operations where framebuffer is in system RAM
  */
-- 
2.23.0


* [PATCH 7/8] drm/fb-helper: Select between fb_{sys,cfb}_read() and _write()
  2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
                   ` (5 preceding siblings ...)
  2019-11-06  9:31 ` [PATCH 6/8] fbdev: Export default read and write operations as fb_cfb_{read,write}() Thomas Zimmermann
@ 2019-11-06  9:31 ` Thomas Zimmermann
  2019-11-06  9:31 ` [PATCH 8/8] drm/fb-helper: Handle I/O memory correctly when flushing shadow fb Thomas Zimmermann
  2019-11-06 10:05 ` [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Daniel Vetter
  8 siblings, 0 replies; 10+ messages in thread
From: Thomas Zimmermann @ 2019-11-06  9:31 UTC (permalink / raw)
  To: daniel, christian.koenig, noralf; +Cc: Thomas Zimmermann, dri-devel

Generic fbdev emulation used to access framebuffers as if they were
located in system memory.

Depending on whether the framebuffer is in I/O or system memory, the
fbdev emulation now calls the correct access functions for each case.
This change makes it possible to support generic fbdev emulation on
systems that treat the two memory areas differently.
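
The dispatch is a single flag test per file operation. As a rough
userspace sketch (invented names; plain memcpy() stands in for both the
sys accessors and the kernel's I/O accessors):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the system-memory access path (fb_sys_read() family). */
size_t sys_read(const char *src, char *dst, size_t n)
{
	memcpy(dst, src, n);	/* plain CPU access to system memory */
	return n;
}

/* Stand-in for the I/O-memory access path (fb_cfb_read() family);
 * in the kernel this would go through memcpy_fromio(). */
size_t cfb_read(const char *src, char *dst, size_t n)
{
	memcpy(dst, src, n);
	return n;
}

/* Dispatch in the style of drm_fbdev_fb_read(): one flag decides which
 * family of accessors is safe for the framebuffer memory. */
size_t fbdev_read(bool screen_buffer_is_iomem,
		  const char *src, char *dst, size_t n)
{
	if (screen_buffer_is_iomem)
		return cfb_read(src, dst, n);
	return sys_read(src, dst, n);
}
```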

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fb_helper.c | 110 ++++++++++++++++++++++++++++++--
 include/drm/drm_fb_helper.h     |  14 ++++
 2 files changed, 118 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index eff75fad7cab..174e6d97223f 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -771,6 +771,45 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
 
+/**
+ * drm_fb_helper_cfb_read - wrapper around fb_cfb_read
+ * @info: fb_info struct pointer
+ * @buf: userspace buffer to read from framebuffer memory
+ * @count: number of bytes to read from framebuffer memory
+ * @ppos: read offset within framebuffer memory
+ *
+ * A wrapper around fb_cfb_read implemented by fbdev core
+ */
+ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
+			       size_t count, loff_t *ppos)
+{
+	return fb_cfb_read(info, buf, count, ppos);
+}
+EXPORT_SYMBOL(drm_fb_helper_cfb_read);
+
+/**
+ * drm_fb_helper_cfb_write - wrapper around fb_cfb_write
+ * @info: fb_info struct pointer
+ * @buf: userspace buffer to write to framebuffer memory
+ * @count: number of bytes to write to framebuffer memory
+ * @ppos: write offset within framebuffer memory
+ *
+ * A wrapper around fb_cfb_write implemented by fbdev core
+ */
+ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
+				size_t count, loff_t *ppos)
+{
+	ssize_t ret;
+
+	ret = fb_cfb_write(info, buf, count, ppos);
+	if (ret > 0)
+		drm_fb_helper_dirty(info, 0, 0, info->var.xres,
+				    info->var.yres);
+
+	return ret;
+}
+EXPORT_SYMBOL(drm_fb_helper_cfb_write);
+
 /**
  * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
  * @info: fbdev registered by the helper
@@ -2122,6 +2161,59 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 		return -ENODEV;
 }
 
+static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
+				 size_t count, loff_t *ppos)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+
+	if (fb_helper->screen_buffer_is_iomem)
+		return drm_fb_helper_cfb_read(info, buf, count, ppos);
+	return drm_fb_helper_sys_read(info, buf, count, ppos);
+}
+
+static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
+				  size_t count, loff_t *ppos)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+
+	if (fb_helper->screen_buffer_is_iomem)
+		return drm_fb_helper_cfb_write(info, buf, count, ppos);
+	return drm_fb_helper_sys_write(info, buf, count, ppos);
+}
+
+static void drm_fbdev_fb_fillrect(struct fb_info *info,
+				  const struct fb_fillrect *rect)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+
+	if (fb_helper->screen_buffer_is_iomem)
+		drm_fb_helper_cfb_fillrect(info, rect);
+	else
+		drm_fb_helper_sys_fillrect(info, rect);
+}
+
+static void drm_fbdev_fb_copyarea(struct fb_info *info,
+				  const struct fb_copyarea *region)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+
+	if (fb_helper->screen_buffer_is_iomem)
+		drm_fb_helper_cfb_copyarea(info, region);
+	else
+		drm_fb_helper_sys_copyarea(info, region);
+}
+
+static void drm_fbdev_fb_imageblit(struct fb_info *info,
+				   const struct fb_image *image)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+
+	if (fb_helper->screen_buffer_is_iomem)
+		drm_fb_helper_cfb_imageblit(info, image);
+	else
+		drm_fb_helper_sys_imageblit(info, image);
+}
+
 static struct fb_ops drm_fbdev_fb_ops = {
 	.owner		= THIS_MODULE,
 	DRM_FB_HELPER_DEFAULT_OPS,
@@ -2129,11 +2221,11 @@ static struct fb_ops drm_fbdev_fb_ops = {
 	.fb_release	= drm_fbdev_fb_release,
 	.fb_destroy	= drm_fbdev_fb_destroy,
 	.fb_mmap	= drm_fbdev_fb_mmap,
-	.fb_read	= drm_fb_helper_sys_read,
-	.fb_write	= drm_fb_helper_sys_write,
-	.fb_fillrect	= drm_fb_helper_sys_fillrect,
-	.fb_copyarea	= drm_fb_helper_sys_copyarea,
-	.fb_imageblit	= drm_fb_helper_sys_imageblit,
+	.fb_read	= drm_fbdev_fb_read,
+	.fb_write	= drm_fbdev_fb_write,
+	.fb_fillrect	= drm_fbdev_fb_fillrect,
+	.fb_copyarea	= drm_fbdev_fb_copyarea,
+	.fb_imageblit	= drm_fbdev_fb_imageblit,
 };
 
 static struct fb_deferred_io drm_fbdev_defio = {
@@ -2209,10 +2301,15 @@ int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 		fbi->screen_buffer = shadow;
 		fbi->fbdefio = &drm_fbdev_defio;
 
+		/* The shadowfb is always in system memory. */
+		fb_helper->screen_buffer_is_iomem = false;
+
 		fb_deferred_io_init(fbi);
 	} else {
+		bool is_iomem;
+
 		/* buffer is mapped for HW framebuffer */
-		vaddr = drm_client_buffer_vmap(fb_helper->buffer, NULL);
+		vaddr = drm_client_buffer_vmap(fb_helper->buffer, &is_iomem);
 		if (IS_ERR(vaddr))
 			return PTR_ERR(vaddr);
 
@@ -2223,6 +2320,7 @@ int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 			fbi->fix.smem_start =
 				page_to_phys(virt_to_page(fbi->screen_buffer));
 #endif
+		fb_helper->screen_buffer_is_iomem = is_iomem;
 	}
 
 	return 0;
diff --git a/include/drm/drm_fb_helper.h b/include/drm/drm_fb_helper.h
index 2338e9f94a03..afceae8db4af 100644
--- a/include/drm/drm_fb_helper.h
+++ b/include/drm/drm_fb_helper.h
@@ -155,6 +155,15 @@ struct drm_fb_helper {
 	 */
 	struct list_head kernel_fb_list;
 
+	/**
+	 * @screen_buffer_is_iomem:
+	 *
+	 * True if info->screen_buffer refers to I/O memory, false otherwise.
+	 * Depending on this flag, fb_ops should use either the sys or the
+	 * cfb functions.
+	 */
+	bool screen_buffer_is_iomem;
+
 	/**
 	 * @delayed_hotplug:
 	 *
@@ -248,6 +257,11 @@ void drm_fb_helper_sys_copyarea(struct fb_info *info,
 void drm_fb_helper_sys_imageblit(struct fb_info *info,
 				 const struct fb_image *image);
 
+ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
+			       size_t count, loff_t *ppos);
+ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
+				size_t count, loff_t *ppos);
+
 void drm_fb_helper_cfb_fillrect(struct fb_info *info,
 				const struct fb_fillrect *rect);
 void drm_fb_helper_cfb_copyarea(struct fb_info *info,
-- 
2.23.0


* [PATCH 8/8] drm/fb-helper: Handle I/O memory correctly when flushing shadow fb
  2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
                   ` (6 preceding siblings ...)
  2019-11-06  9:31 ` [PATCH 7/8] drm/fb-helper: Select between fb_{sys,cfb}_read() and _write() Thomas Zimmermann
@ 2019-11-06  9:31 ` Thomas Zimmermann
  2019-11-06 10:05 ` [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Daniel Vetter
  8 siblings, 0 replies; 10+ messages in thread
From: Thomas Zimmermann @ 2019-11-06  9:31 UTC (permalink / raw)
  To: daniel, christian.koenig, noralf; +Cc: Thomas Zimmermann, dri-devel

The fbdev console's framebuffer can be located in I/O memory, such
as video RAM. When flushing the shadow fb, we now test for this case
and copy with memcpy_toio() instead of a plain memcpy(). The shadow
fb itself is always in system memory.
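
The per-scanline copy with the flag test can be sketched in userspace
like this (fake_memcpy_toio() is a plain-memory stand-in for the kernel's
memcpy_toio(); names and layout are simplified):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Userspace stand-in for memcpy_toio(); in the kernel the destination
 * would be a void __iomem * and copied with I/O accessors. */
void fake_memcpy_toio(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
}

/* Per-scanline shadow-fb flush in the style of
 * drm_fb_helper_dirty_blit_real(): the clip rectangle is copied line by
 * line, picking the copy routine that matches the destination memory. */
void blit_rect(char *dst, const char *src, size_t pitch,
	       size_t y1, size_t y2, size_t len, bool dst_is_iomem)
{
	size_t y;

	src += y1 * pitch; /* start of the clip rectangle */
	dst += y1 * pitch;
	for (y = y1; y < y2; y++) {
		if (dst_is_iomem)
			fake_memcpy_toio(dst, src, len);
		else
			memcpy(dst, src, len);
		src += pitch;
		dst += pitch;
	}
}
```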

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fb_helper.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 174e6d97223f..e76c42f8852e 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -393,10 +393,14 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
 	void *src = fb_helper->fbdev->screen_buffer + offset;
 	void *dst = fb_helper->buffer->vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
+	bool is_iomem = fb_helper->buffer->vaddr_is_iomem;
 	unsigned int y;
 
 	for (y = clip->y1; y < clip->y2; y++) {
-		memcpy(dst, src, len);
+		if (is_iomem)
+			memcpy_toio((void __iomem *)dst, src, len);
+		else
+			memcpy(dst, src, len);
 		src += fb->pitches[0];
 		dst += fb->pitches[0];
 	}
-- 
2.23.0


* Re: [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation
  2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
                   ` (7 preceding siblings ...)
  2019-11-06  9:31 ` [PATCH 8/8] drm/fb-helper: Handle I/O memory correctly when flushing shadow fb Thomas Zimmermann
@ 2019-11-06 10:05 ` Daniel Vetter
  8 siblings, 0 replies; 10+ messages in thread
From: Daniel Vetter @ 2019-11-06 10:05 UTC (permalink / raw)
  To: Thomas Zimmermann; +Cc: dri-devel, christian.koenig

On Wed, Nov 06, 2019 at 10:31:13AM +0100, Thomas Zimmermann wrote:
> We recently had a discussion if/how fbdev emulation could support
> framebuffers in I/O memory on all platforms. [1]
> 
> I typed up a patchset that passes information about the memory area
> from memory manager to client (e.g., fbdev emulation). The client can
> take this into consideration when accessing the framebuffer.
> 
> The alternative proposal is to introduce a separate vmap() call that
> only returns I/O memorym or NULL if the framebuffer is not in I/O
> memory. AFAICS the benefit of this idea is the cleaner interface and
> the ability to modify drivers one by one. The drawback is some additional
> boilerplate code in drivers and clients.

Imo we need the correct types, to let sparse check this stuff for us.
Otherwise this is just going to be whack-a-mole, since on x86 (and I think
also on arm) there's not really a difference between iomem and system
memory.

One idea I had is to do a new opaque pointer struct, and _lots_ of new
functions to handle it. Unfortunately that means no more pointer
arithmetic on that pointer (this isn't C++):

	struct opaque_dev_ptr {
		union { void __iomem *iomem; void *smem; };
		bool is_iomem;
	};

So really it's a _lot_ of work.
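
As a rough userspace illustration of the helper code this proposal
implies (purely a sketch; names are invented and plain memcpy() stands
in for the __iomem accessors):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the proposed opaque pointer type: the union hides whether
 * the address is I/O or system memory, and a flag records which. */
struct opaque_dev_ptr {
	union {
		void *iomem;	/* would be void __iomem * in the kernel */
		void *smem;
	};
	bool is_iomem;
};

/* Plain pointer arithmetic is gone; every adjustment needs a helper. */
struct opaque_dev_ptr opaque_add(struct opaque_dev_ptr p, size_t off)
{
	if (p.is_iomem)
		p.iomem = (char *)p.iomem + off;
	else
		p.smem = (char *)p.smem + off;
	return p;
}

/* Every access needs a helper, too. */
void opaque_memcpy_to(struct opaque_dev_ptr dst, const void *src, size_t len)
{
	if (dst.is_iomem)
		memcpy(dst.iomem, src, len); /* memcpy_toio() in the kernel */
	else
		memcpy(dst.smem, src, len);
}
```

Each additional operation (memset, per-byte reads, comparisons) would
need the same treatment, which is where the "_lot_ of work" comes from.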

The other issue is that we also need to fix the dma-buf interfaces. Which
is going to be even more work.

All that for gain I'm not really sure is worth it - I don't even know
which platforms we're fixing with this.
-Daniel

> 
> [1] https://lists.freedesktop.org/archives/dri-devel/2019-November/242464.html
> 
> Thomas Zimmermann (8):
>   drm/vram-helper: Tell caller if vmap() returned I/O memory
>   drm/qxl: Tell caller if kmap() returned I/O memory
>   drm: Add is_iomem return parameter to struct drm_gem_object_funcs.vmap
>   drm/gem: Return I/O-memory flag from drm_gem_vram()
>   drm/client: Return I/O memory flag from drm_client_buffer_vmap()
>   fbdev: Export default read and write operations as
>     fb_cfb_{read,write}()
>   drm/fb-helper: Select between fb_{sys,cfb}_read() and _write()
>   drm/fb-helper: Handle I/O memory correctly when flushing shadow fb
> 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |   6 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 +-
>  drivers/gpu/drm/ast/ast_mode.c              |   6 +-
>  drivers/gpu/drm/cirrus/cirrus.c             |   2 +-
>  drivers/gpu/drm/drm_client.c                |  15 ++-
>  drivers/gpu/drm/drm_fb_helper.c             | 118 ++++++++++++++++++--
>  drivers/gpu/drm/drm_gem.c                   |   9 +-
>  drivers/gpu/drm/drm_gem_cma_helper.c        |   7 +-
>  drivers/gpu/drm/drm_gem_shmem_helper.c      |  12 +-
>  drivers/gpu/drm/drm_gem_vram_helper.c       |  13 ++-
>  drivers/gpu/drm/drm_internal.h              |   2 +-
>  drivers/gpu/drm/drm_prime.c                 |   2 +-
>  drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   2 +-
>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |   4 +-
>  drivers/gpu/drm/mgag200/mgag200_cursor.c    |   4 +-
>  drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 +-
>  drivers/gpu/drm/nouveau/nouveau_prime.c     |   4 +-
>  drivers/gpu/drm/panfrost/panfrost_perfcnt.c |   2 +-
>  drivers/gpu/drm/qxl/qxl_display.c           |   6 +-
>  drivers/gpu/drm/qxl/qxl_draw.c              |   4 +-
>  drivers/gpu/drm/qxl/qxl_drv.h               |   4 +-
>  drivers/gpu/drm/qxl/qxl_object.c            |   7 +-
>  drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
>  drivers/gpu/drm/qxl/qxl_prime.c             |   4 +-
>  drivers/gpu/drm/radeon/radeon_drv.c         |   2 +-
>  drivers/gpu/drm/radeon/radeon_prime.c       |   4 +-
>  drivers/gpu/drm/tiny/gm12u320.c             |   2 +-
>  drivers/gpu/drm/vc4/vc4_bo.c                |   4 +-
>  drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
>  drivers/gpu/drm/vgem/vgem_drv.c             |   5 +-
>  drivers/gpu/drm/xen/xen_drm_front_gem.c     |   6 +-
>  drivers/gpu/drm/xen/xen_drm_front_gem.h     |   3 +-
>  drivers/video/fbdev/core/fbmem.c            |  53 +++++++--
>  include/drm/drm_client.h                    |   7 +-
>  include/drm/drm_drv.h                       |   2 +-
>  include/drm/drm_fb_helper.h                 |  14 +++
>  include/drm/drm_gem.h                       |   2 +-
>  include/drm/drm_gem_cma_helper.h            |   2 +-
>  include/drm/drm_gem_shmem_helper.h          |   2 +-
>  include/drm/drm_gem_vram_helper.h           |   2 +-
>  include/linux/fb.h                          |   5 +
>  41 files changed, 278 insertions(+), 78 deletions(-)
> 
> -- 
> 2.23.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

Thread overview: 10+ messages
2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 1/8] drm/vram-helper: Tell caller if vmap() returned I/O memory Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 2/8] drm/qxl: Tell caller if kmap() " Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 3/8] drm: Add is_iomem return parameter to struct drm_gem_object_funcs.vmap Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 4/8] drm/gem: Return I/O-memory flag from drm_gem_vram() Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 5/8] drm/client: Return I/O memory flag from drm_client_buffer_vmap() Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 6/8] fbdev: Export default read and write operations as fb_cfb_{read, write}() Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 7/8] drm/fb-helper: Select between fb_{sys, cfb}_read() and _write() Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 8/8] drm/fb-helper: Handle I/O memory correctly when flushing shadow fb Thomas Zimmermann
2019-11-06 10:05 ` [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Daniel Vetter
