linux-rockchip.lists.infradead.org archive mirror
* [PATCH v3 0/7] Support GEM object mappings from I/O memory
@ 2020-09-29 15:14 Thomas Zimmermann
  2020-09-29 15:14 ` [PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from internal kmap function Thomas Zimmermann
                   ` (6 more replies)
  0 siblings, 7 replies; 33+ messages in thread
From: Thomas Zimmermann @ 2020-09-29 15:14 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, sam,
	alexander.deucher, christian.koenig, kraxel, l.stach,
	linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
	sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
	tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
	hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
	sumit.semwal, emil.velikov, luben.tuikov, apaneers,
	linus.walleij, melissa.srw, chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
	linux-media

DRM's fbdev console uses regular load and store operations to update
framebuffer memory. The bochs driver on sparc64 requires the use of
I/O-specific load and store operations. We have a workaround, but need
a long-term solution to the problem.

This patchset changes GEM's vmap/vunmap interfaces to forward instances
of type struct dma_buf_map and updates the generic fbdev emulation to
use them correctly. This enables I/O-memory operations on all framebuffers
that require and support them.
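
For illustration, a minimal sketch of how a caller dispatches on the
mapping's memory type after the conversion (src and len are placeholders;
the fbdev code in patch #6 uses its own update helpers):

	struct dma_buf_map map;
	int ret;

	ret = obj->funcs->vmap(obj, &map); /* interface after patch #3 */
	if (ret)
		return ret;

	if (map.is_iomem)
		memcpy_toio(map.vaddr_iomem, src, len); /* I/O memory */
	else
		memcpy(map.vaddr, src, len); /* system memory */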

Patches #1 and #2 prepare VRAM helpers and TTM for the changes in patch
#3. Patch #3 updates GEM's vmap/vunmap callbacks to forward instances of
type struct dma_buf_map. While the patch touches many files throughout the
DRM modules, the applied changes are mostly trivial interface fixes.

Patch #4 updates GEM's internal vmap/vunmap functions to forward struct
dma_buf_map.

Patches #5 and #6 convert DRM clients and generic fbdev emulation to use
struct dma_buf_map. Updating the fbdev framebuffer will select the correct
functions, either for system or I/O memory.

The patch set is just enough to fix the bochs issue on sparc64 in a
correct way. Patch #7 updates the TODO list with ideas for further
improvements.

v3:
	* recreate the whole patchset on top of struct dma_buf_map

v2:
	* RFC patchset

Thomas Zimmermann (7):
  drm/vram-helper: Remove invariant parameters from internal kmap
    function
  drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM
    backends
  drm/gem: Update internal GEM vmap/vunmap interfaces to use struct
    dma_buf_map
  drm/gem: Store client buffer mappings as struct dma_buf_map
  drm/fb_helper: Support framebuffers in I/O memory
  drm/todo: Update entries around struct dma_buf_map

 Documentation/gpu/todo.rst                  |  24 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  14 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   4 +-
 drivers/gpu/drm/ast/ast_cursor.c            |  29 ++-
 drivers/gpu/drm/ast/ast_drv.h               |   7 +-
 drivers/gpu/drm/bochs/bochs_kms.c           |   1 -
 drivers/gpu/drm/drm_client.c                |  38 ++--
 drivers/gpu/drm/drm_fb_helper.c             | 238 ++++++++++++++++++--
 drivers/gpu/drm/drm_gem.c                   |  28 ++-
 drivers/gpu/drm/drm_gem_cma_helper.c        |  14 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  48 ++--
 drivers/gpu/drm/drm_gem_vram_helper.c       |  93 ++++----
 drivers/gpu/drm/drm_internal.h              |   5 +-
 drivers/gpu/drm/drm_prime.c                 |  14 +-
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   4 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  11 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c     |   6 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.h     |   4 +-
 drivers/gpu/drm/lima/lima_gem.c             |   6 +-
 drivers/gpu/drm/lima/lima_sched.c           |  11 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c      |  12 +-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |   4 +-
 drivers/gpu/drm/nouveau/nouveau_prime.c     |   9 +-
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  14 +-
 drivers/gpu/drm/qxl/qxl_display.c           |  13 +-
 drivers/gpu/drm/qxl/qxl_draw.c              |  16 +-
 drivers/gpu/drm/qxl/qxl_drv.h               |   8 +-
 drivers/gpu/drm/qxl/qxl_object.c            |  23 +-
 drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  12 +-
 drivers/gpu/drm/radeon/radeon_gem.c         |   4 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |   9 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  22 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h |   4 +-
 drivers/gpu/drm/tiny/cirrus.c               |  10 +-
 drivers/gpu/drm/tiny/gm12u320.c             |  10 +-
 drivers/gpu/drm/udl/udl_modeset.c           |   8 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c       |  11 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |   6 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  16 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  18 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |   6 +-
 include/drm/drm_client.h                    |   7 +-
 include/drm/drm_gem.h                       |   5 +-
 include/drm/drm_gem_cma_helper.h            |   4 +-
 include/drm/drm_gem_shmem_helper.h          |   4 +-
 include/drm/drm_gem_vram_helper.h           |   4 +-
 include/drm/drm_mode_config.h               |  12 -
 include/drm/ttm/ttm_bo_api.h                |  24 ++
 include/linux/dma-buf-map.h                 |  92 +++++++-
 51 files changed, 685 insertions(+), 305 deletions(-)

--
2.28.0



* [PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from internal kmap function
  2020-09-29 15:14 [PATCH v3 0/7] Support GEM object mappings from I/O memory Thomas Zimmermann
@ 2020-09-29 15:14 ` Thomas Zimmermann
  2020-10-02  9:48   ` Daniel Vetter
  2020-09-29 15:14 ` [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion Thomas Zimmermann
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-09-29 15:14 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, sam,
	alexander.deucher, christian.koenig, kraxel, l.stach,
	linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
	sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
	tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
	hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
	sumit.semwal, emil.velikov, luben.tuikov, apaneers,
	linus.walleij, melissa.srw, chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
	linux-media

The parameters map and is_iomem are always set to the same values. Remove
them to prepare the function for the conversion to struct dma_buf_map.
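
At the call site in drm_gem_vram_vmap(), the invocation thus simplifies
from

	base = drm_gem_vram_kmap_locked(gbo, true, NULL);

to

	base = drm_gem_vram_kmap_locked(gbo);

as the hunk below shows.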

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_vram_helper.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 3fe4b326e18e..256b346664f2 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -382,16 +382,16 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
-				      bool map, bool *is_iomem)
+static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
 {
 	int ret;
 	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+	bool is_iomem;
 
 	if (gbo->kmap_use_count > 0)
 		goto out;
 
-	if (kmap->virtual || !map)
+	if (kmap->virtual)
 		goto out;
 
 	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
@@ -399,15 +399,10 @@ static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
 		return ERR_PTR(ret);
 
 out:
-	if (!kmap->virtual) {
-		if (is_iomem)
-			*is_iomem = false;
+	if (!kmap->virtual)
 		return NULL; /* not mapped; don't increment ref */
-	}
 	++gbo->kmap_use_count;
-	if (is_iomem)
-		return ttm_kmap_obj_virtual(kmap, is_iomem);
-	return kmap->virtual;
+	return ttm_kmap_obj_virtual(kmap, &is_iomem);
 }
 
 static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
@@ -452,7 +447,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo, true, NULL);
+	base = drm_gem_vram_kmap_locked(gbo);
 	if (IS_ERR(base)) {
 		ret = PTR_ERR(base);
 		goto err_drm_gem_vram_unpin_locked;
-- 
2.28.0



* [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-09-29 15:14 [PATCH v3 0/7] Support GEM object mappings from I/O memory Thomas Zimmermann
  2020-09-29 15:14 ` [PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from internal kmap function Thomas Zimmermann
@ 2020-09-29 15:14 ` Thomas Zimmermann
  2020-09-29 15:35   ` Christian König
  2020-09-29 15:14 ` [PATCH v3 3/7] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends Thomas Zimmermann
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-09-29 15:14 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, sam,
	alexander.deucher, christian.koenig, kraxel, l.stach,
	linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
	sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
	tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
	hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
	sumit.semwal, emil.velikov, luben.tuikov, apaneers,
	linus.walleij, melissa.srw, chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
	linux-media

The new helper ttm_kmap_obj_to_dma_buf_map() extracts the address and
location from an instance of TTM's kmap_obj and initializes a struct
dma_buf_map with these values. Helpful for TTM-based drivers.
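
For example, a TTM-based driver's vmap code can now convert its kmap
object in a single call. The sketch below mirrors the amdgpu conversion
in patch #3; the driver type and field names are illustrative:

	int driver_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
	{
		struct driver_bo *bo = to_driver_bo(obj); /* illustrative */
		int ret;

		ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->dma_buf_vmap);
		if (ret)
			return ret;
		ttm_kmap_obj_to_dma_buf_map(&bo->dma_buf_vmap, map);

		return 0;
	}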

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
 include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index c96a25d571c8..62d89f05a801 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -34,6 +34,7 @@
 #include <drm/drm_gem.h>
 #include <drm/drm_hashtab.h>
 #include <drm/drm_vma_manager.h>
+#include <linux/dma-buf-map.h>
 #include <linux/kref.h>
 #include <linux/list.h>
 #include <linux/wait.h>
@@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct ttm_bo_kmap_obj *map,
 	return map->virtual;
 }
 
+/**
+ * ttm_kmap_obj_to_dma_buf_map
+ *
+ * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
+ * @map: Returns the mapping as struct dma_buf_map
+ *
+ * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
+ * is not mapped, the returned mapping is initialized to NULL.
+ */
+static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj *kmap,
+					       struct dma_buf_map *map)
+{
+	bool is_iomem;
+	void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
+
+	if (!vaddr)
+		dma_buf_map_clear(map);
+	else if (is_iomem)
+		dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
+	else
+		dma_buf_map_set_vaddr(map, vaddr);
+}
+
 /**
  * ttm_bo_kmap
  *
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index fd1aba545fdf..2e8bbecb5091 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -45,6 +45,12 @@
  *
  *	dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
  *
+ * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
 	map->is_iomem = false;
 }
 
+/**
+ * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
+ * @map:		The dma-buf mapping structure
+ * @vaddr_iomem:	An I/O-memory address
+ *
+ * Sets the address and the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
+					       void __iomem *vaddr_iomem)
+{
+	map->vaddr_iomem = vaddr_iomem;
+	map->is_iomem = true;
+}
+
 /**
  * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
  * @lhs:	The dma-buf mapping structure
-- 
2.28.0



* [PATCH v3 3/7] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
  2020-09-29 15:14 [PATCH v3 0/7] Support GEM object mappings from I/O memory Thomas Zimmermann
  2020-09-29 15:14 ` [PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from internal kmap function Thomas Zimmermann
  2020-09-29 15:14 ` [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion Thomas Zimmermann
@ 2020-09-29 15:14 ` Thomas Zimmermann
  2020-10-02 13:02   ` Daniel Vetter
  2020-09-29 15:14 ` [PATCH v3 4/7] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map Thomas Zimmermann
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-09-29 15:14 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, sam,
	alexander.deucher, christian.koenig, kraxel, l.stach,
	linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
	sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
	tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
	hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
	sumit.semwal, emil.velikov, luben.tuikov, apaneers,
	linus.walleij, melissa.srw, chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
	linux-media

This patch replaces the use of raw pointers in GEM objects' vmap/vunmap
functions with instances of struct dma_buf_map. GEM backends are
converted as well.

For most GEM backends, this simply changes the returned type. The GEM VRAM
helpers are also updated to indicate whether the returned framebuffer
address is in system or I/O memory.
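
For a backend in system memory, the converted callback typically reduces
to filling in the mapping and returning an errno code. A sketch with
illustrative names (the etnaviv and rockchip hunks below follow this
pattern):

	static int driver_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
	{
		void *vaddr = driver_vmap(obj); /* illustrative backend call */

		if (!vaddr)
			return -ENOMEM;
		dma_buf_map_set_vaddr(map, vaddr); /* I/O memory uses dma_buf_map_set_vaddr_iomem() */

		return 0;
	}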

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 14 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |  4 +-
 drivers/gpu/drm/ast/ast_cursor.c            | 29 +++----
 drivers/gpu/drm/ast/ast_drv.h               |  7 +-
 drivers/gpu/drm/drm_gem.c                   | 22 ++---
 drivers/gpu/drm/drm_gem_cma_helper.c        | 14 ++--
 drivers/gpu/drm/drm_gem_shmem_helper.c      | 48 ++++++-----
 drivers/gpu/drm/drm_gem_vram_helper.c       | 90 +++++++++++----------
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |  4 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 11 ++-
 drivers/gpu/drm/exynos/exynos_drm_gem.c     |  6 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.h     |  4 +-
 drivers/gpu/drm/lima/lima_gem.c             |  6 +-
 drivers/gpu/drm/lima/lima_sched.c           | 11 ++-
 drivers/gpu/drm/mgag200/mgag200_mode.c      | 12 +--
 drivers/gpu/drm/nouveau/nouveau_gem.h       |  4 +-
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  9 ++-
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 ++--
 drivers/gpu/drm/qxl/qxl_display.c           | 13 +--
 drivers/gpu/drm/qxl/qxl_draw.c              | 16 ++--
 drivers/gpu/drm/qxl/qxl_drv.h               |  8 +-
 drivers/gpu/drm/qxl/qxl_object.c            | 23 +++---
 drivers/gpu/drm/qxl/qxl_object.h            |  2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             | 12 +--
 drivers/gpu/drm/radeon/radeon_gem.c         |  4 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  9 ++-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 +++--
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h |  4 +-
 drivers/gpu/drm/tiny/cirrus.c               | 10 ++-
 drivers/gpu/drm/tiny/gm12u320.c             | 10 ++-
 drivers/gpu/drm/udl/udl_modeset.c           |  8 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c       | 11 ++-
 drivers/gpu/drm/vc4/vc4_bo.c                |  6 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |  2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             | 16 ++--
 drivers/gpu/drm/xen/xen_drm_front_gem.c     | 18 +++--
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |  6 +-
 include/drm/drm_gem.h                       |  5 +-
 include/drm/drm_gem_cma_helper.h            |  4 +-
 include/drm/drm_gem_shmem_helper.h          |  4 +-
 include/drm/drm_gem_vram_helper.h           |  4 +-
 41 files changed, 304 insertions(+), 222 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 5b465ab774d1..de7d0cfe1b93 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -44,13 +44,14 @@
 /**
  * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
  * @obj: GEM BO
+ * @map: The virtual address of the mapping.
  *
  * Sets up an in-kernel virtual mapping of the BO's memory.
  *
  * Returns:
- * The virtual address of the mapping or an error pointer.
+ * 0 on success, or a negative errno code otherwise.
  */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
+int amdgpu_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
 	int ret;
@@ -58,19 +59,20 @@ void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
 	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
 			  &bo->dma_buf_vmap);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
+	ttm_kmap_obj_to_dma_buf_map(&bo->dma_buf_vmap, map);
 
-	return bo->dma_buf_vmap.virtual;
+	return 0;
 }
 
 /**
  * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
  * @obj: GEM BO
- * @vaddr: Virtual address (unused)
+ * @map: Virtual address (unused)
  *
  * Tears down the in-kernel virtual mapping of the BO's memory.
  */
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 2c5c84a06bb9..622642793064 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -31,8 +31,8 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
 					    struct dma_buf *dma_buf);
 bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
 				      struct amdgpu_bo *bo);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int amdgpu_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
 			  struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index e0f4613918ad..459a3774e4e1 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
 
 	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
 	struct drm_device *dev = &ast->base;
 	size_t size, i;
 	struct drm_gem_vram_object *gbo;
-	void __iomem *vaddr;
+	struct dma_buf_map map;
 	int ret;
 
 	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
-		vaddr = drm_gem_vram_vmap(gbo);
-		if (IS_ERR(vaddr)) {
-			ret = PTR_ERR(vaddr);
+		ret = drm_gem_vram_vmap(gbo, &map);
+		if (ret) {
 			drm_gem_vram_unpin(gbo);
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
 
 		ast->cursor.gbo[i] = gbo;
-		ast->cursor.vaddr[i] = vaddr;
+		ast->cursor.map[i] = map;
 	}
 
 	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
 	while (i) {
 		--i;
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -170,8 +169,8 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 {
 	struct drm_device *dev = &ast->base;
 	struct drm_gem_vram_object *gbo;
+	struct dma_buf_map map;
 	int ret;
-	void *src;
 	void __iomem *dst;
 
 	if (drm_WARN_ON_ONCE(dev, fb->width > AST_MAX_HWC_WIDTH) ||
@@ -183,18 +182,16 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 	ret = drm_gem_vram_pin(gbo, 0);
 	if (ret)
 		return ret;
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
-		ret = PTR_ERR(src);
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (ret)
 		goto err_drm_gem_vram_unpin;
-	}
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
 
 	/* do data transfer to cursor BO */
-	update_cursor_image(dst, src, fb->width, fb->height);
+	update_cursor_image(dst, map.vaddr, fb->width, fb->height);
 
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &map);
 	drm_gem_vram_unpin(gbo);
 
 	return 0;
@@ -257,7 +254,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
 	u8 __iomem *sig;
 	u8 jreg;
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
 
 	sig = dst + AST_HWC_SIZE;
 	writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 467049ca8430..f963141dd851 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,10 +28,11 @@
 #ifndef __AST_DRV_H__
 #define __AST_DRV_H__
 
-#include <linux/types.h>
-#include <linux/io.h>
+#include <linux/dma-buf-map.h>
 #include <linux/i2c.h>
 #include <linux/i2c-algo-bit.h>
+#include <linux/io.h>
+#include <linux/types.h>
 
 #include <drm/drm_connector.h>
 #include <drm/drm_crtc.h>
@@ -131,7 +132,7 @@ struct ast_private {
 
 	struct {
 		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
-		void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+		struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
 		unsigned int next_index;
 	} cursor;
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1da67d34e55d..0c4a66dea5c2 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1207,26 +1207,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 
 void *drm_gem_vmap(struct drm_gem_object *obj)
 {
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
-	if (obj->funcs->vmap)
-		vaddr = obj->funcs->vmap(obj);
-	else
-		vaddr = ERR_PTR(-EOPNOTSUPP);
+	if (!obj->funcs->vmap)
+		return ERR_PTR(-EOPNOTSUPP);
 
-	if (!vaddr)
-		vaddr = ERR_PTR(-ENOMEM);
+	ret = obj->funcs->vmap(obj, &map);
+	if (ret)
+		return ERR_PTR(ret);
+	else if (dma_buf_map_is_null(&map))
+		return ERR_PTR(-ENOMEM);
 
-	return vaddr;
+	return map.vaddr;
 }
 
 void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
 {
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
+
 	if (!vaddr)
 		return;
 
 	if (obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, vaddr);
+		obj->funcs->vunmap(obj, &map);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 2165633c9b9e..e87cd36518d3 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
  * @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ *       store.
  *
  * This function maps a buffer exported via DRM PRIME into the kernel's
  * virtual address space. Since the CMA buffers are already mapped into the
@@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * driver's &drm_gem_object_funcs.vmap callback.
  *
  * Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 
-	return cma_obj->vaddr;
+	dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
 
@@ -541,14 +545,14 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
  * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
  *     address space
  * @obj: GEM object
- * @vaddr: kernel virtual address where the CMA GEM object was mapped
+ * @map: Kernel virtual address where the CMA GEM object was mapped
  *
  * This function removes a buffer exported via DRM PRIME from the kernel's
  * virtual address space. This is a no-op because CMA buffers cannot be
  * unmapped from kernel space. Drivers using the CMA helpers should set this
  * as their &drm_gem_object_funcs.vunmap callback.
  */
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	/* Nothing to do */
 }
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index fb11df7aced5..5553f58f68f3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct dma_buf_map map;
 	int ret = 0;
 
-	if (shmem->vmap_use_count++ > 0)
-		return shmem->vaddr;
+	if (shmem->vmap_use_count++ > 0) {
+		dma_buf_map_set_vaddr(map, shmem->vaddr);
+		return 0;
+	}
 
 	if (obj->import_attach) {
-		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
-		if (!ret)
-			shmem->vaddr = map.vaddr;
+		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+		if (!ret) {
+			if (WARN_ON(map->is_iomem)) {
+				ret = -EIO;
+				goto err_put_pages;
+			}
+			shmem->vaddr = map->vaddr;
+		}
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
@@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 				    VM_MAP, prot);
 		if (!shmem->vaddr)
 			ret = -ENOMEM;
+		else
+			dma_buf_map_set_vaddr(map, shmem->vaddr);
 	}
 
 	if (ret) {
@@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 		goto err_put_pages;
 	}
 
-	return shmem->vaddr;
+	return 0;
 
 err_put_pages:
 	if (!obj->import_attach)
@@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
-	return ERR_PTR(ret);
+	return ret;
 }
 
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ *       store.
  *
  * This function makes sure that a contiguous kernel virtual address mapping
  * exists for the buffer backing the shmem GEM object.
@@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	void *vaddr;
 	int ret;
 
 	ret = mutex_lock_interruptible(&shmem->vmap_lock);
 	if (ret)
-		return ERR_PTR(ret);
-	vaddr = drm_gem_shmem_vmap_locked(shmem);
+		return ret;
+	ret = drm_gem_shmem_vmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
 
-	return vaddr;
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_vmap);
 
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+					struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
 
 	if (WARN_ON_ONCE(!shmem->vmap_use_count))
 		return;
@@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 		return;
 
 	if (obj->import_attach)
-		dma_buf_vunmap(obj->import_attach->dmabuf, &map);
+		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	else
 		vunmap(shmem->vaddr);
 
@@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 /*
  * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object
  * @shmem: shmem GEM object
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
  * This function cleans up a kernel virtual address mapping acquired by
  * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
  * also be called by drivers directly, in which case it will hide the
  * differences between dma-buf imported and natively allocated objects.
  */
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
 	mutex_lock(&shmem->vmap_lock);
-	drm_gem_shmem_vunmap_locked(shmem);
+	drm_gem_shmem_vunmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 256b346664f2..6a5b932e0d06 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 
+#include <linux/dma-buf-map.h>
 #include <linux/module.h>
 
 #include <drm/drm_debugfs.h>
@@ -382,11 +383,11 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
+				    struct dma_buf_map *map)
 {
 	int ret;
 	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
-	bool is_iomem;
 
 	if (gbo->kmap_use_count > 0)
 		goto out;
@@ -396,17 +397,30 @@ static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
 
 	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 out:
-	if (!kmap->virtual)
-		return NULL; /* not mapped; don't increment ref */
+	if (!kmap->virtual) {
+		dma_buf_map_clear(map);
+		return 0; /* not mapped; don't increment ref */
+	}
 	++gbo->kmap_use_count;
-	return ttm_kmap_obj_virtual(kmap, &is_iomem);
+	ttm_kmap_obj_to_dma_buf_map(kmap, map);
+	return 0;
 }
 
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+				       struct dma_buf_map *map)
 {
+	struct drm_device *dev = gbo->bo.base.dev;
+	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+	struct dma_buf_map kmap_map;
+
+	ttm_kmap_obj_to_dma_buf_map(kmap, &kmap_map);
+
+	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&kmap_map, map)))
+		return; /* BUG: map not mapped from this BO */
+
 	if (WARN_ON_ONCE(!gbo->kmap_use_count))
 		return;
 	if (--gbo->kmap_use_count > 0)
@@ -423,7 +437,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
 /**
  * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
  *                       space
- * @gbo:	The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * The vmap function pins a GEM VRAM object to its current location, either
  * system or video memory, and maps its buffer into kernel address space.
@@ -432,48 +448,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
  * unmap and unpin the GEM VRAM object.
  *
  * Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
-	void *base;
 
 	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo);
-	if (IS_ERR(base)) {
-		ret = PTR_ERR(base);
+	ret = drm_gem_vram_kmap_locked(gbo, map);
+	if (ret)
 		goto err_drm_gem_vram_unpin_locked;
-	}
 
 	ttm_bo_unreserve(&gbo->bo);
 
-	return base;
+	return 0;
 
 err_drm_gem_vram_unpin_locked:
 	drm_gem_vram_unpin_locked(gbo);
 err_ttm_bo_unreserve:
 	ttm_bo_unreserve(&gbo->bo);
-	return ERR_PTR(ret);
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_vram_vmap);
 
 /**
  * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo:	The GEM VRAM object to unmap
- * @vaddr:	The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
  *
  * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
  * the documentation for drm_gem_vram_vmap() for more information.
  */
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
 
@@ -481,7 +493,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
 	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
 		return;
 
-	drm_gem_vram_kunmap_locked(gbo);
+	drm_gem_vram_kunmap_locked(gbo, map);
 	drm_gem_vram_unpin_locked(gbo);
 
 	ttm_bo_unreserve(&gbo->bo);
@@ -829,37 +841,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
 }
 
 /**
- * drm_gem_vram_object_vmap() - \
-	Implements &struct drm_gem_object_funcs.vmap
- * @gem:	The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ *	Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
-	void *base;
 
-	base = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(base))
-		return NULL;
-	return base;
+	return drm_gem_vram_vmap(gbo, map);
 }
 
 /**
- * drm_gem_vram_object_vunmap() - \
-	Implements &struct drm_gem_object_funcs.vunmap
- * @gem:	The GEM object to unmap
- * @vaddr:	The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ *	Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
  */
-static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
-				       void *vaddr)
+static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 
-	drm_gem_vram_vunmap(gbo, vaddr);
+	drm_gem_vram_vunmap(gbo, map);
 }
 
 /*
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 914f0867ff71..3d1eb8065fce 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,8 +51,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
 int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 135fbff6fecf..36c03e287e29 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,12 +22,17 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
 }
 
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	return etnaviv_gem_vmap(obj);
+	void *vaddr = etnaviv_gem_vmap(obj);
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	/* TODO msm_gem_vunmap() */
 }
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index e7a6eb96f692..2c74e06669fa 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -471,12 +471,12 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 	return &exynos_gem->base;
 }
 
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
+int exynos_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	return NULL;
+	return -ENOMEM;
 }
 
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	/* Nothing to do */
 }
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
index 74e926abeff0..ecfd048fd91d 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
@@ -107,8 +107,8 @@ struct drm_gem_object *
 exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 				     struct dma_buf_attachment *attach,
 				     struct sg_table *sgt);
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int exynos_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
 			      struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..832e5280a6ed 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
 	return drm_gem_shmem_pin(obj);
 }
 
-static void *lima_gem_vmap(struct drm_gem_object *obj)
+static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct lima_bo *bo = to_lima_bo(obj);
 
 	if (bo->heap_size)
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 
-	return drm_gem_shmem_vmap(obj);
+	return drm_gem_shmem_vmap(obj, map);
 }
 
 static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e9a40d..a070a85f8f36 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
 
+#include <linux/dma-buf-map.h>
 #include <linux/kthread.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
@@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 	struct lima_dump_chunk_buffer *buffer_chunk;
 	u32 size, task_size, mem_size;
 	int i;
+	struct dma_buf_map map;
+	int ret;
 
 	mutex_lock(&dev->error_task_list_lock);
 
@@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 		} else {
 			buffer_chunk->size = lima_bo_size(bo);
 
-			data = drm_gem_shmem_vmap(&bo->base.base);
-			if (IS_ERR_OR_NULL(data)) {
+			ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+			if (ret) {
 				kvfree(et);
 				goto out;
 			}
 
-			memcpy(buffer_chunk + 1, data, buffer_chunk->size);
+			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
 
-			drm_gem_shmem_vunmap(&bo->base.base, data);
+			drm_gem_shmem_vunmap(&bo->base.base, &map);
 		}
 
 		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..ae4c8cb33fae 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -9,6 +9,7 @@
  */
 
 #include <linux/delay.h>
+#include <linux/dma-buf-map.h>
 
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_atomic_state_helper.h>
@@ -1556,15 +1557,16 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
 		      struct drm_rect *clip)
 {
 	struct drm_device *dev = &mdev->base;
-	void *vmap;
+	struct dma_buf_map map;
+	int ret;
 
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (drm_WARN_ON(dev, !vmap))
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (drm_WARN_ON(dev, ret))
 		return; /* BUG: SHMEM BO should always be vmapped */
 
-	drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
+	drm_fb_memcpy_dstclip(mdev->vram, map.vaddr, fb, clip);
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 
 	/* Always scanout image at VRAM offset 0 */
 	mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index b35c180322e2..e780b6b1763d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -37,7 +37,7 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
 extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
 extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
 	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
-extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
+extern int nouveau_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+extern void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 #endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index a8264aebf3d4..75e973a5675a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,7 +35,7 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
 }
 
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
+int nouveau_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 	int ret;
@@ -43,12 +43,13 @@ void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
 	ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
 			  &nvbo->dma_buf_vmap);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
+	ttm_kmap_obj_to_dma_buf_map(&nvbo->dma_buf_vmap, map);
 
-	return nvbo->dma_buf_vmap.virtual;
+	return 0;
 }
 
-void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index fdbc8d949135..5ab03d605f57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -5,6 +5,7 @@
 #include <drm/drm_gem_shmem_helper.h>
 #include <drm/panfrost_drm.h>
 #include <linux/completion.h>
+#include <linux/dma-buf-map.h>
 #include <linux/iopoll.h>
 #include <linux/pm_runtime.h>
 #include <linux/slab.h>
@@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 {
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+	struct dma_buf_map map;
 	struct drm_gem_shmem_object *bo;
 	u32 cfg, as;
 	int ret;
@@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 		goto err_close_bo;
 	}
 
-	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
-	if (IS_ERR(perfcnt->buf)) {
-		ret = PTR_ERR(perfcnt->buf);
+	ret = drm_gem_shmem_vmap(&bo->base, &map);
+	if (ret)
 		goto err_put_mapping;
-	}
+	perfcnt->buf = map.vaddr;
 
 	/*
 	 * Invalidate the cache and clear the counters to start from a fresh
@@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	return 0;
 
 err_vunmap:
-	drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
+	drm_gem_shmem_vunmap(&bo->base, &map);
 err_put_mapping:
 	panfrost_gem_mapping_put(perfcnt->mapping);
 err_close_bo:
@@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 {
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
 
 	if (user != perfcnt->user)
 		return -EINVAL;
@@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 		  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
 
 	perfcnt->user = NULL;
-	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
+	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
 	perfcnt->buf = NULL;
 	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
 	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 6063f3a15329..ed0d22fa0161 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -25,6 +25,7 @@
 
 #include <linux/crc32.h>
 #include <linux/delay.h>
+#include <linux/dma-buf-map.h>
 
 #include <drm/drm_drv.h>
 #include <drm/drm_atomic.h>
@@ -581,7 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 	struct drm_gem_object *obj;
 	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
 	int ret;
-	void *user_ptr;
+	struct dma_buf_map user_map;
+	struct dma_buf_map cursor_map;
 	int size = 64*64*4;
 
 	ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
@@ -595,7 +597,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		user_bo = gem_to_qxl_bo(obj);
 
 		/* pinning is done in the prepare/cleanup framevbuffer */
-		ret = qxl_bo_kmap(user_bo, &user_ptr);
+		ret = qxl_bo_kmap(user_bo, &user_map);
 		if (ret)
 			goto out_free_release;
 
@@ -613,7 +615,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		if (ret)
 			goto out_unpin;
 
-		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+		ret = qxl_bo_kmap(cursor_bo, &cursor_map);
 		if (ret)
 			goto out_backoff;
 
@@ -627,7 +629,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		cursor->chunk.next_chunk = 0;
 		cursor->chunk.prev_chunk = 0;
 		cursor->chunk.data_size = size;
-		memcpy(cursor->chunk.data, user_ptr, size);
+		memcpy(cursor->chunk.data, user_map.vaddr, size);
 		qxl_bo_kunmap(cursor_bo);
 		qxl_bo_kunmap(user_bo);
 
@@ -1138,6 +1140,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 {
 	int ret;
 	struct drm_gem_object *gobj;
+	struct dma_buf_map map;
 	int monitors_config_size = sizeof(struct qxl_monitors_config) +
 		qxl_num_crtc * sizeof(struct qxl_head);
 
@@ -1154,7 +1157,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 	if (ret)
 		return ret;
 
-	qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+	qxl_bo_kmap(qdev->monitors_config_bo, &map);
 
 	qdev->monitors_config = qdev->monitors_config_bo->kptr;
 	qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..1bf4f465ecf4 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -20,6 +20,8 @@
  * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  */
 
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_fourcc.h>
 
 #include "qxl_drv.h"
@@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
 					      unsigned int num_clips,
 					      struct qxl_bo *clips_bo)
 {
+	struct dma_buf_map map;
 	struct qxl_clip_rects *dev_clips;
 	int ret;
 
-	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
-	if (ret) {
+	ret = qxl_bo_kmap(clips_bo, &map);
+	if (ret)
 		return NULL;
-	}
+
+	dev_clips = map.vaddr;
 	dev_clips->num_rects = num_clips;
 	dev_clips->chunk.next_chunk = 0;
 	dev_clips->chunk.prev_chunk = 0;
@@ -142,7 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	int stride = fb->pitches[0];
 	/* depth is not actually interesting, we don't mask with it */
 	int depth = fb->format->cpp[0] * 8;
-	uint8_t *surface_base;
+	struct dma_buf_map surface_map;
 	struct qxl_release *release;
 	struct qxl_bo *clips_bo;
 	struct qxl_drm_image *dimage;
@@ -197,11 +201,11 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	if (ret)
 		goto out_release_backoff;
 
-	ret = qxl_bo_kmap(bo, (void **)&surface_base);
+	ret = qxl_bo_kmap(bo, &surface_map);
 	if (ret)
 		goto out_release_backoff;
 
-	ret = qxl_image_init(qdev, release, dimage, surface_base,
+	ret = qxl_image_init(qdev, release, dimage, surface_map.vaddr,
 			     left - dumb_shadow_offset,
 			     top, width, height, depth, stride);
 	qxl_bo_kunmap(bo);
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..a9e9da4f4605 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -50,6 +50,8 @@
 
 #include "qxl_dev.h"
 
+struct dma_buf_map;
+
 #define DRIVER_AUTHOR		"Dave Airlie"
 
 #define DRIVER_NAME		"qxl"
@@ -335,7 +337,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
 void qxl_gem_object_close(struct drm_gem_object *obj,
 			  struct drm_file *file_priv);
 void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
 
 /* qxl_dumb.c */
 int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -445,8 +446,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct dma_buf_map *map);
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
 				struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index d3635e3e3267..2d8ae3b10b1c 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -23,10 +23,12 @@
  *          Alon Levy
  */
 
+#include <linux/dma-buf-map.h>
+#include <linux/io-mapping.h>
+
 #include "qxl_drv.h"
 #include "qxl_object.h"
 
-#include <linux/io-mapping.h>
 static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
 {
 	struct qxl_bo *bo;
@@ -150,24 +152,22 @@ int qxl_bo_create(struct qxl_device *qdev,
 	return 0;
 }
 
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
 {
-	bool is_iomem;
 	int r;
 
 	if (bo->kptr) {
-		if (ptr)
-			*ptr = bo->kptr;
 		bo->map_count++;
-		return 0;
+		goto out;
 	}
 	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
 	if (r)
 		return r;
-	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
-	if (ptr)
-		*ptr = bo->kptr;
 	bo->map_count = 1;
+	bo->kptr = bo->kmap.virtual;
+
+out:
+	ttm_kmap_obj_to_dma_buf_map(&bo->kmap, map);
 	return 0;
 }
 
@@ -178,6 +178,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 	void *rptr;
 	int ret;
 	struct io_mapping *map;
+	struct dma_buf_map bo_map;
 
 	if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
 		map = qdev->vram_mapping;
@@ -194,11 +195,11 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 		return rptr;
 	}
 
-	ret = qxl_bo_kmap(bo, &rptr);
+	ret = qxl_bo_kmap(bo, &bo_map);
 	if (ret)
 		return NULL;
 
-	rptr += page_offset * PAGE_SIZE;
+	rptr = bo_map.vaddr + page_offset * PAGE_SIZE;
 	return rptr;
 }
 
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 09a5c818324d..ebf24c9d2bf2 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 			 bool kernel, bool pinned, u32 domain,
 			 struct qxl_surface *surf,
 			 struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
 extern void qxl_bo_kunmap(struct qxl_bo *bo);
 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	return ERR_PTR(-ENOSYS);
 }
 
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
-	void *ptr;
 	int ret;
 
-	ret = qxl_bo_kmap(bo, &ptr);
+	ret = qxl_bo_kmap(bo, map);
 	if (ret < 0)
-		return ERR_PTR(ret);
+		return ret;
 
-	return ptr;
+	return 0;
 }
 
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct dma_buf_map *map)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 0ccd7213e41f..ac51517bdfcd 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -40,8 +40,8 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
 struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int radeon_gem_prime_pin(struct drm_gem_object *obj);
 void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int radeon_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void radeon_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 static const struct drm_gem_object_funcs radeon_gem_object_funcs;
 
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b9de0e51c0be..a1a358de5448 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,7 +39,7 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
 }
 
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
+int radeon_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct radeon_bo *bo = gem_to_radeon_bo(obj);
 	int ret;
@@ -47,12 +47,13 @@ void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
 	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
 			  &bo->dma_buf_vmap);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
+	ttm_kmap_obj_to_dma_buf_map(&bo->dma_buf_vmap, map);
 
-	return bo->dma_buf_vmap.virtual;
+	return 0;
 }
 
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void radeon_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct radeon_bo *bo = gem_to_radeon_bo(obj);
 
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7d5ebb10323b..7971f57436dd 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
 	return ERR_PTR(ret);
 }
 
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
-	if (rk_obj->pages)
-		return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
-			    pgprot_writecombine(PAGE_KERNEL));
+	if (rk_obj->pages) {
+		void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+				  pgprot_writecombine(PAGE_KERNEL));
+		if (!vaddr)
+			return -ENOMEM;
+		dma_buf_map_set_vaddr(map, vaddr);
+		return 0;
+	}
 
 	if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
-		return NULL;
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
 
-	return rk_obj->kvaddr;
+	return 0;
 }
 
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
 	if (rk_obj->pages) {
-		vunmap(vaddr);
+		vunmap(map->vaddr);
 		return;
 	}
 
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 7ffc541bea07..5a70a56cd406 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -31,8 +31,8 @@ struct drm_gem_object *
 rockchip_gem_prime_import_sg_table(struct drm_device *dev,
 				   struct dma_buf_attachment *attach,
 				   struct sg_table *sg);
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /* drm driver mmap file operations */
 int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..6dc013f4b236 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -17,6 +17,7 @@
  */
 
 #include <linux/console.h>
+#include <linux/dma-buf-map.h>
 #include <linux/module.h>
 #include <linux/pci.h>
 
@@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 			       struct drm_rect *rect)
 {
 	struct cirrus_device *cirrus = to_cirrus(fb->dev);
+	struct dma_buf_map map;
 	void *vmap;
 	int idx, ret;
 
@@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	if (!drm_dev_enter(&cirrus->dev, &idx))
 		goto out;
 
-	ret = -ENOMEM;
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (!vmap)
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret)
 		goto out_dev_exit;
+	vmap = map.vaddr;
 
 	if (cirrus->cpp == fb->format->cpp[0])
 		drm_fb_memcpy_dstclip(cirrus->vram,
@@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	else
 		WARN_ON_ONCE("cpp mismatch");
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 	ret = 0;
 
 out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..5865027a1667 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 {
 	int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
 	struct drm_framebuffer *fb;
+	struct dma_buf_map map;
 	void *vaddr;
 	u8 *src;
 
@@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 	y1 = gm12u320->fb_update.rect.y1;
 	y2 = gm12u320->fb_update.rect.y2;
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
-		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret) {
+		GM12U320_ERR("failed to vmap fb: %d\n", ret);
 		goto put_fb;
 	}
+	vaddr = map.vaddr;
 
 	if (fb->obj[0]->import_attach) {
 		ret = dma_buf_begin_cpu_access(
@@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 			GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
 	}
 vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 put_fb:
 	drm_framebuffer_put(fb);
 	gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..9c8ace1aa647 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	struct urb *urb;
 	struct drm_rect clip;
 	int log_bpp;
+	struct dma_buf_map map;
 	void *vaddr;
 
 	ret = udl_log_cpp(fb->format->cpp[0]);
@@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 			return ret;
 	}
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret) {
 		DRM_ERROR("failed to vmap fb\n");
 		goto out_dma_buf_end_cpu_access;
 	}
+	vaddr = map.vaddr;
 
 	urb = udl_get_urb(dev);
 	if (!urb)
@@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	ret = 0;
 
 out_drm_gem_shmem_vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 out_dma_buf_end_cpu_access:
 	if (import_attach) {
 		tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 4fcc0a542b8a..6040b9ec747f 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -9,6 +9,8 @@
 *          Michael Thayer <michael.thayer@oracle.com>,
  *          Hans de Goede <hdegoede@redhat.com>
  */
+
+#include <linux/dma-buf-map.h>
 #include <linux/export.h>
 
 #include <drm/drm_atomic.h>
@@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	u32 height = plane->state->crtc_h;
 	size_t data_size, mask_size;
 	u32 flags;
+	struct dma_buf_map map;
+	int ret;
 	u8 *src;
 
 	/*
@@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 
 	vbox_crtc->cursor_enabled = true;
 
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (ret) {
 		/*
 		 * BUG: we should have pinned the BO in prepare_fb().
 		 */
@@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 		DRM_WARN("Could not map cursor bo, skipping update\n");
 		return;
 	}
+	src = map.vaddr;
 
 	/*
 	 * The mask must be calculated based on the alpha
@@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	data_size = width * height * 4 + mask_size;
 
 	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &map);
 
 	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
 		VBOX_MOUSE_POINTER_ALPHA;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index f432278173cd..250266fb437e 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -786,16 +786,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	return drm_gem_cma_prime_mmap(obj, vma);
 }
 
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct vc4_bo *bo = to_vc4_bo(obj);
 
 	if (bo->validated_shader) {
 		DRM_DEBUG("mmaping of shader BOs not allowed.\n");
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 	}
 
-	return drm_gem_cma_prime_vmap(obj);
+	return drm_gem_cma_prime_vmap(obj, map);
 }
 
 struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index a22478a35199..6af453c84777 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -804,7 +804,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
 						 struct dma_buf_attachment *attach,
 						 struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int vc4_bo_cache_init(struct drm_device *dev);
 void vc4_bo_cache_destroy(struct drm_device *dev);
 int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..b2aa26e1e4a2 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
 	return &obj->base;
 }
 
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 	long n_pages = obj->size >> PAGE_SHIFT;
 	struct page **pages;
+	void *vaddr;
 
 	pages = vgem_pin_pages(bo);
 	if (IS_ERR(pages))
-		return NULL;
+		return PTR_ERR(pages);
+
+	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
 
-	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	return 0;
 }
 
-static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 
-	vunmap(vaddr);
+	vunmap(map->vaddr);
 	vgem_unpin_pages(bo);
 }
 
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..74db5a840bed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	return gem_mmap_obj(xen_obj, vma);
 }
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	void *vaddr;
 
 	if (!xen_obj->pages)
-		return NULL;
+		return -ENOMEM;
 
 	/* Please see comment in gem_mmap_obj on mapping and attributes. */
-	return vmap(xen_obj->pages, xen_obj->num_pages,
-		    VM_MAP, PAGE_KERNEL);
+	vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
+		     VM_MAP, PAGE_KERNEL);
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
-				    void *vaddr)
+				    struct dma_buf_map *map)
 {
-	vunmap(vaddr);
+	vunmap(map->vaddr);
 }
 
 int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..a4e67d0a149c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -12,6 +12,7 @@
 #define __XEN_DRM_FRONT_GEM_H
 
 struct dma_buf_attachment;
+struct dma_buf_map;
 struct drm_device;
 struct drm_gem_object;
 struct file;
@@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
 int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				 struct dma_buf_map *map);
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
-				    void *vaddr);
+				    struct dma_buf_map *map);
 
 int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
 				 struct vm_area_struct *vma);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c38dd35da00b..5e6daa1c982f 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
 
 #include <drm/drm_vma_manager.h>
 
+struct dma_buf_map;
 struct drm_gem_object;
 
 /**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void *(*vmap)(struct drm_gem_object *obj);
+	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 2bfa2502607a..34a7f72879c5 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,8 +103,8 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_object *obj);
 void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
 
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..0c43b8f17ee9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
 s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
 int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
 int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
 
 int drm_gem_vram_fill_create_dumb(struct drm_file *file,
 				  struct drm_device *dev,
-- 
2.28.0



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v3 4/7] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map
  2020-09-29 15:14 [PATCH v3 0/7] Support GEM object mappings from I/O memory Thomas Zimmermann
                   ` (2 preceding siblings ...)
  2020-09-29 15:14 ` [PATCH v3 3/7] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends Thomas Zimmermann
@ 2020-09-29 15:14 ` Thomas Zimmermann
  2020-10-02 13:04   ` Daniel Vetter
  2020-09-29 15:14 ` [PATCH v3 5/7] drm/gem: Store client buffer mappings as " Thomas Zimmermann
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-09-29 15:14 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, sam,
	alexander.deucher, christian.koenig, kraxel, l.stach,
	linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
	sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
	tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
	hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
	sumit.semwal, emil.velikov, luben.tuikov, apaneers,
	linus.walleij, melissa.srw, chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
	linux-media

GEM's vmap and vunmap interfaces now wrap memory pointers in struct
dma_buf_map.
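
For illustration, a caller now uses the interfaces as follows. This is
a sketch only; obj stands for any GEM object whose driver implements
the vmap callback:

	struct dma_buf_map map;
	int ret;

	ret = drm_gem_vmap(obj, &map);
	if (ret)
		return ret;

	/* access the buffer via map.vaddr or map.vaddr_iomem */

	drm_gem_vunmap(obj, &map); /* also clears the mapping */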

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_client.c   | 18 +++++++++++-------
 drivers/gpu/drm/drm_gem.c      | 28 ++++++++++++++--------------
 drivers/gpu/drm/drm_internal.h |  5 +++--
 drivers/gpu/drm/drm_prime.c    | 14 ++++----------
 4 files changed, 32 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 495f47d23d87..ac0082bed966 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -3,6 +3,7 @@
  * Copyright 2018 Noralf Trønnes
  */
 
+#include <linux/dma-buf-map.h>
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
@@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
  */
 void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 {
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (buffer->vaddr)
 		return buffer->vaddr;
@@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	vaddr = drm_gem_vmap(buffer->gem);
-	if (IS_ERR(vaddr))
-		return vaddr;
+	ret = drm_gem_vmap(buffer->gem, &map);
+	if (ret)
+		return ERR_PTR(ret);
 
-	buffer->vaddr = vaddr;
+	buffer->vaddr = map.vaddr;
 
-	return vaddr;
+	return map.vaddr;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+
+	drm_gem_vunmap(buffer->gem, &map);
 	buffer->vaddr = NULL;
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 0c4a66dea5c2..f2b2f37d41c4 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1205,32 +1205,32 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 		obj->funcs->unpin(obj);
 }
 
-void *drm_gem_vmap(struct drm_gem_object *obj)
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	struct dma_buf_map map;
 	int ret;
 
-	if (!obj->funcs->vmap)
-		return ERR_PTR(-EOPNOTSUPP);
+	if (!obj->funcs->vmap)
+		return -EOPNOTSUPP;
 
-	ret = obj->funcs->vmap(obj, &map);
+	ret = obj->funcs->vmap(obj, map);
 	if (ret)
-		return ERR_PTR(ret);
-	else if (dma_buf_map_is_null(&map))
-		return ERR_PTR(-ENOMEM);
+		return ret;
+	else if (dma_buf_map_is_null(map))
+		return -ENOMEM;
 
-	return map.vaddr;
+	return 0;
 }
 
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
-
-	if (!vaddr)
+	if (dma_buf_map_is_null(map))
 		return;
 
 	if (obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, &map);
+		obj->funcs->vunmap(obj, map);
+
+	/* Always set the mapping to NULL. Callers may rely on this. */
+	dma_buf_map_clear(map);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index b65865c630b0..58832d75a9bd 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -33,6 +33,7 @@
 
 struct dentry;
 struct dma_buf;
+struct dma_buf_map;
 struct drm_connector;
 struct drm_crtc;
 struct drm_framebuffer;
@@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 
 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
-void *drm_gem_vmap(struct drm_gem_object *obj);
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /* drm_debugfs.c drm_debugfs_crc.c */
 #if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 89e2a2496734..cb8fbeeb731b 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
  *
  * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap
  * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
+ * The kernel virtual address is returned in @map.
  *
- * Returns the kernel virtual address or NULL on failure.
+ * Returns 0 on success or a negative errno code otherwise.
  */
 int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
-	void *vaddr;
 
-	vaddr = drm_gem_vmap(obj);
-	if (IS_ERR(vaddr))
-		return PTR_ERR(vaddr);
-
-	dma_buf_map_set_vaddr(map, vaddr);
-
-	return 0;
+	return drm_gem_vmap(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
@@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	drm_gem_vunmap(obj, map->vaddr);
+	drm_gem_vunmap(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
 
-- 
2.28.0



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v3 5/7] drm/gem: Store client buffer mappings as struct dma_buf_map
  2020-09-29 15:14 [PATCH v3 0/7] Support GEM object mappings from I/O memory Thomas Zimmermann
                   ` (3 preceding siblings ...)
  2020-09-29 15:14 ` [PATCH v3 4/7] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map Thomas Zimmermann
@ 2020-09-29 15:14 ` Thomas Zimmermann
  2020-10-02 13:05   ` Daniel Vetter
  2020-09-29 15:14 ` [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory Thomas Zimmermann
  2020-09-29 15:14 ` [PATCH v3 7/7] drm/todo: Update entries around struct dma_buf_map Thomas Zimmermann
  6 siblings, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-09-29 15:14 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, sam,
	alexander.deucher, christian.koenig, kraxel, l.stach,
	linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
	sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
	tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
	hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
	sumit.semwal, emil.velikov, luben.tuikov, apaneers,
	linus.walleij, melissa.srw, chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
	linux-media

Kernel DRM clients now store their framebuffer address in an instance
of struct dma_buf_map. Depending on the buffer's location, the address
refers to system or I/O memory.

Callers of drm_client_buffer_vmap() receive a copy of the mapping in
the supplied argument. The copy can be accessed and modified with the
dma_buf_map interfaces.
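
A minimal usage sketch, where buffer is an existing client buffer and
size is a placeholder for the number of bytes to clear:

	struct dma_buf_map map;
	int ret;

	ret = drm_client_buffer_vmap(buffer, &map);
	if (ret)
		return ret;

	if (map.is_iomem)
		memset_io(map.vaddr_iomem, 0, size);
	else
		memset(map.vaddr, 0, size);

	drm_client_buffer_vunmap(buffer); /* no address argument needed */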

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_client.c    | 34 +++++++++++++++++++--------------
 drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++---------
 include/drm/drm_client.h        |  7 ++++---
 3 files changed, 38 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index ac0082bed966..fe573acf1067 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 {
 	struct drm_device *dev = buffer->client->dev;
 
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	drm_gem_vunmap(buffer->gem, &buffer->map);
 
 	if (buffer->gem)
 		drm_gem_object_put(buffer->gem);
@@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
 /**
  * drm_client_buffer_vmap - Map DRM client buffer into address space
  * @buffer: DRM client buffer
+ * @map_copy: Returns the mapped memory's address
  *
  * This function maps a client buffer into kernel address space. If the
- * buffer is already mapped, it returns the mapping's address.
+ * buffer is already mapped, it returns the existing mapping's address.
  *
  * Client buffer mappings are not ref'counted. Each call to
  * drm_client_buffer_vmap() should be followed by a call to
  * drm_client_buffer_vunmap(); or the client buffer should be mapped
  * throughout its lifetime.
  *
+ * The returned address is a copy of the internal value. In contrast to
+ * other vmap interfaces, it need not be passed to the client's vunmap
+ * function, so it can be modified at will during blit and draw operations.
+ *
  * Returns:
- *	The mapped memory's address
+ *	0 on success, or a negative errno code otherwise.
  */
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
+int
+drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
 {
-	struct dma_buf_map map;
+	struct dma_buf_map *map = &buffer->map;
 	int ret;
 
-	if (buffer->vaddr)
-		return buffer->vaddr;
+	if (dma_buf_map_is_set(map))
+		goto out;
 
 	/*
 	 * FIXME: The dependency on GEM here isn't required, we could
@@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	ret = drm_gem_vmap(buffer->gem, &map);
+	ret = drm_gem_vmap(buffer->gem, map);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
-	buffer->vaddr = map.vaddr;
+out:
+	*map_copy = *map;
 
-	return map.vaddr;
+	return 0;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+	struct dma_buf_map *map = &buffer->map;
 
-	drm_gem_vunmap(buffer->gem, &map);
-	buffer->vaddr = NULL;
+	drm_gem_vunmap(buffer->gem, map);
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
 
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 8697554ccd41..343a292f2c7c 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -394,7 +394,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->vaddr + offset;
+	void *dst = fb_helper->buffer->map.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
@@ -416,7 +416,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 	struct drm_clip_rect *clip = &helper->dirty_clip;
 	struct drm_clip_rect clip_copy;
 	unsigned long flags;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	spin_lock_irqsave(&helper->dirty_lock, flags);
 	clip_copy = *clip;
@@ -429,8 +430,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 
 		/* Generic fbdev uses a shadow buffer */
 		if (helper->buffer) {
-			vaddr = drm_client_buffer_vmap(helper->buffer);
-			if (IS_ERR(vaddr))
+			ret = drm_client_buffer_vmap(helper->buffer, &map);
+			if (ret)
 				return;
 			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
 		}
@@ -2076,7 +2077,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 	struct drm_framebuffer *fb;
 	struct fb_info *fbi;
 	u32 format;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
 		    sizes->surface_width, sizes->surface_height,
@@ -2112,11 +2114,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 		fb_deferred_io_init(fbi);
 	} else {
 		/* buffer is mapped for HW framebuffer */
-		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
-		if (IS_ERR(vaddr))
-			return PTR_ERR(vaddr);
+		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+		if (ret)
+			return ret;
+		if (map.is_iomem)
+			fbi->screen_base = map.vaddr_iomem;
+		else
+			fbi->screen_buffer = map.vaddr;
 
-		fbi->screen_buffer = vaddr;
 		/* Shamelessly leak the physical address to user-space */
 #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
 		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
index 7aaea665bfc2..f07f2fb02e75 100644
--- a/include/drm/drm_client.h
+++ b/include/drm/drm_client.h
@@ -3,6 +3,7 @@
 #ifndef _DRM_CLIENT_H_
 #define _DRM_CLIENT_H_
 
+#include <linux/dma-buf-map.h>
 #include <linux/lockdep.h>
 #include <linux/mutex.h>
 #include <linux/types.h>
@@ -141,9 +142,9 @@ struct drm_client_buffer {
 	struct drm_gem_object *gem;
 
 	/**
-	 * @vaddr: Virtual address for the buffer
+	 * @map: Virtual address for the buffer
 	 */
-	void *vaddr;
+	struct dma_buf_map map;
 
 	/**
 	 * @fb: DRM framebuffer
@@ -155,7 +156,7 @@ struct drm_client_buffer *
 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
 void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
 int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
+int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
 
 int drm_client_modeset_create(struct drm_client_dev *client);
-- 
2.28.0



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory
  2020-09-29 15:14 [PATCH v3 0/7] Support GEM object mappings from I/O memory Thomas Zimmermann
                   ` (4 preceding siblings ...)
  2020-09-29 15:14 ` [PATCH v3 5/7] drm/gem: Store client buffer mappings as " Thomas Zimmermann
@ 2020-09-29 15:14 ` Thomas Zimmermann
  2020-10-02 18:05   ` Daniel Vetter
  2020-09-29 15:14 ` [PATCH v3 7/7] drm/todo: Update entries around struct dma_buf_map Thomas Zimmermann
  6 siblings, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-09-29 15:14 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, sam,
	alexander.deucher, christian.koenig, kraxel, l.stach,
	linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
	sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
	tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
	hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
	sumit.semwal, emil.velikov, luben.tuikov, apaneers,
	linus.walleij, melissa.srw, chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
	linux-media

At least sparc64 requires I/O-specific access to framebuffers. This
patch updates the fbdev console accordingly.

For drivers with direct access to the framebuffer memory, the callback
functions in struct fb_ops test for the type of memory and call the
respective fb_sys_ or fb_cfb_ functions.

For drivers that employ a shadow buffer, fbdev's blit function retrieves
the framebuffer address as struct dma_buf_map, and uses dma_buf_map
interfaces to access the buffer.

The bochs driver on sparc64 uses a workaround to flag the framebuffer as
I/O memory and avoid a HW exception. With the introduction of struct
dma_buf_map, this is no longer required. The patch removes the respective
code from both bochs and fbdev.
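
The resulting dispatch in the fb_ops callbacks reduces to the pattern
below; this is simplified from the fillrect callback in this patch:

	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
		drm_fb_helper_sys_fillrect(info, rect); /* system memory */
	else
		drm_fb_helper_cfb_fillrect(info, rect); /* I/O memory */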

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/bochs/bochs_kms.c |   1 -
 drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
 include/drm/drm_mode_config.h     |  12 --
 include/linux/dma-buf-map.h       |  72 ++++++++--
 4 files changed, 265 insertions(+), 37 deletions(-)

diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
index 13d0d04c4457..853081d186d5 100644
--- a/drivers/gpu/drm/bochs/bochs_kms.c
+++ b/drivers/gpu/drm/bochs/bochs_kms.c
@@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
 	bochs->dev->mode_config.preferred_depth = 24;
 	bochs->dev->mode_config.prefer_shadow = 0;
 	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
-	bochs->dev->mode_config.fbdev_use_iomem = true;
 	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
 
 	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 343a292f2c7c..f345a314a437 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
 }
 
 static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
-					  struct drm_clip_rect *clip)
+					  struct drm_clip_rect *clip,
+					  struct dma_buf_map *dst)
 {
 	struct drm_framebuffer *fb = fb_helper->fb;
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->map.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
-	for (y = clip->y1; y < clip->y2; y++) {
-		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
-			memcpy(dst, src, len);
-		else
-			memcpy_toio((void __iomem *)dst, src, len);
+	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
 
+	for (y = clip->y1; y < clip->y2; y++) {
+		dma_buf_map_memcpy_to(dst, src, len);
+		dma_buf_map_incr(dst, fb->pitches[0]);
 		src += fb->pitches[0];
-		dst += fb->pitches[0];
 	}
 }
 
@@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 			ret = drm_client_buffer_vmap(helper->buffer, &map);
 			if (ret)
 				return;
-			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
+			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
 		}
+
 		if (helper->fb->funcs->dirty)
 			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
 						 &clip_copy, 1);
@@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
 
+static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
+				      size_t count, loff_t *ppos)
+{
+	unsigned long p = *ppos;
+	u8 *dst;
+	u8 __iomem *src;
+	int c, err = 0;
+	unsigned long total_size;
+	unsigned long alloc_size;
+	ssize_t ret = 0;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	total_size = info->screen_size;
+
+	if (total_size == 0)
+		total_size = info->fix.smem_len;
+
+	if (p >= total_size)
+		return 0;
+
+	if (count >= total_size)
+		count = total_size;
+
+	if (count + p > total_size)
+		count = total_size - p;
+
+	src = (u8 __iomem *)(info->screen_base + p);
+
+	alloc_size = min(count, PAGE_SIZE);
+
+	dst = kmalloc(alloc_size, GFP_KERNEL);
+	if (!dst)
+		return -ENOMEM;
+
+	while (count) {
+		c = min(count, alloc_size);
+
+		memcpy_fromio(dst, src, c);
+		if (copy_to_user(buf, dst, c)) {
+			err = -EFAULT;
+			break;
+		}
+
+		src += c;
+		*ppos += c;
+		buf += c;
+		ret += c;
+		count -= c;
+	}
+
+	kfree(dst);
+
+	if (err)
+		return err;
+
+	return ret;
+}
+
+static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
+				       size_t count, loff_t *ppos)
+{
+	unsigned long p = *ppos;
+	u8 *src;
+	u8 __iomem *dst;
+	int c, err = 0;
+	unsigned long total_size;
+	unsigned long alloc_size;
+	ssize_t ret = 0;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	total_size = info->screen_size;
+
+	if (total_size == 0)
+		total_size = info->fix.smem_len;
+
+	if (p > total_size)
+		return -EFBIG;
+
+	if (count > total_size) {
+		err = -EFBIG;
+		count = total_size;
+	}
+
+	if (count + p > total_size) {
+		/*
+		 * The framebuffer is too small. We do the
+		 * copy operation, but return an error code
+		 * afterwards. Taken from fbdev.
+		 */
+		if (!err)
+			err = -ENOSPC;
+		count = total_size - p;
+	}
+
+	alloc_size = min(count, PAGE_SIZE);
+
+	src = kmalloc(alloc_size, GFP_KERNEL);
+	if (!src)
+		return -ENOMEM;
+
+	dst = (u8 __iomem *)(info->screen_base + p);
+
+	while (count) {
+		c = min(count, alloc_size);
+
+		if (copy_from_user(src, buf, c)) {
+			err = -EFAULT;
+			break;
+		}
+		memcpy_toio(dst, src, c);
+
+		dst += c;
+		*ppos += c;
+		buf += c;
+		ret += c;
+		count -= c;
+	}
+
+	kfree(src);
+
+	if (err)
+		return err;
+
+	return ret;
+}
+
 /**
  * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
  * @info: fbdev registered by the helper
@@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 		return -ENODEV;
 }
 
+static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
+				 size_t count, loff_t *ppos)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+		return drm_fb_helper_sys_read(info, buf, count, ppos);
+	else
+		return drm_fb_helper_cfb_read(info, buf, count, ppos);
+}
+
+static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
+				  size_t count, loff_t *ppos)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+		return drm_fb_helper_sys_write(info, buf, count, ppos);
+	else
+		return drm_fb_helper_cfb_write(info, buf, count, ppos);
+}
+
+static void drm_fbdev_fb_fillrect(struct fb_info *info,
+				  const struct fb_fillrect *rect)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+		drm_fb_helper_sys_fillrect(info, rect);
+	else
+		drm_fb_helper_cfb_fillrect(info, rect);
+}
+
+static void drm_fbdev_fb_copyarea(struct fb_info *info,
+				  const struct fb_copyarea *area)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+		drm_fb_helper_sys_copyarea(info, area);
+	else
+		drm_fb_helper_cfb_copyarea(info, area);
+}
+
+static void drm_fbdev_fb_imageblit(struct fb_info *info,
+				   const struct fb_image *image)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+		drm_fb_helper_sys_imageblit(info, image);
+	else
+		drm_fb_helper_cfb_imageblit(info, image);
+}
+
 static const struct fb_ops drm_fbdev_fb_ops = {
 	.owner		= THIS_MODULE,
 	DRM_FB_HELPER_DEFAULT_OPS,
@@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
 	.fb_release	= drm_fbdev_fb_release,
 	.fb_destroy	= drm_fbdev_fb_destroy,
 	.fb_mmap	= drm_fbdev_fb_mmap,
-	.fb_read	= drm_fb_helper_sys_read,
-	.fb_write	= drm_fb_helper_sys_write,
-	.fb_fillrect	= drm_fb_helper_sys_fillrect,
-	.fb_copyarea	= drm_fb_helper_sys_copyarea,
-	.fb_imageblit	= drm_fb_helper_sys_imageblit,
+	.fb_read	= drm_fbdev_fb_read,
+	.fb_write	= drm_fbdev_fb_write,
+	.fb_fillrect	= drm_fbdev_fb_fillrect,
+	.fb_copyarea	= drm_fbdev_fb_copyarea,
+	.fb_imageblit	= drm_fbdev_fb_imageblit,
 };
 
 static struct fb_deferred_io drm_fbdev_defio = {
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index 5ffbb4ed5b35..ab424ddd7665 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -877,18 +877,6 @@ struct drm_mode_config {
 	 */
 	bool prefer_shadow_fbdev;
 
-	/**
-	 * @fbdev_use_iomem:
-	 *
-	 * Set to true if framebuffer reside in iomem.
-	 * When set to true memcpy_toio() is used when copying the framebuffer in
-	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
-	 *
-	 * FIXME: This should be replaced with a per-mapping is_iomem
-	 * flag (like ttm does), and then used everywhere in fbdev code.
-	 */
-	bool fbdev_use_iomem;
-
 	/**
 	 * @quirk_addfb_prefer_xbgr_30bpp:
 	 *
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index 2e8bbecb5091..6ca0f304dda2 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -32,6 +32,14 @@
  * accessing the buffer. Use the returned instance and the helper functions
  * to access the buffer's memory in the correct way.
  *
+ * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
+ * actually independent from the dma-buf infrastructure. When sharing buffers
+ * among devices, drivers have to know the location of the memory to access
+ * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
+ * solves this problem for dma-buf and its users. If other drivers or
+ * sub-systems require similar functionality, the type could be generalized
+ * and moved to a more prominent header file.
+ *
  * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
  * considered bad style. Rather then accessing its fields directly, use one
  * of the provided helper functions, or implement your own. For example,
@@ -51,6 +59,14 @@
  *
 *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
  *
+ * Instances of struct dma_buf_map do not have to be cleaned up, but
+ * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
+ * always refer to system memory.
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_clear(&map);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -73,17 +89,19 @@
  *	if (dma_buf_map_is_equal(&sys_map, &io_map))
  *		// always false
  *
- * Instances of struct dma_buf_map do not have to be cleaned up, but
- * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
- * always refer to system memory.
+ * A set up instance of struct dma_buf_map can be used to access or manipulate
+ * the buffer memory. Depending on the location of the memory, the provided
+ * helpers will pick the correct operations. Data can be copied into the memory
+ * with dma_buf_map_memcpy_to(). The address can be manipulated with
+ * dma_buf_map_incr().
  *
- * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
- * actually independent from the dma-buf infrastructure. When sharing buffers
- * among devices, drivers have to know the location of the memory to access
- * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
- * solves this problem for dma-buf and its users. If other drivers or
- * sub-systems require similar functionality, the type could be generalized
- * and moved to a more prominent header file.
+ * .. code-block:: c
+ *
+ *	const void *src = ...; // source buffer
+ *	size_t len = ...; // length of src
+ *
+ *	dma_buf_map_memcpy_to(&map, src, len);
+ *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
  */
 
 /**
@@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
 	}
 }
 
+/**
+ * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
+ * @dst:	The dma-buf mapping structure
+ * @src:	The source buffer
+ * @len:	The number of bytes in src
+ *
+ * Copies data into a dma-buf mapping. The source buffer is in system
+ * memory. Depending on the buffer's location, the helper picks the correct
+ * method of accessing the memory.
+ */
+static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
+{
+	if (dst->is_iomem)
+		memcpy_toio(dst->vaddr_iomem, src, len);
+	else
+		memcpy(dst->vaddr, src, len);
+}
+
+/**
+ * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
+ * @map:	The dma-buf mapping structure
+ * @incr:	The number of bytes to increment
+ *
+ * Increments the address stored in a dma-buf mapping. Depending on the
+ * buffer's location, the correct value will be updated.
+ */
+static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
+{
+	if (map->is_iomem)
+		map->vaddr_iomem += incr;
+	else
+		map->vaddr += incr;
+}
+
 #endif /* __DMA_BUF_MAP_H__ */
-- 
2.28.0



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v3 7/7] drm/todo: Update entries around struct dma_buf_map
  2020-09-29 15:14 [PATCH v3 0/7] Support GEM object mappings from I/O memory Thomas Zimmermann
                   ` (5 preceding siblings ...)
  2020-09-29 15:14 ` [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory Thomas Zimmermann
@ 2020-09-29 15:14 ` Thomas Zimmermann
  2020-10-02 18:45   ` Daniel Vetter
  6 siblings, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-09-29 15:14 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, sam,
	alexander.deucher, christian.koenig, kraxel, l.stach,
	linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
	sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
	tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
	hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
	sumit.semwal, emil.velikov, luben.tuikov, apaneers,
	linus.walleij, melissa.srw, chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
	linux-media

Instances of struct dma_buf_map should be useful throughout DRM's
memory management code. Furthermore, several drivers can now use
generic fbdev emulation.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 Documentation/gpu/todo.rst | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 3751ac976c3e..023626c1837b 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -197,8 +197,10 @@ Convert drivers to use drm_fbdev_generic_setup()
 ------------------------------------------------
 
 Most drivers can use drm_fbdev_generic_setup(). Drivers have to implement
-atomic modesetting and GEM vmap support. Current generic fbdev emulation
-expects the framebuffer in system memory (or system-like memory).
+atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
+expected the framebuffer in system memory or system-like memory. By employing
+struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
+as well.
 
 Contact: Maintainer of the driver you plan to convert
 
@@ -446,6 +448,24 @@ Contact: Ville Syrjälä, Daniel Vetter
 
 Level: Intermediate
 
+Use struct dma_buf_map throughout codebase
+------------------------------------------
+
+Pointers to shared device memory are stored in struct dma_buf_map. Each
+instance knows whether it refers to system or I/O memory. Most of the DRM-wide
+interfaces have been converted to use struct dma_buf_map, but implementations
+often still use raw pointers.
+
+The task is to use struct dma_buf_map where it makes sense.
+
+* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
+* TTM might benefit from using struct dma_buf_map internally.
+* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
+
+Level: Intermediate
+
 
 Core refactorings
 =================
-- 
2.28.0



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-09-29 15:14 ` [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion Thomas Zimmermann
@ 2020-09-29 15:35   ` Christian König
  2020-09-29 15:44     ` Daniel Vetter
  2020-09-29 17:49     ` Thomas Zimmermann
  0 siblings, 2 replies; 33+ messages in thread
From: Christian König @ 2020-09-29 15:35 UTC (permalink / raw)
  To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
	sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
	christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
	kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
	steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
	eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
	emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
	chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	xen-devel, spice-devel, linux-arm-kernel, linux-media

Am 29.09.20 um 17:14 schrieb Thomas Zimmermann:
> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> with these values. Helpful for TTM-based drivers.

We could completely drop that if we use the same structure inside TTM as 
well.

Additional to that, which driver is going to use this?

Regards,
Christian.

>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>   include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>   include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>   2 files changed, 44 insertions(+)
>
> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> index c96a25d571c8..62d89f05a801 100644
> --- a/include/drm/ttm/ttm_bo_api.h
> +++ b/include/drm/ttm/ttm_bo_api.h
> @@ -34,6 +34,7 @@
>   #include <drm/drm_gem.h>
>   #include <drm/drm_hashtab.h>
>   #include <drm/drm_vma_manager.h>
> +#include <linux/dma-buf-map.h>
>   #include <linux/kref.h>
>   #include <linux/list.h>
>   #include <linux/wait.h>
> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct ttm_bo_kmap_obj *map,
>   	return map->virtual;
>   }
>   
> +/**
> + * ttm_kmap_obj_to_dma_buf_map
> + *
> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> + * @map: Returns the mapping as struct dma_buf_map
> + *
> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> + * is not mapped, the returned mapping is initialized to NULL.
> + */
> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj *kmap,
> +					       struct dma_buf_map *map)
> +{
> +	bool is_iomem;
> +	void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> +
> +	if (!vaddr)
> +		dma_buf_map_clear(map);
> +	else if (is_iomem)
> +		dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> +	else
> +		dma_buf_map_set_vaddr(map, vaddr);
> +}
> +
>   /**
>    * ttm_bo_kmap
>    *
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index fd1aba545fdf..2e8bbecb5091 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -45,6 +45,12 @@
>    *
> >    *	dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>    *
> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> + *
> + * .. code-block:: c
> + *
> > + *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> + *
>    * Test if a mapping is valid with either dma_buf_map_is_set() or
>    * dma_buf_map_is_null().
>    *
> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
>   	map->is_iomem = false;
>   }
>   
> +/**
> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> + * @map:		The dma-buf mapping structure
> + * @vaddr_iomem:	An I/O-memory address
> + *
> + * Sets the address and the I/O-memory flag.
> + */
> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> +					       void __iomem *vaddr_iomem)
> +{
> +	map->vaddr_iomem = vaddr_iomem;
> +	map->is_iomem = true;
> +}
> +
>   /**
>    * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
>    * @lhs:	The dma-buf mapping structure



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-09-29 15:35   ` Christian König
@ 2020-09-29 15:44     ` Daniel Vetter
  2020-09-29 17:49     ` Thomas Zimmermann
  1 sibling, 0 replies; 33+ messages in thread
From: Daniel Vetter @ 2020-09-29 15:44 UTC (permalink / raw)
  To: Christian König
  Cc: Luben Tuikov, Heiko Stübner, Dave Airlie, Nouveau Dev,
	Linus Walleij, dri-devel, Wilson, Chris, Melissa Wen, Anholt,
	Eric, Huang Rui, Gerd Hoffmann, Sam Ravnborg, Sumit Semwal,
	Emil Velikov, Rob Herring, linux-samsung-soc, Joonyoung Shim,
	lima, Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Kukjin Kim, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	Maarten Lankhorst, The etnaviv authors, Maxime Ripard, Inki Dae,
	Hans de Goede, Christian Gmeiner,
	moderated list:DRM DRIVERS FOR XEN, open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM,
	moderated list:DMA BUFFER SHARING FRAMEWORK, amd-gfx list,
	Tomeu Vizoso, Seung-Woo Kim, Sandy Huang, Kyungmin Park,
	Qinglang Miao, Qiang Yu, Thomas Zimmermann, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK, Lucas Stach

On Tue, Sep 29, 2020 at 5:35 PM Christian König
<christian.koenig@amd.com> wrote:
>
> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann:
> > The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
> > from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> > with these values. Helpful for TTM-based drivers.
>
> We could completely drop that if we use the same structure inside TTM as
> well.

> Additional to that which driver is going to use this?

Patch 3 in this series.

I also think this makes sense for gradual conversion:
1. add this helper
2. convert over all users of vmap; this should get rid of is_iomem
flags (and will probably result in a pile of small additions to
dma-buf-map.h; see the sketch below)
3. push the struct dma_buf_map down into ttm drivers
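A hypothetical step-2 conversion could look like this; the sketch assumes
a made-up driver-internal helper example_driver_vmap() standing in for
whatever mapping routine the driver uses today:

/* Hypothetical driver-internal mapping routine; stands in for
 * whatever the driver uses today (e.g. vmap() on its page array).
 */
void *example_driver_vmap(struct drm_gem_object *obj);

/* Sketch of a converted GEM vmap callback: it fills in a
 * struct dma_buf_map instead of returning a raw pointer.
 */
static int example_gem_vmap(struct drm_gem_object *obj,
			    struct dma_buf_map *map)
{
	void *vaddr = example_driver_vmap(obj);

	if (!vaddr)
		return -ENOMEM;

	/* System memory: sets map->vaddr and clears map->is_iomem. */
	dma_buf_map_set_vaddr(map, vaddr);
	return 0;
}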

Cheers, Daniel

> Regards,
> Christian.
>
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> >   include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> >   include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> >   2 files changed, 44 insertions(+)
> >
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index c96a25d571c8..62d89f05a801 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -34,6 +34,7 @@
> >   #include <drm/drm_gem.h>
> >   #include <drm/drm_hashtab.h>
> >   #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> >   #include <linux/kref.h>
> >   #include <linux/list.h>
> >   #include <linux/wait.h>
> > @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct ttm_bo_kmap_obj *map,
> >       return map->virtual;
> >   }
> >
> > +/**
> > + * ttm_kmap_obj_to_dma_buf_map
> > + *
> > + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> > + * @map: Returns the mapping as struct dma_buf_map
> > + *
> > + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> > + * is not mapped, the returned mapping is initialized to NULL.
> > + */
> > +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj *kmap,
> > +                                            struct dma_buf_map *map)
> > +{
> > +     bool is_iomem;
> > +     void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> > +
> > +     if (!vaddr)
> > +             dma_buf_map_clear(map);
> > +     else if (is_iomem)
> > +             dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> > +     else
> > +             dma_buf_map_set_vaddr(map, vaddr);
> > +}
> > +
> >   /**
> >    * ttm_bo_kmap
> >    *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> >    *
> >    *  dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> >    *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > + *   dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > + *
> >    * Test if a mapping is valid with either dma_buf_map_is_set() or
> >    * dma_buf_map_is_null().
> >    *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> >       map->is_iomem = false;
> >   }
> >
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> > + * @map:             The dma-buf mapping structure
> > + * @vaddr_iomem:     An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > +                                            void __iomem *vaddr_iomem)
> > +{
> > +     map->vaddr_iomem = vaddr_iomem;
> > +     map->is_iomem = true;
> > +}
> > +
> >   /**
> >    * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> >    * @lhs:    The dma-buf mapping structure
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-09-29 15:35   ` Christian König
  2020-09-29 15:44     ` Daniel Vetter
@ 2020-09-29 17:49     ` Thomas Zimmermann
       [not found]       ` <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
  1 sibling, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-09-29 17:49 UTC (permalink / raw)
  To: Christian König, maarten.lankhorst, mripard, airlied,
	daniel, sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
	christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
	kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
	steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
	eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
	emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
	chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	spice-devel, xen-devel, linux-arm-kernel, linux-media


[-- Attachment #1.1.1: Type: text/plain, Size: 4465 bytes --]

Hi Christian

On 29.09.20 17:35, Christian König wrote:
> On 29.09.20 17:14, Thomas Zimmermann wrote:
>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>> with these values. Helpful for TTM-based drivers.
> 
> We could completely drop that if we use the same structure inside TTM as
> well.
> 
> Additional to that which driver is going to use this?

As Daniel mentioned, it's in patch 3. The TTM-based drivers will
retrieve the pointer via this function.

I do want to see all that being more tightly integrated into TTM, but
not in this series. This one is about fixing the bochs-on-sparc64
problem for good. Patch 7 adds an update to TTM to the DRM TODO list.

Best regards
Thomas

> 
> Regards,
> Christian.
> 
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> ---
>>   include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>   include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>   2 files changed, 44 insertions(+)
>>
>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>> index c96a25d571c8..62d89f05a801 100644
>> --- a/include/drm/ttm/ttm_bo_api.h
>> +++ b/include/drm/ttm/ttm_bo_api.h
>> @@ -34,6 +34,7 @@
>>   #include <drm/drm_gem.h>
>>   #include <drm/drm_hashtab.h>
>>   #include <drm/drm_vma_manager.h>
>> +#include <linux/dma-buf-map.h>
>>   #include <linux/kref.h>
>>   #include <linux/list.h>
>>   #include <linux/wait.h>
>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>> ttm_bo_kmap_obj *map,
>>       return map->virtual;
>>   }
>>   +/**
>> + * ttm_kmap_obj_to_dma_buf_map
>> + *
>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>> + * @map: Returns the mapping as struct dma_buf_map
>> + *
>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>> + * is not mapped, the returned mapping is initialized to NULL.
>> + */
>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>> *kmap,
>> +                           struct dma_buf_map *map)
>> +{
>> +    bool is_iomem;
>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>> +
>> +    if (!vaddr)
>> +        dma_buf_map_clear(map);
>> +    else if (is_iomem)
>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>> +    else
>> +        dma_buf_map_set_vaddr(map, vaddr);
>> +}
>> +
>>   /**
>>    * ttm_bo_kmap
>>    *
>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>> index fd1aba545fdf..2e8bbecb5091 100644
>> --- a/include/linux/dma-buf-map.h
>> +++ b/include/linux/dma-buf-map.h
>> @@ -45,6 +45,12 @@
>>    *
>>    *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>    *
>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>> + *
>> + * .. code-block:: c
>> + *
>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>> + *
>>    * Test if a mapping is valid with either dma_buf_map_is_set() or
>>    * dma_buf_map_is_null().
>>    *
>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>> dma_buf_map *map, void *vaddr)
>>       map->is_iomem = false;
>>   }
>>   +/**
>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>> an address in I/O memory
>> + * @map:        The dma-buf mapping structure
>> + * @vaddr_iomem:    An I/O-memory address
>> + *
>> + * Sets the address and the I/O-memory flag.
>> + */
>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>> +                           void __iomem *vaddr_iomem)
>> +{
>> +    map->vaddr_iomem = vaddr_iomem;
>> +    map->is_iomem = true;
>> +}
>> +
>>   /**
>>    * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>> for equality
>>    * @lhs:    The dma-buf mapping structure
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 516 bytes --]


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
       [not found]       ` <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
@ 2020-09-30  8:19         ` Thomas Zimmermann
       [not found]           ` <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
  0 siblings, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-09-30  8:19 UTC (permalink / raw)
  To: christian.koenig, maarten.lankhorst, mripard, airlied, daniel,
	sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
	christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
	kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
	steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
	eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
	emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
	chris, miaoqinglang
  Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
	virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
	spice-devel, xen-devel, linux-arm-kernel, linux-media


[-- Attachment #1.1.1: Type: text/plain, Size: 5705 bytes --]

Hi

On 30.09.20 10:05, Christian König wrote:
> On 29.09.20 19:49, Thomas Zimmermann wrote:
>> Hi Christian
>>
>> On 29.09.20 17:35, Christian König wrote:
>>> On 29.09.20 17:14, Thomas Zimmermann wrote:
>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>> with these values. Helpful for TTM-based drivers.
>>> We could completely drop that if we use the same structure inside TTM as
>>> well.
>>>
>>> Additional to that which driver is going to use this?
>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>> retrieve the pointer via this function.
>>
>> I do want to see all that being more tightly integrated into TTM, but
>> not in this series. This one is about fixing the bochs-on-sparc64
>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> 
> I should have asked which driver you try to fix here :)
> 
> In this case just keep the function inside bochs and only fix it there.
> 
> All other drivers can be fixed when we generally pump this through TTM.

Did you take a look at patch 3? This function will be used by VRAM
helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
have to duplicate the functionality in each of these drivers. Bochs
itself uses VRAM helpers and doesn't touch the function directly.

Best regards
Thomas

> 
> Regards,
> Christian.
> 
>> Best regards
>> Thomas
>>
>>> Regards,
>>> Christian.
>>>
>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>> ---
>>>>   include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>   include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>   2 files changed, 44 insertions(+)
>>>>
>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>> index c96a25d571c8..62d89f05a801 100644
>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>> @@ -34,6 +34,7 @@
>>>>   #include <drm/drm_gem.h>
>>>>   #include <drm/drm_hashtab.h>
>>>>   #include <drm/drm_vma_manager.h>
>>>> +#include <linux/dma-buf-map.h>
>>>>   #include <linux/kref.h>
>>>>   #include <linux/list.h>
>>>>   #include <linux/wait.h>
>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>> ttm_bo_kmap_obj *map,
>>>>       return map->virtual;
>>>>   }
>>>>   +/**
>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>> + *
>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>> + *
>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>> + */
>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>> *kmap,
>>>> +                           struct dma_buf_map *map)
>>>> +{
>>>> +    bool is_iomem;
>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>> +
>>>> +    if (!vaddr)
>>>> +        dma_buf_map_clear(map);
>>>> +    else if (is_iomem)
>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>> +    else
>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>> +}
>>>> +
>>>>   /**
>>>>    * ttm_bo_kmap
>>>>    *
>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>> --- a/include/linux/dma-buf-map.h
>>>> +++ b/include/linux/dma-buf-map.h
>>>> @@ -45,6 +45,12 @@
>>>>    *
>>>>    *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>>    *
>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>> + *
>>>> + * .. code-block:: c
>>>> + *
>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>> + *
>>>>    * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>    * dma_buf_map_is_null().
>>>>    *
>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>> dma_buf_map *map, void *vaddr)
>>>>       map->is_iomem = false;
>>>>   }
>>>>   +/**
>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>> an address in I/O memory
>>>> + * @map:        The dma-buf mapping structure
>>>> + * @vaddr_iomem:    An I/O-memory address
>>>> + *
>>>> + * Sets the address and the I/O-memory flag.
>>>> + */
>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>> +                           void __iomem *vaddr_iomem)
>>>> +{
>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>> +    map->is_iomem = true;
>>>> +}
>>>> +
>>>>   /**
>>>>    * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>> for equality
>>>>    * @lhs:    The dma-buf mapping structure
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 516 bytes --]


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
       [not found]           ` <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
@ 2020-09-30  9:47             ` Daniel Vetter
  2020-09-30 12:34               ` Christian König
  0 siblings, 1 reply; 33+ messages in thread
From: Daniel Vetter @ 2020-09-30  9:47 UTC (permalink / raw)
  To: christian.koenig
  Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
	chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
	emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
	oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
	kgene, bskeggs, linux+etnaviv, spice-devel, alyssa.rosenzweig,
	daniel, maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
	christian.gmeiner, xen-devel, virtualization, sean, apaneers,
	linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
	sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825,
	Thomas Zimmermann, alexander.deucher, linux-media, l.stach

On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
> On 30.09.20 10:19, Thomas Zimmermann wrote:
> > Hi
> > 
> > On 30.09.20 10:05, Christian König wrote:
> > > On 29.09.20 19:49, Thomas Zimmermann wrote:
> > > > Hi Christian
> > > > 
> > > > On 29.09.20 17:35, Christian König wrote:
> > > > > On 29.09.20 17:14, Thomas Zimmermann wrote:
> > > > > > > The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
> > > > > > > from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> > > > > > with these values. Helpful for TTM-based drivers.
> > > > > We could completely drop that if we use the same structure inside TTM as
> > > > > well.
> > > > > 
> > > > > Additional to that which driver is going to use this?
> > > > As Daniel mentioned, it's in patch 3. The TTM-based drivers will
> > > > retrieve the pointer via this function.
> > > > 
> > > > I do want to see all that being more tightly integrated into TTM, but
> > > > not in this series. This one is about fixing the bochs-on-sparc64
> > > > problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> > > I should have asked which driver you try to fix here :)
> > > 
> > > In this case just keep the function inside bochs and only fix it there.
> > > 
> > > All other drivers can be fixed when we generally pump this through TTM.
> > Did you take a look at patch 3? This function will be used by VRAM
> > helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
> > have to duplicate the functionality in each of these drivers. Bochs
> > itself uses VRAM helpers and doesn't touch the function directly.
> 
> Ah, ok can we have that then only in the VRAM helpers?
> 
> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
> 
> What I want to avoid is to have another conversion function in TTM because
> what happens here is that we already convert from ttm_bus_placement to
> ttm_bo_kmap_obj and then to dma_buf_map.

Hm I'm not really seeing how that helps with a gradual conversion of
everything over to dma_buf_map and assorted helpers for access? There are
too many places in ttm drivers where is_iomem and related stuff is used
for us to be able to convert it all in one go. An intermediate state with
a bunch of conversions seems fairly unavoidable to me.
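For a sense of scale, the pattern that every such call site repeats today
looks roughly like this (an illustrative sketch, not code from any one
driver):

/* Illustrative sketch of the current pattern: each call site inspects
 * is_iomem itself and picks the matching accessors.
 */
static void example_clear_bo(struct ttm_bo_kmap_obj *kmap, size_t size)
{
	bool is_iomem;
	void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);

	if (is_iomem)
		memset_io((void __force __iomem *)vaddr, 0, size);
	else
		memset(vaddr, 0, size);
}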
-Daniel

> 
> Thanks,
> Christian.
> 
> > 
> > Best regards
> > Thomas
> > 
> > > Regards,
> > > Christian.
> > > 
> > > > Best regards
> > > > Thomas
> > > > 
> > > > > Regards,
> > > > > Christian.
> > > > > 
> > > > > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > > > > ---
> > > > > >    include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> > > > > >    include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> > > > > >    2 files changed, 44 insertions(+)
> > > > > > 
> > > > > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > > > > > index c96a25d571c8..62d89f05a801 100644
> > > > > > --- a/include/drm/ttm/ttm_bo_api.h
> > > > > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > > > > @@ -34,6 +34,7 @@
> > > > > >    #include <drm/drm_gem.h>
> > > > > >    #include <drm/drm_hashtab.h>
> > > > > >    #include <drm/drm_vma_manager.h>
> > > > > > +#include <linux/dma-buf-map.h>
> > > > > >    #include <linux/kref.h>
> > > > > >    #include <linux/list.h>
> > > > > >    #include <linux/wait.h>
> > > > > > @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
> > > > > > ttm_bo_kmap_obj *map,
> > > > > >        return map->virtual;
> > > > > >    }
> > > > > >    +/**
> > > > > > + * ttm_kmap_obj_to_dma_buf_map
> > > > > > + *
> > > > > > + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> > > > > > + * @map: Returns the mapping as struct dma_buf_map
> > > > > > + *
> > > > > > + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> > > > > > + * is not mapped, the returned mapping is initialized to NULL.
> > > > > > + */
> > > > > > +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
> > > > > > *kmap,
> > > > > > +                           struct dma_buf_map *map)
> > > > > > +{
> > > > > > +    bool is_iomem;
> > > > > > +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> > > > > > +
> > > > > > +    if (!vaddr)
> > > > > > +        dma_buf_map_clear(map);
> > > > > > +    else if (is_iomem)
> > > > > > +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> > > > > > +    else
> > > > > > +        dma_buf_map_set_vaddr(map, vaddr);
> > > > > > +}
> > > > > > +
> > > > > >    /**
> > > > > >     * ttm_bo_kmap
> > > > > >     *
> > > > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > > > > index fd1aba545fdf..2e8bbecb5091 100644
> > > > > > --- a/include/linux/dma-buf-map.h
> > > > > > +++ b/include/linux/dma-buf-map.h
> > > > > > @@ -45,6 +45,12 @@
> > > > > >     *
> > > > > >      *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > > > > >     *
> > > > > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > > > > + *
> > > > > > + * .. code-block:: c
> > > > > > + *
> > > > > > + *   dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > > > > > + *
> > > > > >     * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > > > >     * dma_buf_map_is_null().
> > > > > >     *
> > > > > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > > > > dma_buf_map *map, void *vaddr)
> > > > > >        map->is_iomem = false;
> > > > > >    }
> > > > > >    +/**
> > > > > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > > > > an address in I/O memory
> > > > > > + * @map:        The dma-buf mapping structure
> > > > > > + * @vaddr_iomem:    An I/O-memory address
> > > > > > + *
> > > > > > + * Sets the address and the I/O-memory flag.
> > > > > > + */
> > > > > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > > > > +                           void __iomem *vaddr_iomem)
> > > > > > +{
> > > > > > +    map->vaddr_iomem = vaddr_iomem;
> > > > > > +    map->is_iomem = true;
> > > > > > +}
> > > > > > +
> > > > > >    /**
> > > > > >     * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> > > > > > for equality
> > > > > >     * @lhs:    The dma-buf mapping structure
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-09-30  9:47             ` Daniel Vetter
@ 2020-09-30 12:34               ` Christian König
  2020-09-30 12:51                 ` Daniel Vetter
  0 siblings, 1 reply; 33+ messages in thread
From: Christian König @ 2020-09-30 12:34 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
	chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
	emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
	oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
	kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
	maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
	christian.gmeiner, xen-devel, virtualization, sean, apaneers,
	linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
	sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825,
	Thomas Zimmermann, alexander.deucher, linux-media, l.stach

On 30.09.20 11:47, Daniel Vetter wrote:
> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>> On 30.09.20 10:19, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> On 30.09.20 10:05, Christian König wrote:
>>>> On 29.09.20 19:49, Thomas Zimmermann wrote:
>>>>> Hi Christian
>>>>>
>>>>> On 29.09.20 17:35, Christian König wrote:
>>>>>> On 29.09.20 17:14, Thomas Zimmermann wrote:
>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>> well.
>>>>>>
>>>>>> Additional to that which driver is going to use this?
>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>> retrieve the pointer via this function.
>>>>>
>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>> I should have asked which driver you try to fix here :)
>>>>
>>>> In this case just keep the function inside bochs and only fix it there.
>>>>
>>>> All other drivers can be fixed when we generally pump this through TTM.
>>> Did you take a look at patch 3? This function will be used by VRAM
>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>> have to duplicate the functionality in each of these drivers. Bochs
>>> itself uses VRAM helpers and doesn't touch the function directly.
>> Ah, ok can we have that then only in the VRAM helpers?
>>
>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>
>> What I want to avoid is to have another conversion function in TTM because
>> what happens here is that we already convert from ttm_bus_placement to
>> ttm_bo_kmap_obj and then to dma_buf_map.
> Hm I'm not really seeing how that helps with a gradual conversion of
> everything over to dma_buf_map and assorted helpers for access? There's
> too many places in ttm drivers where is_iomem and related stuff is used to
> be able to convert it all in one go. An intermediate state with a bunch of
> conversions seems fairly unavoidable to me.

Fair enough. I would just have started bottom up and not top down.

Anyway feel free to go ahead with this approach as long as we can remove 
the new function again when we clean that stuff up for good.

Christian.

> -Daniel
>
>> Thanks,
>> Christian.
>>
>>> Best regards
>>> Thomas
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>> Best regards
>>>>> Thomas
>>>>>
>>>>>> Regards,
>>>>>> Christian.
>>>>>>
>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>> ---
>>>>>>>     include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>     include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>     2 files changed, 44 insertions(+)
>>>>>>>
>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>     #include <drm/drm_gem.h>
>>>>>>>     #include <drm/drm_hashtab.h>
>>>>>>>     #include <drm/drm_vma_manager.h>
>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>     #include <linux/kref.h>
>>>>>>>     #include <linux/list.h>
>>>>>>>     #include <linux/wait.h>
>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>         return map->virtual;
>>>>>>>     }
>>>>>>>     +/**
>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>> + *
>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>> + *
>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>> + */
>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>> *kmap,
>>>>>>> +                           struct dma_buf_map *map)
>>>>>>> +{
>>>>>>> +    bool is_iomem;
>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>> +
>>>>>>> +    if (!vaddr)
>>>>>>> +        dma_buf_map_clear(map);
>>>>>>> +    else if (is_iomem)
>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>> +    else
>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>> +}
>>>>>>> +
>>>>>>>     /**
>>>>>>>      * ttm_bo_kmap
>>>>>>>      *
>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>      *
>>>>>>>      *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>>>>>      *
>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>> + *
>>>>>>> + * .. code-block:: c
>>>>>>> + *
>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>>>>> + *
>>>>>>>      * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>      * dma_buf_map_is_null().
>>>>>>>      *
>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>         map->is_iomem = false;
>>>>>>>     }
>>>>>>>     +/**
>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>> an address in I/O memory
>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>> + *
>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>> + */
>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>> +{
>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>> +    map->is_iomem = true;
>>>>>>> +}
>>>>>>> +
>>>>>>>     /**
>>>>>>>      * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>> for equality
>>>>>>>      * @lhs:    The dma-buf mapping structure



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-09-30 12:34               ` Christian König
@ 2020-09-30 12:51                 ` Daniel Vetter
  2020-10-02  9:58                   ` Daniel Vetter
  0 siblings, 1 reply; 33+ messages in thread
From: Daniel Vetter @ 2020-09-30 12:51 UTC (permalink / raw)
  To: Christian König
  Cc: Luben Tuikov, Heiko Stübner, Dave Airlie, Nouveau Dev,
	Linus Walleij, dri-devel, Wilson, Chris, Melissa Wen, Anholt,
	Eric, Huang Rui, Gerd Hoffmann, Sam Ravnborg, Sumit Semwal,
	Emil Velikov, Rob Herring, linux-samsung-soc, Joonyoung Shim,
	lima, Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Kukjin Kim, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	Maarten Lankhorst, The etnaviv authors, Maxime Ripard, Inki Dae,
	Hans de Goede, Christian Gmeiner,
	moderated list:DRM DRIVERS FOR XEN, open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM,
	moderated list:DMA BUFFER SHARING FRAMEWORK, amd-gfx list,
	Tomeu Vizoso, Seung-Woo Kim, Sandy Huang, Kyungmin Park,
	Qinglang Miao, Qiang Yu, Thomas Zimmermann, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK, Lucas Stach

On Wed, Sep 30, 2020 at 2:34 PM Christian König
<christian.koenig@amd.com> wrote:
>
> On 30.09.20 11:47, Daniel Vetter wrote:
> > On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
> >> On 30.09.20 10:19, Thomas Zimmermann wrote:
> >>> Hi
> >>>
> >>> On 30.09.20 10:05, Christian König wrote:
> >>>> On 29.09.20 19:49, Thomas Zimmermann wrote:
> >>>>> Hi Christian
> >>>>>
> >>>>> On 29.09.20 17:35, Christian König wrote:
> >>>>>> On 29.09.20 17:14, Thomas Zimmermann wrote:
> >>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
> >>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> >>>>>>> with these values. Helpful for TTM-based drivers.
> >>>>>> We could completely drop that if we use the same structure inside TTM as
> >>>>>> well.
> >>>>>>
> >>>>>> Additional to that which driver is going to use this?
> >>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
> >>>>> retrieve the pointer via this function.
> >>>>>
> >>>>> I do want to see all that being more tightly integrated into TTM, but
> >>>>> not in this series. This one is about fixing the bochs-on-sparc64
> >>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> >>>> I should have asked which driver you try to fix here :)
> >>>>
> >>>> In this case just keep the function inside bochs and only fix it there.
> >>>>
> >>>> All other drivers can be fixed when we generally pump this through TTM.
> >>> Did you take a look at patch 3? This function will be used by VRAM
> >>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
> >>> have to duplicate the functionality in each of these drivers. Bochs
> >>> itself uses VRAM helpers and doesn't touch the function directly.
> >> Ah, ok can we have that then only in the VRAM helpers?
> >>
> >> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
> >> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
> >>
> >> What I want to avoid is to have another conversion function in TTM because
> >> what happens here is that we already convert from ttm_bus_placement to
> >> ttm_bo_kmap_obj and then to dma_buf_map.
> > Hm I'm not really seeing how that helps with a gradual conversion of
> > everything over to dma_buf_map and assorted helpers for access? There's
> > too many places in ttm drivers where is_iomem and related stuff is used to
> > be able to convert it all in one go. An intermediate state with a bunch of
> > conversions seems fairly unavoidable to me.
>
> Fair enough. I would just have started bottom up and not top down.
>
> Anyway feel free to go ahead with this approach as long as we can remove
> the new function again when we clean that stuff up for good.

Yeah I guess bottom up would make more sense as a refactoring. But the
main motivation to land this here is to fix the __iomem vs normal
memory confusion in the fbdev emulation helpers for sparc (and
anything else that needs this). Hence the top down approach for
rolling this out.
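With a struct dma_buf_map in hand, the helpers can pick the correct
accessors, roughly like this (a sketch only; the actual fbdev code in
patch 6 is more involved):

/* Sketch: choose the I/O or system-memory copy routine based on the
 * mapping's is_iomem flag, which is what bochs on sparc64 needs.
 */
static void example_fb_memcpy(struct dma_buf_map *dst,
			      const void *src, size_t len)
{
	if (dst->is_iomem)
		memcpy_toio(dst->vaddr_iomem, src, len);
	else
		memcpy(dst->vaddr, src, len);
}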
-Daniel

>
> Christian.
>
> > -Daniel
> >
> >> Thanks,
> >> Christian.
> >>
> >>> Best regards
> >>> Thomas
> >>>
> >>>> Regards,
> >>>> Christian.
> >>>>
> >>>>> Best regards
> >>>>> Thomas
> >>>>>
> >>>>>> Regards,
> >>>>>> Christian.
> >>>>>>
> >>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >>>>>>> ---
> >>>>>>>     include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> >>>>>>>     include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> >>>>>>>     2 files changed, 44 insertions(+)
> >>>>>>>
> >>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> >>>>>>> index c96a25d571c8..62d89f05a801 100644
> >>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
> >>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
> >>>>>>> @@ -34,6 +34,7 @@
> >>>>>>>     #include <drm/drm_gem.h>
> >>>>>>>     #include <drm/drm_hashtab.h>
> >>>>>>>     #include <drm/drm_vma_manager.h>
> >>>>>>> +#include <linux/dma-buf-map.h>
> >>>>>>>     #include <linux/kref.h>
> >>>>>>>     #include <linux/list.h>
> >>>>>>>     #include <linux/wait.h>
> >>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
> >>>>>>> ttm_bo_kmap_obj *map,
> >>>>>>>         return map->virtual;
> >>>>>>>     }
> >>>>>>>     +/**
> >>>>>>> + * ttm_kmap_obj_to_dma_buf_map
> >>>>>>> + *
> >>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> >>>>>>> + * @map: Returns the mapping as struct dma_buf_map
> >>>>>>> + *
> >>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> >>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
> >>>>>>> + */
> >>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
> >>>>>>> *kmap,
> >>>>>>> +                           struct dma_buf_map *map)
> >>>>>>> +{
> >>>>>>> +    bool is_iomem;
> >>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> >>>>>>> +
> >>>>>>> +    if (!vaddr)
> >>>>>>> +        dma_buf_map_clear(map);
> >>>>>>> +    else if (is_iomem)
> >>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> >>>>>>> +    else
> >>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
> >>>>>>> +}
> >>>>>>> +
> >>>>>>>     /**
> >>>>>>>      * ttm_bo_kmap
> >>>>>>>      *
> >>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> >>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
> >>>>>>> --- a/include/linux/dma-buf-map.h
> >>>>>>> +++ b/include/linux/dma-buf-map.h
> >>>>>>> @@ -45,6 +45,12 @@
> >>>>>>>      *
> >>>>>>>      *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> >>>>>>>      *
> >>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> >>>>>>> + *
> >>>>>>> + * .. code-block:: c
> >>>>>>> + *
> >>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> >>>>>>> + *
> >>>>>>>      * Test if a mapping is valid with either dma_buf_map_is_set() or
> >>>>>>>      * dma_buf_map_is_null().
> >>>>>>>      *
> >>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> >>>>>>> dma_buf_map *map, void *vaddr)
> >>>>>>>         map->is_iomem = false;
> >>>>>>>     }
> >>>>>>>     +/**
> >>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> >>>>>>> an address in I/O memory
> >>>>>>> + * @map:        The dma-buf mapping structure
> >>>>>>> + * @vaddr_iomem:    An I/O-memory address
> >>>>>>> + *
> >>>>>>> + * Sets the address and the I/O-memory flag.
> >>>>>>> + */
> >>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> >>>>>>> +                           void __iomem *vaddr_iomem)
> >>>>>>> +{
> >>>>>>> +    map->vaddr_iomem = vaddr_iomem;
> >>>>>>> +    map->is_iomem = true;
> >>>>>>> +}
> >>>>>>> +
> >>>>>>>     /**
> >>>>>>>      * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> >>>>>>> for equality
> >>>>>>>      * @lhs:    The dma-buf mapping structure
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from internal kmap function
  2020-09-29 15:14 ` [PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from internal kmap function Thomas Zimmermann
@ 2020-10-02  9:48   ` Daniel Vetter
  0 siblings, 0 replies; 33+ messages in thread
From: Daniel Vetter @ 2020-10-02  9:48 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
	chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
	emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
	oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
	kgene, bskeggs, linux+etnaviv, spice-devel, alyssa.rosenzweig,
	maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
	christian.gmeiner, xen-devel, virtualization, sean, apaneers,
	linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
	sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
	alexander.deucher, linux-media, christian.koenig, l.stach

On Tue, Sep 29, 2020 at 05:14:31PM +0200, Thomas Zimmermann wrote:
> The parameters map and is_iomem always have the same values. Remove them
> to prepare the function for conversion to struct dma_buf_map.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

> ---
>  drivers/gpu/drm/drm_gem_vram_helper.c | 17 ++++++-----------
>  1 file changed, 6 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 3fe4b326e18e..256b346664f2 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -382,16 +382,16 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
>  }
>  EXPORT_SYMBOL(drm_gem_vram_unpin);
>  
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> -				      bool map, bool *is_iomem)
> +static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
>  {
>  	int ret;
>  	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> +	bool is_iomem;
>  
>  	if (gbo->kmap_use_count > 0)
>  		goto out;
>  
> -	if (kmap->virtual || !map)
> +	if (kmap->virtual)
>  		goto out;
>  
>  	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
> @@ -399,15 +399,10 @@ static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
>  		return ERR_PTR(ret);
>  
>  out:
> -	if (!kmap->virtual) {
> -		if (is_iomem)
> -			*is_iomem = false;
> +	if (!kmap->virtual)
>  		return NULL; /* not mapped; don't increment ref */
> -	}
>  	++gbo->kmap_use_count;
> -	if (is_iomem)
> -		return ttm_kmap_obj_virtual(kmap, is_iomem);
> -	return kmap->virtual;
> +	return ttm_kmap_obj_virtual(kmap, &is_iomem);
>  }
>  
>  static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> @@ -452,7 +447,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
>  	ret = drm_gem_vram_pin_locked(gbo, 0);
>  	if (ret)
>  		goto err_ttm_bo_unreserve;
> -	base = drm_gem_vram_kmap_locked(gbo, true, NULL);
> +	base = drm_gem_vram_kmap_locked(gbo);
>  	if (IS_ERR(base)) {
>  		ret = PTR_ERR(base);
>  		goto err_drm_gem_vram_unpin_locked;
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-09-30 12:51                 ` Daniel Vetter
@ 2020-10-02  9:58                   ` Daniel Vetter
  2020-10-02 11:30                     ` Christian König
  2020-10-07 12:57                     ` Thomas Zimmermann
  0 siblings, 2 replies; 33+ messages in thread
From: Daniel Vetter @ 2020-10-02  9:58 UTC (permalink / raw)
  To: Christian König
  Cc: Luben Tuikov, Heiko Stübner, Dave Airlie, Nouveau Dev,
	Linus Walleij, dri-devel, Wilson, Chris, Melissa Wen, Anholt,
	Eric, Huang Rui, Gerd Hoffmann, Sam Ravnborg, Sumit Semwal,
	Emil Velikov, Rob Herring, linux-samsung-soc, Joonyoung Shim,
	lima, Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Kukjin Kim, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	Maarten Lankhorst, The etnaviv authors, Maxime Ripard, Inki Dae,
	Hans de Goede, Christian Gmeiner,
	moderated list:DRM DRIVERS FOR XEN, open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM,
	moderated list:DMA BUFFER SHARING FRAMEWORK, amd-gfx list,
	Tomeu Vizoso, Seung-Woo Kim, Sandy Huang, Kyungmin Park,
	Qinglang Miao, Qiang Yu, Thomas Zimmermann, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK, Lucas Stach

On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
> On Wed, Sep 30, 2020 at 2:34 PM Christian König
> <christian.koenig@amd.com> wrote:
> >
> > On 30.09.20 11:47, Daniel Vetter wrote:
> > > On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
> > >> On 30.09.20 10:19, Thomas Zimmermann wrote:
> > >>> Hi
> > >>>
> > >>> On 30.09.20 10:05, Christian König wrote:
> > >>>> On 29.09.20 19:49, Thomas Zimmermann wrote:
> > >>>>> Hi Christian
> > >>>>>
> > >>>>> On 29.09.20 17:35, Christian König wrote:
> > >>>>>> On 29.09.20 17:14, Thomas Zimmermann wrote:
> > >>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
> > >>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> > >>>>>>> with these values. Helpful for TTM-based drivers.
> > >>>>>> We could completely drop that if we use the same structure inside TTM as
> > >>>>>> well.
> > >>>>>>
> > >>>>>> Additional to that which driver is going to use this?
> > >>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
> > >>>>> retrieve the pointer via this function.
> > >>>>>
> > >>>>> I do want to see all that being more tightly integrated into TTM, but
> > >>>>> not in this series. This one is about fixing the bochs-on-sparc64
> > >>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> > >>>> I should have asked which driver you try to fix here :)
> > >>>>
> > >>>> In this case just keep the function inside bochs and only fix it there.
> > >>>>
> > >>>> All other drivers can be fixed when we generally pump this through TTM.
> > >>> Did you take a look at patch 3? This function will be used by VRAM
> > >>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
> > >>> have to duplicate the functionality in each of these drivers. Bochs
> > >>> itself uses VRAM helpers and doesn't touch the function directly.
> > >> Ah, ok can we have that then only in the VRAM helpers?
> > >>
> > >> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
> > >> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
> > >>
> > >> What I want to avoid is to have another conversion function in TTM because
> > >> what happens here is that we already convert from ttm_bus_placement to
> > >> ttm_bo_kmap_obj and then to dma_buf_map.
> > > Hm I'm not really seeing how that helps with a gradual conversion of
> > > everything over to dma_buf_map and assorted helpers for access? There's
> > > too many places in ttm drivers where is_iomem and related stuff is used to
> > > be able to convert it all in one go. An intermediate state with a bunch of
> > > conversions seems fairly unavoidable to me.
> >
> > Fair enough. I would just have started bottom up and not top down.
> >
> > Anyway feel free to go ahead with this approach as long as we can remove
> > the new function again when we clean that stuff up for good.
> 
> Yeah I guess bottom up would make more sense as a refactoring. But the
> main motivation to land this here is to fix the __mmio vs normal
> memory confusion in the fbdev emulation helpers for sparc (and
> anything else that needs this). Hence the top down approach for
> rolling this out.

Ok I started reviewing this a bit more in-depth, and I think this is a bit
too much of a detour.

Looking through all the callers of ttm_bo_kmap, almost everyone maps the
entire object. Only vmwgfx maps less than that. Also, everyone just
immediately follows up with converting that full object map into a
pointer.

So I think what we really want here is:
- new function

int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);

  _vmap name since that's consistent with both dma_buf functions and
  what's usually used to implement this. Outside of the ttm world kmap
  usually just means single-page mappings using kmap() or its iomem
  sibling io_mapping_map*, so it's a rather confusing name for a function
  which usually is just used to set up a vmap of the entire buffer. A
  rough sketch of this and the GEM-level helper follows after this list.

- a helper which can be used for the drm_gem_object_funcs vmap/vunmap
  functions for all ttm drivers. We should be able to make this fully
  generic because a) we now have dma_buf_map and b) drm_gem_object is
  embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
  and gem driver.

  This is maybe a good follow-up, since it should allow us to ditch quite
  a bit of the vram helper code for this more generic stuff. I also might
  have missed some special-cases here, but from a quick look everything
  just pins the buffer to the current location and that's it.

  Also this obviously requires Christian's generic ttm_bo_pin rework
  first.

- roll the above out to drivers.
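A rough sketch of the first two items (illustrative only: pinning is left
out because it depends on the ttm_bo_pin rework, drm_gem_ttm_vmap is a
name made up here, and the upcast assumes the GEM object is embedded as
ttm_buffer_object.base):

/* Sketch of the proposed ttm_bo_vmap(): map the entire BO and
 * describe the result in a struct dma_buf_map.
 */
int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
{
	struct ttm_bo_kmap_obj kmap;
	int ret;

	ret = ttm_bo_kmap(bo, 0, bo->num_pages, &kmap);
	if (ret)
		return ret;

	ttm_kmap_obj_to_dma_buf_map(&kmap, map);
	return 0;
}

/* Generic drm_gem_object_funcs.vmap helper for TTM-based drivers:
 * upcast from the embedded GEM object, then map.
 */
int drm_gem_ttm_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
	struct ttm_buffer_object *bo =
		container_of(gem, struct ttm_buffer_object, base);

	return ttm_bo_vmap(bo, map);
}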

Christian/Thomas, thoughts on this?

I think for the immediate need of rolling this out for vram helpers and
fbdev code we should be able to do this, but just postpone the driver wide
roll-out for now.

Cheers, Daniel

> -Daniel
> 
> >
> > Christian.
> >
> > > -Daniel
> > >
> > >> Thanks,
> > >> Christian.
> > >>
> > >>> Best regards
> > >>> Thomas
> > >>>
> > >>>> Regards,
> > >>>> Christian.
> > >>>>
> > >>>>> Best regards
> > >>>>> Thomas
> > >>>>>
> > >>>>>> Regards,
> > >>>>>> Christian.
> > >>>>>>
> > >>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > >>>>>>> ---
> > >>>>>>>     include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> > >>>>>>>     include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> > >>>>>>>     2 files changed, 44 insertions(+)
> > >>>>>>>
> > >>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > >>>>>>> index c96a25d571c8..62d89f05a801 100644
> > >>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
> > >>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
> > >>>>>>> @@ -34,6 +34,7 @@
> > >>>>>>>     #include <drm/drm_gem.h>
> > >>>>>>>     #include <drm/drm_hashtab.h>
> > >>>>>>>     #include <drm/drm_vma_manager.h>
> > >>>>>>> +#include <linux/dma-buf-map.h>
> > >>>>>>>     #include <linux/kref.h>
> > >>>>>>>     #include <linux/list.h>
> > >>>>>>>     #include <linux/wait.h>
> > >>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
> > >>>>>>> ttm_bo_kmap_obj *map,
> > >>>>>>>         return map->virtual;
> > >>>>>>>     }
> > >>>>>>>     +/**
> > >>>>>>> + * ttm_kmap_obj_to_dma_buf_map
> > >>>>>>> + *
> > >>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> > >>>>>>> + * @map: Returns the mapping as struct dma_buf_map
> > >>>>>>> + *
> > >>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> > >>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
> > >>>>>>> + */
> > >>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
> > >>>>>>> *kmap,
> > >>>>>>> +                           struct dma_buf_map *map)
> > >>>>>>> +{
> > >>>>>>> +    bool is_iomem;
> > >>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> > >>>>>>> +
> > >>>>>>> +    if (!vaddr)
> > >>>>>>> +        dma_buf_map_clear(map);
> > >>>>>>> +    else if (is_iomem)
> > >>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> > >>>>>>> +    else
> > >>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
> > >>>>>>> +}
> > >>>>>>> +
> > >>>>>>>     /**
> > >>>>>>>      * ttm_bo_kmap
> > >>>>>>>      *
> > >>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > >>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
> > >>>>>>> --- a/include/linux/dma-buf-map.h
> > >>>>>>> +++ b/include/linux/dma-buf-map.h
> > >>>>>>> @@ -45,6 +45,12 @@
> > >>>>>>>      *
> > >>>>>>>      *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > >>>>>>>      *
> > >>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > >>>>>>> + *
> > >>>>>>> + * .. code-block:: c
> > >>>>>>> + *
> > >>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > >>>>>>> + *
> > >>>>>>>      * Test if a mapping is valid with either dma_buf_map_is_set() or
> > >>>>>>>      * dma_buf_map_is_null().
> > >>>>>>>      *
> > >>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > >>>>>>> dma_buf_map *map, void *vaddr)
> > >>>>>>>         map->is_iomem = false;
> > >>>>>>>     }
> > >>>>>>>     +/**
> > >>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > >>>>>>> an address in I/O memory
> > >>>>>>> + * @map:        The dma-buf mapping structure
> > >>>>>>> + * @vaddr_iomem:    An I/O-memory address
> > >>>>>>> + *
> > >>>>>>> + * Sets the address and the I/O-memory flag.
> > >>>>>>> + */
> > >>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > >>>>>>> +                           void __iomem *vaddr_iomem)
> > >>>>>>> +{
> > >>>>>>> +    map->vaddr_iomem = vaddr_iomem;
> > >>>>>>> +    map->is_iomem = true;
> > >>>>>>> +}
> > >>>>>>> +
> > >>>>>>>     /**
> > >>>>>>>      * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> > >>>>>>> for equality
> > >>>>>>>      * @lhs:    The dma-buf mapping structure
> >
> 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-10-02  9:58                   ` Daniel Vetter
@ 2020-10-02 11:30                     ` Christian König
  2020-10-02 12:21                       ` Daniel Vetter
  2020-10-07 12:57                     ` Thomas Zimmermann
  1 sibling, 1 reply; 33+ messages in thread
From: Christian König @ 2020-10-02 11:30 UTC (permalink / raw)
  To: Daniel Vetter, Christian König
  Cc: Heiko Stübner, Dave Airlie, Nouveau Dev, Linus Walleij,
	dri-devel, Wilson, Chris, Melissa Wen, Anholt, Eric, Huang Rui,
	Gerd Hoffmann, Qiang Yu, Sam Ravnborg, Sumit Semwal,
	Emil Velikov, Rob Herring, linux-samsung-soc, Joonyoung Shim,
	lima, Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Luben Tuikov, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	Maarten Lankhorst, The etnaviv authors, Maxime Ripard, Inki Dae,
	Hans de Goede, Christian Gmeiner,
	moderated list:DRM DRIVERS FOR XEN, open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM,
	moderated list:DMA BUFFER SHARING FRAMEWORK, amd-gfx list,
	Tomeu Vizoso, Seung-Woo Kim, Sandy Huang, Kyungmin Park,
	Qinglang Miao, Kukjin Kim, Thomas Zimmermann, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK, Lucas Stach

On 02.10.20 at 11:58, Daniel Vetter wrote:
> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
>> <christian.koenig@amd.com> wrote:
>>> On 30.09.20 at 11:47, Daniel Vetter wrote:
>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>>>>> On 30.09.20 at 10:19, Thomas Zimmermann wrote:
>>>>>> Hi
>>>>>>
>>>>>> On 30.09.20 at 10:05, Christian König wrote:
>>>>>>> On 29.09.20 at 19:49, Thomas Zimmermann wrote:
>>>>>>>> Hi Christian
>>>>>>>>
>>>>>>>> On 29.09.20 at 17:35, Christian König wrote:
>>>>>>>>> On 29.09.20 at 17:14, Thomas Zimmermann wrote:
>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>>>>> well.
>>>>>>>>>
>>>>>>>>> In addition to that, which driver is going to use this?
>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>>>>> retrieve the pointer via this function.
>>>>>>>>
>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>>>>> I should have asked which driver you try to fix here :)
>>>>>>>
>>>>>>> In this case just keep the function inside bochs and only fix it there.
>>>>>>>
>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
>>>>>> Did you take a look at patch 3? This function will be used by VRAM
>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>>>>> have to duplicate the functionality in each of these drivers. Bochs
>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
>>>>> Ah, ok can we have that then only in the VRAM helpers?
>>>>>
>>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>>>>
>>>>> What I want to avoid is to have another conversion function in TTM because
>>>>> what happens here is that we already convert from ttm_bus_placement to
>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
>>>> Hm I'm not really seeing how that helps with a gradual conversion of
>>>> everything over to dma_buf_map and assorted helpers for access? There's
>>>> too many places in ttm drivers where is_iomem and related stuff is used to
>>>> be able to convert it all in one go. An intermediate state with a bunch of
>>>> conversions seems fairly unavoidable to me.
>>> Fair enough. I would just have started bottom up and not top down.
>>>
>>> Anyway feel free to go ahead with this approach as long as we can remove
>>> the new function again when we clean that stuff up for good.
>> Yeah I guess bottom up would make more sense as a refactoring. But the
>> main motivation to land this here is to fix the __mmio vs normal
>> memory confusion in the fbdev emulation helpers for sparc (and
>> anything else that needs this). Hence the top down approach for
>> rolling this out.
> Ok I started reviewing this a bit more in-depth, and I think this is a bit
> too much of a detour.
>
> Looking through all the callers of ttm_bo_kmap almost everyone maps the
> entire object. Only vmwgfx maps less than that. Also, everyone just
> immediately follows up with converting that full object map into a
> pointer.
>
> So I think what we really want here is:
> - new function
>
> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>
>    _vmap name since that's consistent with both dma_buf functions and
>    what's usually used to implement this. Outside of the ttm world kmap
> >    usually just means single-page mappings using kmap() or its iomem
> >    sibling io_mapping_map*, so it's a rather confusing name for a function
> >    which is usually just used to set up a vmap of the entire buffer.
>
> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
>    functions for all ttm drivers. We should be able to make this fully
>    generic because a) we now have dma_buf_map and b) drm_gem_object is
>    embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
>    and gem driver.
>
>    This is maybe a good follow-up, since it should allow us to ditch quite
>    a bit of the vram helper code for this more generic stuff. I also might
>    have missed some special-cases here, but from a quick look everything
>    just pins the buffer to the current location and that's it.
>
>    Also this obviously requires Christian's generic ttm_bo_pin rework
>    first.
>
> - roll the above out to drivers.
>
> Christian/Thomas, thoughts on this?

Calling this vmap instead of kmap certainly makes sense.

Not 100% sure about the generic helpers, but it sounds like this should 
indeed look rather clean in the end.

Christian.

>
> I think for the immediate need of rolling this out for vram helpers and
> > fbdev code we should be able to do this, but just postpone the driver-wide
> roll-out for now.
>
> Cheers, Daniel
>
>> -Daniel
>>
>>> Christian.
>>>
>>>> -Daniel
>>>>
>>>>> Thanks,
>>>>> Christian.
>>>>>
>>>>>> Best regards
>>>>>> Thomas
>>>>>>
>>>>>>> Regards,
>>>>>>> Christian.
>>>>>>>
>>>>>>>> Best regards
>>>>>>>> Thomas
>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Christian.
>>>>>>>>>
>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>>>>> ---
>>>>>>>>>>      include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>>>>      include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>>>>      2 files changed, 44 insertions(+)
>>>>>>>>>>
>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>>>>      #include <drm/drm_gem.h>
>>>>>>>>>>      #include <drm/drm_hashtab.h>
>>>>>>>>>>      #include <drm/drm_vma_manager.h>
>>>>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>>>>      #include <linux/kref.h>
>>>>>>>>>>      #include <linux/list.h>
>>>>>>>>>>      #include <linux/wait.h>
>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>>>>          return map->virtual;
>>>>>>>>>>      }
>>>>>>>>>>      +/**
>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>>>>> + *
>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>>>>> + *
>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>>>>> + */
>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>>>>> *kmap,
>>>>>>>>>> +                           struct dma_buf_map *map)
>>>>>>>>>> +{
>>>>>>>>>> +    bool is_iomem;
>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>>>>> +
>>>>>>>>>> +    if (!vaddr)
>>>>>>>>>> +        dma_buf_map_clear(map);
>>>>>>>>>> +    else if (is_iomem)
>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>>>>> +    else
>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>>      /**
>>>>>>>>>>       * ttm_bo_kmap
>>>>>>>>>>       *
>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>>>>       *
>>>>>>>>>>       *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>>>>>>>>       *
>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>>>>> + *
>>>>>>>>>> + * .. code-block:: c
>>>>>>>>>> + *
>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>>>>>>>> + *
>>>>>>>>>>       * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>>>>       * dma_buf_map_is_null().
>>>>>>>>>>       *
>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>>>>          map->is_iomem = false;
>>>>>>>>>>      }
>>>>>>>>>>      +/**
>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>>>>> an address in I/O memory
>>>>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>>>>> + *
>>>>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>>>>> + */
>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>>>>> +{
>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>>>>> +    map->is_iomem = true;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>>      /**
>>>>>>>>>>       * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>>>>> for equality
>>>>>>>>>>       * @lhs:    The dma-buf mapping structure
>>
>> -- 
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch


_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-10-02 11:30                     ` Christian König
@ 2020-10-02 12:21                       ` Daniel Vetter
  0 siblings, 0 replies; 33+ messages in thread
From: Daniel Vetter @ 2020-10-02 12:21 UTC (permalink / raw)
  To: Christian König
  Cc: Heiko Stübner, Dave Airlie, Nouveau Dev, Linus Walleij,
	dri-devel, Wilson, Chris, Melissa Wen, Anholt, Eric, Huang Rui,
	Gerd Hoffmann, Qiang Yu, Sam Ravnborg, Sumit Semwal,
	Emil Velikov, Rob Herring, linux-samsung-soc, Joonyoung Shim,
	lima, Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Luben Tuikov, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	Maarten Lankhorst, The etnaviv authors, Maxime Ripard, Inki Dae,
	Hans de Goede, Christian Gmeiner,
	moderated list:DRM DRIVERS FOR XEN, open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM,
	moderated list:DMA BUFFER SHARING FRAMEWORK, amd-gfx list,
	Tomeu Vizoso, Seung-Woo Kim, Sandy Huang, Kyungmin Park,
	Qinglang Miao, Kukjin Kim, Thomas Zimmermann, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK, Lucas Stach

On Fri, Oct 2, 2020 at 1:30 PM Christian König
<ckoenig.leichtzumerken@gmail.com> wrote:
>
> On 02.10.20 at 11:58, Daniel Vetter wrote:
> > On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
> >> On Wed, Sep 30, 2020 at 2:34 PM Christian König
> >> <christian.koenig@amd.com> wrote:
> >>> On 30.09.20 at 11:47, Daniel Vetter wrote:
> >>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
> >>>>> On 30.09.20 at 10:19, Thomas Zimmermann wrote:
> >>>>>> Hi
> >>>>>>
> >>>>>>> On 30.09.20 at 10:05, Christian König wrote:
> >>>>>>>> On 29.09.20 at 19:49, Thomas Zimmermann wrote:
> >>>>>>>> Hi Christian
> >>>>>>>>
> >>>>>>>>> On 29.09.20 at 17:35, Christian König wrote:
> >>>>>>>>>> On 29.09.20 at 17:14, Thomas Zimmermann wrote:
> >>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
> >>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> >>>>>>>>>> with these values. Helpful for TTM-based drivers.
> >>>>>>>>> We could completely drop that if we use the same structure inside TTM as
> >>>>>>>>> well.
> >>>>>>>>>
> >>>>>>>>> In addition to that, which driver is going to use this?
> >>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
> >>>>>>>> retrieve the pointer via this function.
> >>>>>>>>
> >>>>>>>> I do want to see all that being more tightly integrated into TTM, but
> >>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
> >>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> >>>>>>> I should have asked which driver you try to fix here :)
> >>>>>>>
> >>>>>>> In this case just keep the function inside bochs and only fix it there.
> >>>>>>>
> >>>>>>> All other drivers can be fixed when we generally pump this through TTM.
> >>>>>> Did you take a look at patch 3? This function will be used by VRAM
> >>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
> >>>>>> have to duplicate the functionality in each of these drivers. Bochs
> >>>>>> itself uses VRAM helpers and doesn't touch the function directly.
> >>>>> Ah, ok can we have that then only in the VRAM helpers?
> >>>>>
> >>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
> >>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
> >>>>>
> >>>>> What I want to avoid is to have another conversion function in TTM because
> >>>>> what happens here is that we already convert from ttm_bus_placement to
> >>>>> ttm_bo_kmap_obj and then to dma_buf_map.
> >>>> Hm I'm not really seeing how that helps with a gradual conversion of
> >>>> everything over to dma_buf_map and assorted helpers for access? There's
> >>>> too many places in ttm drivers where is_iomem and related stuff is used to
> >>>> be able to convert it all in one go. An intermediate state with a bunch of
> >>>> conversions seems fairly unavoidable to me.
> >>> Fair enough. I would just have started bottom up and not top down.
> >>>
> >>> Anyway feel free to go ahead with this approach as long as we can remove
> >>> the new function again when we clean that stuff up for good.
> >> Yeah I guess bottom up would make more sense as a refactoring. But the
> >> main motivation to land this here is to fix the __mmio vs normal
> >> memory confusion in the fbdev emulation helpers for sparc (and
> >> anything else that needs this). Hence the top down approach for
> >> rolling this out.
> > Ok I started reviewing this a bit more in-depth, and I think this is a bit
> > too much of a detour.
> >
> > Looking through all the callers of ttm_bo_kmap almost everyone maps the
> > entire object. Only vmwgfx maps less than that. Also, everyone just
> > immediately follows up with converting that full object map into a
> > pointer.
> >
> > So I think what we really want here is:
> > - new function
> >
> > int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> >
> >    _vmap name since that's consistent with both dma_buf functions and
> >    what's usually used to implement this. Outside of the ttm world kmap
> >    usually just means single-page mappings using kmap() or its iomem
> >    sibling io_mapping_map*, so it's a rather confusing name for a function
> >    which is usually just used to set up a vmap of the entire buffer.
> >
> > - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
> >    functions for all ttm drivers. We should be able to make this fully
> >    generic because a) we now have dma_buf_map and b) drm_gem_object is
> >    embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
> >    and gem driver.
> >
> >    This is maybe a good follow-up, since it should allow us to ditch quite
> >    a bit of the vram helper code for this more generic stuff. I also might
> >    have missed some special-cases here, but from a quick look everything
> >    just pins the buffer to the current location and that's it.
> >
> >    Also this obviously requires Christian's generic ttm_bo_pin rework
> >    first.
> >
> > - roll the above out to drivers.
> >
> > Christian/Thomas, thoughts on this?
>
> Calling this vmap instead of kmap certainly makes sense.
>
> Not 100% sure about the generic helpers, but it sounds like this should
> indeed look rather clean in the end.

Yeah, the generic helper is probably better left for a later step, after
we've rolled ttm_bo_vmap out everywhere.
-Daniel

>
> Christian.
>
> >
> > I think for the immediate need of rolling this out for vram helpers and
> > fbdev code we should be able to do this, but just postpone the driver-wide
> > roll-out for now.
> >
> > Cheers, Daniel
> >
> >> -Daniel
> >>
> >>> Christian.
> >>>
> >>>> -Daniel
> >>>>
> >>>>> Thanks,
> >>>>> Christian.
> >>>>>
> >>>>>> Best regards
> >>>>>> Thomas
> >>>>>>
> >>>>>>> Regards,
> >>>>>>> Christian.
> >>>>>>>
> >>>>>>>> Best regards
> >>>>>>>> Thomas
> >>>>>>>>
> >>>>>>>>> Regards,
> >>>>>>>>> Christian.
> >>>>>>>>>
> >>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >>>>>>>>>> ---
> >>>>>>>>>>      include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> >>>>>>>>>>      include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> >>>>>>>>>>      2 files changed, 44 insertions(+)
> >>>>>>>>>>
> >>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
> >>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> @@ -34,6 +34,7 @@
> >>>>>>>>>>      #include <drm/drm_gem.h>
> >>>>>>>>>>      #include <drm/drm_hashtab.h>
> >>>>>>>>>>      #include <drm/drm_vma_manager.h>
> >>>>>>>>>> +#include <linux/dma-buf-map.h>
> >>>>>>>>>>      #include <linux/kref.h>
> >>>>>>>>>>      #include <linux/list.h>
> >>>>>>>>>>      #include <linux/wait.h>
> >>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
> >>>>>>>>>> ttm_bo_kmap_obj *map,
> >>>>>>>>>>          return map->virtual;
> >>>>>>>>>>      }
> >>>>>>>>>>      +/**
> >>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
> >>>>>>>>>> + *
> >>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> >>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
> >>>>>>>>>> + *
> >>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> >>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
> >>>>>>>>>> + */
> >>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
> >>>>>>>>>> *kmap,
> >>>>>>>>>> +                           struct dma_buf_map *map)
> >>>>>>>>>> +{
> >>>>>>>>>> +    bool is_iomem;
> >>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> >>>>>>>>>> +
> >>>>>>>>>> +    if (!vaddr)
> >>>>>>>>>> +        dma_buf_map_clear(map);
> >>>>>>>>>> +    else if (is_iomem)
> >>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> >>>>>>>>>> +    else
> >>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>>      /**
> >>>>>>>>>>       * ttm_bo_kmap
> >>>>>>>>>>       *
> >>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> >>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
> >>>>>>>>>> --- a/include/linux/dma-buf-map.h
> >>>>>>>>>> +++ b/include/linux/dma-buf-map.h
> >>>>>>>>>> @@ -45,6 +45,12 @@
> >>>>>>>>>>       *
> >>>>>>>>>>       *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> >>>>>>>>>>       *
> >>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> >>>>>>>>>> + *
> >>>>>>>>>> + * .. code-block:: c
> >>>>>>>>>> + *
> >>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> >>>>>>>>>> + *
> >>>>>>>>>>       * Test if a mapping is valid with either dma_buf_map_is_set() or
> >>>>>>>>>>       * dma_buf_map_is_null().
> >>>>>>>>>>       *
> >>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> >>>>>>>>>> dma_buf_map *map, void *vaddr)
> >>>>>>>>>>          map->is_iomem = false;
> >>>>>>>>>>      }
> >>>>>>>>>>      +/**
> >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> >>>>>>>>>> an address in I/O memory
> >>>>>>>>>> + * @map:        The dma-buf mapping structure
> >>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
> >>>>>>>>>> + *
> >>>>>>>>>> + * Sets the address and the I/O-memory flag.
> >>>>>>>>>> + */
> >>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> >>>>>>>>>> +                           void __iomem *vaddr_iomem)
> >>>>>>>>>> +{
> >>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
> >>>>>>>>>> +    map->is_iomem = true;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>>      /**
> >>>>>>>>>>       * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> >>>>>>>>>> for equality
> >>>>>>>>>>       * @lhs:    The dma-buf mapping structure
> >>
> >> --
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> http://blog.ffwll.ch
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 3/7] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
  2020-09-29 15:14 ` [PATCH v3 3/7] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends Thomas Zimmermann
@ 2020-10-02 13:02   ` Daniel Vetter
  0 siblings, 0 replies; 33+ messages in thread
From: Daniel Vetter @ 2020-10-02 13:02 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
	chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
	emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
	oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
	kgene, bskeggs, linux+etnaviv, spice-devel, alyssa.rosenzweig,
	maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
	christian.gmeiner, xen-devel, virtualization, sean, apaneers,
	linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
	sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
	alexander.deucher, linux-media, christian.koenig, l.stach

On Tue, Sep 29, 2020 at 05:14:33PM +0200, Thomas Zimmermann wrote:
> This patch replaces the vmap/vunmap's use of raw pointers in GEM object
> functions with instances of struct dma_buf_map. GEM backends are
> converted as well.
> 
> For most GEM backends, this simply changes the returned type. GEM VRAM
> helpers are also updated to indicate whether the returned framebuffer
> address is in system or I/O memory.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 14 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |  4 +-
>  drivers/gpu/drm/ast/ast_cursor.c            | 29 +++----
>  drivers/gpu/drm/ast/ast_drv.h               |  7 +-
>  drivers/gpu/drm/drm_gem.c                   | 22 ++---
>  drivers/gpu/drm/drm_gem_cma_helper.c        | 14 ++--
>  drivers/gpu/drm/drm_gem_shmem_helper.c      | 48 ++++++-----
>  drivers/gpu/drm/drm_gem_vram_helper.c       | 90 +++++++++++----------
>  drivers/gpu/drm/etnaviv/etnaviv_drv.h       |  4 +-
>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 11 ++-
>  drivers/gpu/drm/exynos/exynos_drm_gem.c     |  6 +-
>  drivers/gpu/drm/exynos/exynos_drm_gem.h     |  4 +-
>  drivers/gpu/drm/lima/lima_gem.c             |  6 +-
>  drivers/gpu/drm/lima/lima_sched.c           | 11 ++-
>  drivers/gpu/drm/mgag200/mgag200_mode.c      | 12 +--
>  drivers/gpu/drm/nouveau/nouveau_gem.h       |  4 +-
>  drivers/gpu/drm/nouveau/nouveau_prime.c     |  9 ++-
>  drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 ++--
>  drivers/gpu/drm/qxl/qxl_display.c           | 13 +--
>  drivers/gpu/drm/qxl/qxl_draw.c              | 16 ++--
>  drivers/gpu/drm/qxl/qxl_drv.h               |  8 +-
>  drivers/gpu/drm/qxl/qxl_object.c            | 23 +++---
>  drivers/gpu/drm/qxl/qxl_object.h            |  2 +-
>  drivers/gpu/drm/qxl/qxl_prime.c             | 12 +--
>  drivers/gpu/drm/radeon/radeon_gem.c         |  4 +-
>  drivers/gpu/drm/radeon/radeon_prime.c       |  9 ++-
>  drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 +++--
>  drivers/gpu/drm/rockchip/rockchip_drm_gem.h |  4 +-
>  drivers/gpu/drm/tiny/cirrus.c               | 10 ++-
>  drivers/gpu/drm/tiny/gm12u320.c             | 10 ++-
>  drivers/gpu/drm/udl/udl_modeset.c           |  8 +-
>  drivers/gpu/drm/vboxvideo/vbox_mode.c       | 11 ++-
>  drivers/gpu/drm/vc4/vc4_bo.c                |  6 +-
>  drivers/gpu/drm/vc4/vc4_drv.h               |  2 +-
>  drivers/gpu/drm/vgem/vgem_drv.c             | 16 ++--
>  drivers/gpu/drm/xen/xen_drm_front_gem.c     | 18 +++--
>  drivers/gpu/drm/xen/xen_drm_front_gem.h     |  6 +-
>  include/drm/drm_gem.h                       |  5 +-
>  include/drm/drm_gem_cma_helper.h            |  4 +-
>  include/drm/drm_gem_shmem_helper.h          |  4 +-
>  include/drm/drm_gem_vram_helper.h           |  4 +-
>  41 files changed, 304 insertions(+), 222 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 5b465ab774d1..de7d0cfe1b93 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -44,13 +44,14 @@
>  /**
>   * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
>   * @obj: GEM BO
> + * @map: The virtual address of the mapping.
>   *
>   * Sets up an in-kernel virtual mapping of the BO's memory.
>   *
>   * Returns:
> - * The virtual address of the mapping or an error pointer.
> + * 0 on success, or a negative errno code otherwise.
>   */
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
> +int amdgpu_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
>  	int ret;
> @@ -58,19 +59,20 @@ void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
>  	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
>  			  &bo->dma_buf_vmap);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
> +	ttm_kmap_obj_to_dma_buf_map(&bo->dma_buf_vmap, map);

I guess with the ttm_bo_vmap idea all the ttm changes here will look a bit
different.
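
I.e. hand-waving a bit, and assuming the ttm_bo_vmap from the other
subthread exists and keeps the kmap obj around somewhere:

	int amdgpu_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
	{
		struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);

		return ttm_bo_vmap(&bo->tbo, map);
	}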

>  
> -	return bo->dma_buf_vmap.virtual;
> +	return 0;
>  }
>  
>  /**
>   * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
>   * @obj: GEM BO
> - * @vaddr: Virtual address (unused)
> + * @map: Virtual address (unused)
>   *
>   * Tears down the in-kernel virtual mapping of the BO's memory.
>   */
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> index 2c5c84a06bb9..622642793064 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
>  					    struct dma_buf *dma_buf);
>  bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
>  				      struct amdgpu_bo *bo);
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int amdgpu_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
>  			  struct vm_area_struct *vma);
>  
> diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
> index e0f4613918ad..459a3774e4e1 100644
> --- a/drivers/gpu/drm/ast/ast_cursor.c
> +++ b/drivers/gpu/drm/ast/ast_cursor.c
> @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
>  
>  	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
>  		gbo = ast->cursor.gbo[i];
> -		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> +		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
>  		drm_gem_vram_unpin(gbo);
>  		drm_gem_vram_put(gbo);
>  	}
> @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
>  	struct drm_device *dev = &ast->base;
>  	size_t size, i;
>  	struct drm_gem_vram_object *gbo;
> -	void __iomem *vaddr;
> +	struct dma_buf_map map;
>  	int ret;
>  
>  	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
> @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
>  			drm_gem_vram_put(gbo);
>  			goto err_drm_gem_vram_put;
>  		}
> -		vaddr = drm_gem_vram_vmap(gbo);
> -		if (IS_ERR(vaddr)) {
> -			ret = PTR_ERR(vaddr);
> +		ret = drm_gem_vram_vmap(gbo, &map);
> +		if (ret) {
>  			drm_gem_vram_unpin(gbo);
>  			drm_gem_vram_put(gbo);
>  			goto err_drm_gem_vram_put;
>  		}
>  
>  		ast->cursor.gbo[i] = gbo;
> -		ast->cursor.vaddr[i] = vaddr;
> +		ast->cursor.map[i] = map;
>  	}
>  
>  	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
> @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
>  	while (i) {
>  		--i;
>  		gbo = ast->cursor.gbo[i];
> -		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> +		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
>  		drm_gem_vram_unpin(gbo);
>  		drm_gem_vram_put(gbo);
>  	}
> @@ -170,8 +169,8 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
>  {
>  	struct drm_device *dev = &ast->base;
>  	struct drm_gem_vram_object *gbo;
> +	struct dma_buf_map map;
>  	int ret;
> -	void *src;
>  	void __iomem *dst;
>  
>  	if (drm_WARN_ON_ONCE(dev, fb->width > AST_MAX_HWC_WIDTH) ||
> @@ -183,18 +182,16 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
>  	ret = drm_gem_vram_pin(gbo, 0);
>  	if (ret)
>  		return ret;
> -	src = drm_gem_vram_vmap(gbo);
> -	if (IS_ERR(src)) {
> -		ret = PTR_ERR(src);
> +	ret = drm_gem_vram_vmap(gbo, &map);
> +	if (ret)
>  		goto err_drm_gem_vram_unpin;
> -	}
>  
> -	dst = ast->cursor.vaddr[ast->cursor.next_index];
> +	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
>  
>  	/* do data transfer to cursor BO */
> -	update_cursor_image(dst, src, fb->width, fb->height);
> +	update_cursor_image(dst, map.vaddr, fb->width, fb->height);

I don't think digging around in the pointer is a good idea, imo this
should get a 

	/* TODO: Use mapping abstraction properly */

or similar. Same for all the other uses of map.vaddr added to drivers
below (the stuff in helpers that the next patches will change again I
think you can leave as-is, it'll go away).

I'm also wondering whether we should prefix all members of struct
dma_buf_map with _ to make it clear they shouldn't be touched, so
map._vaddr and map._is_iomem.
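
Roughly like this, purely to illustrate the idea:

	struct dma_buf_map {
		union {
			void __iomem *_vaddr_iomem;
			void *_vaddr;
		};
		bool _is_iomem;
	};

with the dma_buf_map_set_vaddr* helpers being the only code that
touches the members directly.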

Also add a todo.rst entry for all these; there's a lot from looking through
this patch.

>  
> -	drm_gem_vram_vunmap(gbo, src);
> +	drm_gem_vram_vunmap(gbo, &map);
>  	drm_gem_vram_unpin(gbo);
>  
>  	return 0;
> @@ -257,7 +254,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
>  	u8 __iomem *sig;
>  	u8 jreg;
>  
> -	dst = ast->cursor.vaddr[ast->cursor.next_index];
> +	dst = ast->cursor.map[ast->cursor.next_index].vaddr;
>  
>  	sig = dst + AST_HWC_SIZE;
>  	writel(x, sig + AST_HWC_SIGNATURE_X);
> diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
> index 467049ca8430..f963141dd851 100644
> --- a/drivers/gpu/drm/ast/ast_drv.h
> +++ b/drivers/gpu/drm/ast/ast_drv.h
> @@ -28,10 +28,11 @@
>  #ifndef __AST_DRV_H__
>  #define __AST_DRV_H__
>  
> -#include <linux/types.h>
> -#include <linux/io.h>
> +#include <linux/dma-buf-map.h>
>  #include <linux/i2c.h>
>  #include <linux/i2c-algo-bit.h>
> +#include <linux/io.h>
> +#include <linux/types.h>
>  
>  #include <drm/drm_connector.h>
>  #include <drm/drm_crtc.h>
> @@ -131,7 +132,7 @@ struct ast_private {
>  
>  	struct {
>  		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
> -		void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
> +		struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
>  		unsigned int next_index;
>  	} cursor;
>  
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 1da67d34e55d..0c4a66dea5c2 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1207,26 +1207,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>  
>  void *drm_gem_vmap(struct drm_gem_object *obj)
>  {
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	if (obj->funcs->vmap)
> -		vaddr = obj->funcs->vmap(obj);
> -	else
> -		vaddr = ERR_PTR(-EOPNOTSUPP);
> +	if (!obj->funcs->vmap)
> +		return ERR_PTR(-EOPNOTSUPP);
>  
> -	if (!vaddr)
> -		vaddr = ERR_PTR(-ENOMEM);
> +	ret = obj->funcs->vmap(obj, &map);
> +	if (ret)
> +		return ERR_PTR(ret);
> +	else if (dma_buf_map_is_null(&map))
> +		return ERR_PTR(-ENOMEM);
>  
> -	return vaddr;
> +	return map.vaddr;
>  }
>  
>  void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
>  {
> +	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> +
>  	if (!vaddr)
>  		return;
>  
>  	if (obj->funcs->vunmap)
> -		obj->funcs->vunmap(obj, vaddr);
> +		obj->funcs->vunmap(obj, &map);
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 2165633c9b9e..e87cd36518d3 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
>   * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
>   *     address space
>   * @obj: GEM object
> + * @map: Returns the kernel virtual address of the CMA GEM object's backing
> + *       store.
>   *
>   * This function maps a buffer exported via DRM PRIME into the kernel's
>   * virtual address space. Since the CMA buffers are already mapped into the
> @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
>   * driver's &drm_gem_object_funcs.vmap callback.
>   *
>   * Returns:
> - * The kernel virtual address of the CMA GEM object's backing store.
> + * 0 on success, or a negative error code otherwise.
>   */
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
>  
> -	return cma_obj->vaddr;
> +	dma_buf_map_set_vaddr(map, cma_obj->vaddr);
> +
> +	return 0;
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>  
> @@ -541,14 +545,14 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>   * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
>   *     address space
>   * @obj: GEM object
> - * @vaddr: kernel virtual address where the CMA GEM object was mapped
> + * @map: Kernel virtual address where the CMA GEM object was mapped
>   *
>   * This function removes a buffer exported via DRM PRIME from the kernel's
>   * virtual address space. This is a no-op because CMA buffers cannot be
>   * unmapped from kernel space. Drivers using the CMA helpers should set this
>   * as their &drm_gem_object_funcs.vunmap callback.
>   */
> -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	/* Nothing to do */
>  }
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index fb11df7aced5..5553f58f68f3 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_unpin);
>  
> -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *obj = &shmem->base;
> -	struct dma_buf_map map;
>  	int ret = 0;
>  
> -	if (shmem->vmap_use_count++ > 0)
> -		return shmem->vaddr;
> +	if (shmem->vmap_use_count++ > 0) {
> +		dma_buf_map_set_vaddr(map, shmem->vaddr);
> +		return 0;
> +	}
>  
>  	if (obj->import_attach) {
> -		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> -		if (!ret)
> -			shmem->vaddr = map.vaddr;
> +		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> +		if (!ret) {
> +			if (WARN_ON(map->is_iomem)) {
> +				ret = -EIO;
> +				goto err_put_pages;
> +			}
> +			shmem->vaddr = map->vaddr;
> +		}
>  	} else {
>  		pgprot_t prot = PAGE_KERNEL;
>  
> @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  				    VM_MAP, prot);
>  		if (!shmem->vaddr)
>  			ret = -ENOMEM;
> +		else
> +			dma_buf_map_set_vaddr(map, shmem->vaddr);
>  	}
>  
>  	if (ret) {
> @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  		goto err_put_pages;
>  	}
>  
> -	return shmem->vaddr;
> +	return 0;
>  
>  err_put_pages:
>  	if (!obj->import_attach)
> @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  err_zero_use:
>  	shmem->vmap_use_count = 0;
>  
> -	return ERR_PTR(ret);
> +	return ret;
>  }
>  
>  /*
>   * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
>   * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + *       store.
>   *
>   * This function makes sure that a contiguous kernel virtual address mapping
>   * exists for the buffer backing the shmem GEM object.
> @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>   * Returns:
>   * 0 on success or a negative error code on failure.
>   */
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> -	void *vaddr;
>  	int ret;
>  
>  	ret = mutex_lock_interruptible(&shmem->vmap_lock);
>  	if (ret)
> -		return ERR_PTR(ret);
> -	vaddr = drm_gem_shmem_vmap_locked(shmem);
> +		return ret;
> +	ret = drm_gem_shmem_vmap_locked(shmem, map);
>  	mutex_unlock(&shmem->vmap_lock);
>  
> -	return vaddr;
> +	return ret;
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_vmap);
>  
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> +					struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *obj = &shmem->base;
> -	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
>  
>  	if (WARN_ON_ONCE(!shmem->vmap_use_count))
>  		return;
> @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
>  		return;
>  
>  	if (obj->import_attach)
> -		dma_buf_vunmap(obj->import_attach->dmabuf, &map);
> +		dma_buf_vunmap(obj->import_attach->dmabuf, map);
>  	else
>  		vunmap(shmem->vaddr);
>  
> @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
>  /*
>   * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
>   * @shmem: shmem GEM object
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
>   *
>   * This function cleans up a kernel virtual address mapping acquired by
>   * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
>   * also be called by drivers directly, in which case it will hide the
>   * differences between dma-buf imported and natively allocated objects.
>   */
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>  
>  	mutex_lock(&shmem->vmap_lock);
> -	drm_gem_shmem_vunmap_locked(shmem);
> +	drm_gem_shmem_vunmap_locked(shmem, map);
>  	mutex_unlock(&shmem->vmap_lock);
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_vunmap);
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 256b346664f2..6a5b932e0d06 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -1,5 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0-or-later
>  
> +#include <linux/dma-buf-map.h>
>  #include <linux/module.h>
>  
>  #include <drm/drm_debugfs.h>
> @@ -382,11 +383,11 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
>  }
>  EXPORT_SYMBOL(drm_gem_vram_unpin);
>  
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
> +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> +				    struct dma_buf_map *map)
>  {
>  	int ret;
>  	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> -	bool is_iomem;
>  
>  	if (gbo->kmap_use_count > 0)
>  		goto out;
> @@ -396,17 +397,30 @@ static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
>  
>  	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
>  
>  out:
> -	if (!kmap->virtual)
> -		return NULL; /* not mapped; don't increment ref */
> +	if (!kmap->virtual) {
> +		dma_buf_map_clear(map);
> +		return 0; /* not mapped; don't increment ref */
> +	}
>  	++gbo->kmap_use_count;
> -	return ttm_kmap_obj_virtual(kmap, &is_iomem);
> +	ttm_kmap_obj_to_dma_buf_map(kmap, map);
> +	return 0;
>  }
>  
> -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
> +				       struct dma_buf_map *map)
>  {
> +	struct drm_device *dev = gbo->bo.base.dev;
> +	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> +	struct dma_buf_map kmap_map;
> +
> +	ttm_kmap_obj_to_dma_buf_map(kmap, &kmap_map);
> +
> +	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&kmap_map, map)))
> +		return; /* BUG: map not mapped from this BO */
> +
>  	if (WARN_ON_ONCE(!gbo->kmap_use_count))
>  		return;
>  	if (--gbo->kmap_use_count > 0)
> @@ -423,7 +437,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
>  /**
>   * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
>   *                       space
> - * @gbo:	The GEM VRAM object to map
> + * @gbo: The GEM VRAM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + *       store.
>   *
>   * The vmap function pins a GEM VRAM object to its current location, either
>   * system or video memory, and maps its buffer into kernel address space.
> @@ -432,48 +448,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
>   * unmap and unpin the GEM VRAM object.
>   *
>   * Returns:
> - * The buffer's virtual address on success, or
> - * an ERR_PTR()-encoded error code otherwise.
> + * 0 on success, or a negative error code otherwise.
>   */
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
>  {
>  	int ret;
> -	void *base;
>  
>  	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
>  
>  	ret = drm_gem_vram_pin_locked(gbo, 0);
>  	if (ret)
>  		goto err_ttm_bo_unreserve;
> -	base = drm_gem_vram_kmap_locked(gbo);
> -	if (IS_ERR(base)) {
> -		ret = PTR_ERR(base);
> +	ret = drm_gem_vram_kmap_locked(gbo, map);
> +	if (ret)
>  		goto err_drm_gem_vram_unpin_locked;
> -	}
>  
>  	ttm_bo_unreserve(&gbo->bo);
>  
> -	return base;
> +	return 0;
>  
>  err_drm_gem_vram_unpin_locked:
>  	drm_gem_vram_unpin_locked(gbo);
>  err_ttm_bo_unreserve:
>  	ttm_bo_unreserve(&gbo->bo);
> -	return ERR_PTR(ret);
> +	return ret;
>  }
>  EXPORT_SYMBOL(drm_gem_vram_vmap);
>  
>  /**
>   * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
> - * @gbo:	The GEM VRAM object to unmap
> - * @vaddr:	The mapping's base address as returned by drm_gem_vram_vmap()
> + * @gbo: The GEM VRAM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
>   *
>   * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
>   * the documentation for drm_gem_vram_vmap() for more information.
>   */
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
>  {
>  	int ret;
>  
> @@ -481,7 +493,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
>  	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
>  		return;
>  
> -	drm_gem_vram_kunmap_locked(gbo);
> +	drm_gem_vram_kunmap_locked(gbo, map);
>  	drm_gem_vram_unpin_locked(gbo);
>  
>  	ttm_bo_unreserve(&gbo->bo);
> @@ -829,37 +841,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
>  }
>  
>  /**
> - * drm_gem_vram_object_vmap() - \
> -	Implements &struct drm_gem_object_funcs.vmap
> - * @gem:	The GEM object to map
> + * drm_gem_vram_object_vmap() -
> + *	Implements &struct drm_gem_object_funcs.vmap
> + * @gem: The GEM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + *       store.
>   *
>   * Returns:
> - * The buffers virtual address on success, or
> - * NULL otherwise.
> + * 0 on success, or a negative error code otherwise.
>   */
> -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
> +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
>  {
>  	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> -	void *base;
>  
> -	base = drm_gem_vram_vmap(gbo);
> -	if (IS_ERR(base))
> -		return NULL;
> -	return base;
> +	return drm_gem_vram_vmap(gbo, map);
>  }
>  
>  /**
> - * drm_gem_vram_object_vunmap() - \
> -	Implements &struct drm_gem_object_funcs.vunmap
> - * @gem:	The GEM object to unmap
> - * @vaddr:	The mapping's base address
> + * drm_gem_vram_object_vunmap() -
> + *	Implements &struct drm_gem_object_funcs.vunmap
> + * @gem: The GEM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
>   */
> -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
> -				       void *vaddr)
> +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
>  {
>  	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
>  
> -	drm_gem_vram_vunmap(gbo, vaddr);
> +	drm_gem_vram_vunmap(gbo, map);
>  }
>  
>  /*
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 914f0867ff71..3d1eb8065fce 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -51,8 +51,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
>  int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>  int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
>  struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
>  			   struct vm_area_struct *vma);
>  struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 135fbff6fecf..36c03e287e29 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -22,12 +22,17 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
>  	return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
>  }
>  
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
> -	return etnaviv_gem_vmap(obj);
> +	void *vaddr = etnaviv_gem_vmap(obj);
> +	if (!vaddr)
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, vaddr);
> +
> +	return 0;
>  }
>  
> -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	/* TODO msm_gem_vunmap() */
>  }
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index e7a6eb96f692..2c74e06669fa 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -471,12 +471,12 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
>  	return &exynos_gem->base;
>  }
>  
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
> +int exynos_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
> -	return NULL;
> +	return -ENOMEM;
>  }
>  
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	/* Nothing to do */
>  }

Might want to just start out with a patch to delete these. We don't keep
dummy functions around generally.
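
For reference, patch 4 of this series makes the GEM core return
-EOPNOTSUPP when the callback is missing, so these dummies only
duplicate an error path. A minimal sketch, assuming exynos ends up
wired through &drm_gem_object_funcs (see the drm_gem.c hunk in the
patch 4 mail below):

	int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
	{
		if (!obj->funcs->vmap)	/* no callback, e.g. after deleting the dummies */
			return -EOPNOTSUPP;
		/* ... */
	}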

> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> index 74e926abeff0..ecfd048fd91d 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> @@ -107,8 +107,8 @@ struct drm_gem_object *
>  exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
>  				     struct dma_buf_attachment *attach,
>  				     struct sg_table *sgt);
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int exynos_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
>  			      struct vm_area_struct *vma);
>  
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 11223fe348df..832e5280a6ed 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
>  	return drm_gem_shmem_pin(obj);
>  }
>  
> -static void *lima_gem_vmap(struct drm_gem_object *obj)
> +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct lima_bo *bo = to_lima_bo(obj);
>  
>  	if (bo->heap_size)
> -		return ERR_PTR(-EINVAL);
> +		return -EINVAL;
>  
> -	return drm_gem_shmem_vmap(obj);
> +	return drm_gem_shmem_vmap(obj, map);
>  }
>  
>  static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index dc6df9e9a40d..a070a85f8f36 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -1,6 +1,7 @@
>  // SPDX-License-Identifier: GPL-2.0 OR MIT
>  /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
>  
> +#include <linux/dma-buf-map.h>
>  #include <linux/kthread.h>
>  #include <linux/slab.h>
>  #include <linux/vmalloc.h>
> @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
>  	struct lima_dump_chunk_buffer *buffer_chunk;
>  	u32 size, task_size, mem_size;
>  	int i;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	mutex_lock(&dev->error_task_list_lock);
>  
> @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
>  		} else {
>  			buffer_chunk->size = lima_bo_size(bo);
>  
> -			data = drm_gem_shmem_vmap(&bo->base.base);
> -			if (IS_ERR_OR_NULL(data)) {
> +			ret = drm_gem_shmem_vmap(&bo->base.base, &map);
> +			if (ret) {
>  				kvfree(et);
>  				goto out;
>  			}
>  
> -			memcpy(buffer_chunk + 1, data, buffer_chunk->size);
> +			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
>  
> -			drm_gem_shmem_vunmap(&bo->base.base, data);
> +			drm_gem_shmem_vunmap(&bo->base.base, &map);
>  		}
>  
>  		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 38672f9e5c4f..ae4c8cb33fae 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -9,6 +9,7 @@
>   */
>  
>  #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>  
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_atomic_state_helper.h>
> @@ -1556,15 +1557,16 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
>  		      struct drm_rect *clip)
>  {
>  	struct drm_device *dev = &mdev->base;
> -	void *vmap;
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	vmap = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (drm_WARN_ON(dev, !vmap))
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (drm_WARN_ON(dev, ret))
>  		return; /* BUG: SHMEM BO should always be vmapped */
>  
> -	drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
> +	drm_fb_memcpy_dstclip(mdev->vram, map.vaddr, fb, clip);
>  
> -	drm_gem_shmem_vunmap(fb->obj[0], vmap);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>  
>  	/* Always scanout image at VRAM offset 0 */
>  	mgag200_set_startadd(mdev, (u32)0);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
> index b35c180322e2..e780b6b1763d 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
> @@ -37,7 +37,7 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
>  extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
>  extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
>  	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
> -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
> -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
> +extern int nouveau_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +extern void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  #endif
> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
> index a8264aebf3d4..75e973a5675a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> @@ -35,7 +35,7 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
>  	return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
>  }
>  
> -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
> +int nouveau_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
>  	int ret;
> @@ -43,12 +43,13 @@ void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
>  	ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
>  			  &nvbo->dma_buf_vmap);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
> +	ttm_kmap_obj_to_dma_buf_map(&nvbo->dma_buf_vmap, map);
>  
> -	return nvbo->dma_buf_vmap.virtual;
> +	return 0;
>  }
>  
> -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
>  
> diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> index fdbc8d949135..5ab03d605f57 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> @@ -5,6 +5,7 @@
>  #include <drm/drm_gem_shmem_helper.h>
>  #include <drm/panfrost_drm.h>
>  #include <linux/completion.h>
> +#include <linux/dma-buf-map.h>
>  #include <linux/iopoll.h>
>  #include <linux/pm_runtime.h>
>  #include <linux/slab.h>
> @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
>  {
>  	struct panfrost_file_priv *user = file_priv->driver_priv;
>  	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> +	struct dma_buf_map map;
>  	struct drm_gem_shmem_object *bo;
>  	u32 cfg, as;
>  	int ret;
> @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
>  		goto err_close_bo;
>  	}
>  
> -	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
> -	if (IS_ERR(perfcnt->buf)) {
> -		ret = PTR_ERR(perfcnt->buf);
> +	ret = drm_gem_shmem_vmap(&bo->base, &map);
> +	if (ret)
>  		goto err_put_mapping;
> -	}
> +	perfcnt->buf = map.vaddr;
>  
>  	/*
>  	 * Invalidate the cache and clear the counters to start from a fresh
> @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
>  	return 0;
>  
>  err_vunmap:
> -	drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
> +	drm_gem_shmem_vunmap(&bo->base, &map);
>  err_put_mapping:
>  	panfrost_gem_mapping_put(perfcnt->mapping);
>  err_close_bo:
> @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
>  {
>  	struct panfrost_file_priv *user = file_priv->driver_priv;
>  	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> +	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
>  
>  	if (user != perfcnt->user)
>  		return -EINVAL;
> @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
>  		  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
>  
>  	perfcnt->user = NULL;
> -	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
> +	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
>  	perfcnt->buf = NULL;
>  	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
>  	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
> diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
> index 6063f3a15329..ed0d22fa0161 100644
> --- a/drivers/gpu/drm/qxl/qxl_display.c
> +++ b/drivers/gpu/drm/qxl/qxl_display.c
> @@ -25,6 +25,7 @@
>  
>  #include <linux/crc32.h>
>  #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>  
>  #include <drm/drm_drv.h>
>  #include <drm/drm_atomic.h>
> @@ -581,7 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>  	struct drm_gem_object *obj;
>  	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
>  	int ret;
> -	void *user_ptr;
> +	struct dma_buf_map user_map;
> +	struct dma_buf_map cursor_map;
>  	int size = 64*64*4;
>  
>  	ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
> @@ -595,7 +597,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>  		user_bo = gem_to_qxl_bo(obj);
>  
>  		/* pinning is done in the prepare/cleanup framevbuffer */
> -		ret = qxl_bo_kmap(user_bo, &user_ptr);
> +		ret = qxl_bo_kmap(user_bo, &user_map);
>  		if (ret)
>  			goto out_free_release;
>  
> @@ -613,7 +615,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>  		if (ret)
>  			goto out_unpin;
>  
> -		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
> +		ret = qxl_bo_kmap(cursor_bo, &cursor_map);
>  		if (ret)
>  			goto out_backoff;
>  
> @@ -627,7 +629,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>  		cursor->chunk.next_chunk = 0;
>  		cursor->chunk.prev_chunk = 0;
>  		cursor->chunk.data_size = size;
> -		memcpy(cursor->chunk.data, user_ptr, size);
> +		memcpy(cursor->chunk.data, user_map.vaddr, size);
>  		qxl_bo_kunmap(cursor_bo);
>  		qxl_bo_kunmap(user_bo);
>  
> @@ -1138,6 +1140,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
>  {
>  	int ret;
>  	struct drm_gem_object *gobj;
> +	struct dma_buf_map map;
>  	int monitors_config_size = sizeof(struct qxl_monitors_config) +
>  		qxl_num_crtc * sizeof(struct qxl_head);
>  
> @@ -1154,7 +1157,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
>  	if (ret)
>  		return ret;
>  
> -	qxl_bo_kmap(qdev->monitors_config_bo, NULL);
> +	qxl_bo_kmap(qdev->monitors_config_bo, &map);
>  
>  	qdev->monitors_config = qdev->monitors_config_bo->kptr;
>  	qdev->ram_header->monitors_config =
> diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
> index 3599db096973..1bf4f465ecf4 100644
> --- a/drivers/gpu/drm/qxl/qxl_draw.c
> +++ b/drivers/gpu/drm/qxl/qxl_draw.c
> @@ -20,6 +20,8 @@
>   * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
>   */
>  
> +#include <linux/dma-buf-map.h>
> +
>  #include <drm/drm_fourcc.h>
>  
>  #include "qxl_drv.h"
> @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
>  					      unsigned int num_clips,
>  					      struct qxl_bo *clips_bo)
>  {
> +	struct dma_buf_map map;
>  	struct qxl_clip_rects *dev_clips;
>  	int ret;
>  
> -	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
> -	if (ret) {
> +	ret = qxl_bo_kmap(clips_bo, &map);
> +	if (ret)
>  		return NULL;
> -	}
> +
> +	dev_clips = map.vaddr;
>  	dev_clips->num_rects = num_clips;
>  	dev_clips->chunk.next_chunk = 0;
>  	dev_clips->chunk.prev_chunk = 0;
> @@ -142,7 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
>  	int stride = fb->pitches[0];
>  	/* depth is not actually interesting, we don't mask with it */
>  	int depth = fb->format->cpp[0] * 8;
> -	uint8_t *surface_base;
> +	struct dma_buf_map surface_map;
>  	struct qxl_release *release;
>  	struct qxl_bo *clips_bo;
>  	struct qxl_drm_image *dimage;
> @@ -197,11 +201,11 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
>  	if (ret)
>  		goto out_release_backoff;
>  
> -	ret = qxl_bo_kmap(bo, (void **)&surface_base);
> +	ret = qxl_bo_kmap(bo, &surface_map);
>  	if (ret)
>  		goto out_release_backoff;
>  
> -	ret = qxl_image_init(qdev, release, dimage, surface_base,
> +	ret = qxl_image_init(qdev, release, dimage, surface_map.vaddr,
>  			     left - dumb_shadow_offset,
>  			     top, width, height, depth, stride);
>  	qxl_bo_kunmap(bo);
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 3602e8b34189..a9e9da4f4605 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -50,6 +50,8 @@
>  
>  #include "qxl_dev.h"
>  
> +struct dma_buf_map;
> +
>  #define DRIVER_AUTHOR		"Dave Airlie"
>  
>  #define DRIVER_NAME		"qxl"
> @@ -335,7 +337,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
>  void qxl_gem_object_close(struct drm_gem_object *obj,
>  			  struct drm_file *file_priv);
>  void qxl_bo_force_delete(struct qxl_device *qdev);
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
>  
>  /* qxl_dumb.c */
>  int qxl_mode_dumb_create(struct drm_file *file_priv,
> @@ -445,8 +446,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
>  struct drm_gem_object *qxl_gem_prime_import_sg_table(
>  	struct drm_device *dev, struct dma_buf_attachment *attach,
>  	struct sg_table *sgt);
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> +			  struct dma_buf_map *map);
>  int qxl_gem_prime_mmap(struct drm_gem_object *obj,
>  				struct vm_area_struct *vma);
>  
> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
> index d3635e3e3267..2d8ae3b10b1c 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.c
> +++ b/drivers/gpu/drm/qxl/qxl_object.c
> @@ -23,10 +23,12 @@
>   *          Alon Levy
>   */
>  
> +#include <linux/dma-buf-map.h>
> +#include <linux/io-mapping.h>
> +
>  #include "qxl_drv.h"
>  #include "qxl_object.h"
>  
> -#include <linux/io-mapping.h>
>  static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
>  {
>  	struct qxl_bo *bo;
> @@ -150,24 +152,22 @@ int qxl_bo_create(struct qxl_device *qdev,
>  	return 0;
>  }
>  
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
> +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
>  {
> -	bool is_iomem;
>  	int r;
>  
>  	if (bo->kptr) {
> -		if (ptr)
> -			*ptr = bo->kptr;
>  		bo->map_count++;
> -		return 0;
> +		goto out;
>  	}
>  	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
>  	if (r)
>  		return r;
> -	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
> -	if (ptr)
> -		*ptr = bo->kptr;
>  	bo->map_count = 1;
> +	bo->kptr = bo->kmap.virtual;
> +
> +out:
> +	ttm_kmap_obj_to_dma_buf_map(&bo->kmap, map);
>  	return 0;
>  }
>  
> @@ -178,6 +178,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
>  	void *rptr;
>  	int ret;
>  	struct io_mapping *map;
> +	struct dma_buf_map bo_map;
>  
>  	if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
>  		map = qdev->vram_mapping;
> @@ -194,11 +195,11 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,

Uh, this fallback is wild. Not exactly sure this is a good idea or
anything, but also it's here already :-)
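
For readers skimming the diff: condensed from the surrounding hunks,
the fallback in question is an "atomic" kmap helper that quietly
degrades to a full, non-atomic kmap of the whole BO when the placement
has no io_mapping (a rough sketch, details elided):

	/* io_mapping-backed placements take the atomic path ... */
	if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
		map = qdev->vram_mapping;
	/* ... */

	/* ... everything else falls back to a full, non-atomic kmap */
	ret = qxl_bo_kmap(bo, &bo_map);
	if (ret)
		return NULL;
	return bo_map.vaddr + page_offset * PAGE_SIZE;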

>  		return rptr;
>  	}
>  
> -	ret = qxl_bo_kmap(bo, &rptr);
> +	ret = qxl_bo_kmap(bo, &bo_map);
>  	if (ret)
>  		return NULL;
>  
> -	rptr += page_offset * PAGE_SIZE;
> +	rptr = bo_map.vaddr + page_offset * PAGE_SIZE;
>  	return rptr;
>  }
>  
> diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
> index 09a5c818324d..ebf24c9d2bf2 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.h
> +++ b/drivers/gpu/drm/qxl/qxl_object.h
> @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
>  			 bool kernel, bool pinned, u32 domain,
>  			 struct qxl_surface *surf,
>  			 struct qxl_bo **bo_ptr);
> -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
> +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
>  extern void qxl_bo_kunmap(struct qxl_bo *bo);
>  void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
>  void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
> diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
> index 7d3816fca5a8..4aa949799446 100644
> --- a/drivers/gpu/drm/qxl/qxl_prime.c
> +++ b/drivers/gpu/drm/qxl/qxl_prime.c
> @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
>  	return ERR_PTR(-ENOSYS);
>  }
>  
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct qxl_bo *bo = gem_to_qxl_bo(obj);
> -	void *ptr;
>  	int ret;
>  
> -	ret = qxl_bo_kmap(bo, &ptr);
> +	ret = qxl_bo_kmap(bo, map);
>  	if (ret < 0)
> -		return ERR_PTR(ret);
> +		return ret;
>  
> -	return ptr;
> +	return 0;
>  }
>  
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> +			  struct dma_buf_map *map)
>  {
>  	struct qxl_bo *bo = gem_to_qxl_bo(obj);
>  
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 0ccd7213e41f..ac51517bdfcd 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -40,8 +40,8 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
>  struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
>  int radeon_gem_prime_pin(struct drm_gem_object *obj);
>  void radeon_gem_prime_unpin(struct drm_gem_object *obj);
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int radeon_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void radeon_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  static const struct drm_gem_object_funcs radeon_gem_object_funcs;
>  
> diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
> index b9de0e51c0be..a1a358de5448 100644
> --- a/drivers/gpu/drm/radeon/radeon_prime.c
> +++ b/drivers/gpu/drm/radeon/radeon_prime.c
> @@ -39,7 +39,7 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
>  	return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
>  }
>  
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
> +int radeon_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct radeon_bo *bo = gem_to_radeon_bo(obj);
>  	int ret;
> @@ -47,12 +47,13 @@ void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
>  	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
>  			  &bo->dma_buf_vmap);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
> +	ttm_kmap_obj_to_dma_buf_map(&bo->dma_buf_vmap, map);
>  
> -	return bo->dma_buf_vmap.virtual;
> +	return 0;
>  }
>  
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void radeon_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct radeon_bo *bo = gem_to_radeon_bo(obj);
>  
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 7d5ebb10323b..7971f57436dd 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
>  	return ERR_PTR(ret);
>  }
>  
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>  
> -	if (rk_obj->pages)
> -		return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> -			    pgprot_writecombine(PAGE_KERNEL));
> +	if (rk_obj->pages) {
> +		void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> +				  pgprot_writecombine(PAGE_KERNEL));
> +		if (!vaddr)
> +			return -ENOMEM;
> +		dma_buf_map_set_vaddr(map, vaddr);
> +		return 0;
> +	}
>  
>  	if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
> -		return NULL;
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
>  
> -	return rk_obj->kvaddr;
> +	return 0;
>  }
>  
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>  
>  	if (rk_obj->pages) {
> -		vunmap(vaddr);
> +		vunmap(map->vaddr);
>  		return;
>  	}
>  
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> index 7ffc541bea07..5a70a56cd406 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *
>  rockchip_gem_prime_import_sg_table(struct drm_device *dev,
>  				   struct dma_buf_attachment *attach,
>  				   struct sg_table *sg);
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  /* drm driver mmap file operations */
>  int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> index 744a8e337e41..6dc013f4b236 100644
> --- a/drivers/gpu/drm/tiny/cirrus.c
> +++ b/drivers/gpu/drm/tiny/cirrus.c
> @@ -17,6 +17,7 @@
>   */
>  
>  #include <linux/console.h>
> +#include <linux/dma-buf-map.h>
>  #include <linux/module.h>
>  #include <linux/pci.h>
>  
> @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
>  			       struct drm_rect *rect)
>  {
>  	struct cirrus_device *cirrus = to_cirrus(fb->dev);
> +	struct dma_buf_map map;
>  	void *vmap;
>  	int idx, ret;
>  
> @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
>  	if (!drm_dev_enter(&cirrus->dev, &idx))
>  		goto out;
>  
> -	ret = -ENOMEM;
> -	vmap = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (!vmap)
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (ret)
>  		goto out_dev_exit;
> +	vmap = map.vaddr;
>  
>  	if (cirrus->cpp == fb->format->cpp[0])
>  		drm_fb_memcpy_dstclip(cirrus->vram,
> @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
>  	else
>  		WARN_ON_ONCE("cpp mismatch");
>  
> -	drm_gem_shmem_vunmap(fb->obj[0], vmap);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>  	ret = 0;
>  
>  out_dev_exit:
> diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
> index cc397671f689..5865027a1667 100644
> --- a/drivers/gpu/drm/tiny/gm12u320.c
> +++ b/drivers/gpu/drm/tiny/gm12u320.c
> @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
>  {
>  	int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
>  	struct drm_framebuffer *fb;
> +	struct dma_buf_map map;
>  	void *vaddr;
>  	u8 *src;
>  
> @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
>  	y1 = gm12u320->fb_update.rect.y1;
>  	y2 = gm12u320->fb_update.rect.y2;
>  
> -	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (IS_ERR(vaddr)) {
> -		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (ret) {
> +		GM12U320_ERR("failed to vmap fb: %d\n", ret);
>  		goto put_fb;
>  	}
> +	vaddr = map.vaddr;
>  
>  	if (fb->obj[0]->import_attach) {
>  		ret = dma_buf_begin_cpu_access(
> @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
>  			GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
>  	}
>  vunmap:
> -	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>  put_fb:
>  	drm_framebuffer_put(fb);
>  	gm12u320->fb_update.fb = NULL;
> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> index fef43f4e3bac..9c8ace1aa647 100644
> --- a/drivers/gpu/drm/udl/udl_modeset.c
> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
>  	struct urb *urb;
>  	struct drm_rect clip;
>  	int log_bpp;
> +	struct dma_buf_map map;
>  	void *vaddr;
>  
>  	ret = udl_log_cpp(fb->format->cpp[0]);
> @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
>  			return ret;
>  	}
>  
> -	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (IS_ERR(vaddr)) {
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (ret) {
>  		DRM_ERROR("failed to vmap fb\n");
>  		goto out_dma_buf_end_cpu_access;
>  	}
> +	vaddr = map.vaddr;
>  
>  	urb = udl_get_urb(dev);
>  	if (!urb)
> @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
>  	ret = 0;
>  
>  out_drm_gem_shmem_vunmap:
> -	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>  out_dma_buf_end_cpu_access:
>  	if (import_attach) {
>  		tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index 4fcc0a542b8a..6040b9ec747f 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -9,6 +9,8 @@
>   *          Michael Thayer <michael.thayer@oracle.com,
>   *          Hans de Goede <hdegoede@redhat.com>
>   */
> +
> +#include <linux/dma-buf-map.h>
>  #include <linux/export.h>
>  
>  #include <drm/drm_atomic.h>
> @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>  	u32 height = plane->state->crtc_h;
>  	size_t data_size, mask_size;
>  	u32 flags;
> +	struct dma_buf_map map;
> +	int ret;
>  	u8 *src;
>  
>  	/*
> @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>  
>  	vbox_crtc->cursor_enabled = true;
>  
> -	src = drm_gem_vram_vmap(gbo);
> -	if (IS_ERR(src)) {
> +	ret = drm_gem_vram_vmap(gbo, &map);
> +	if (ret) {
>  		/*
>  		 * BUG: we should have pinned the BO in prepare_fb().
>  		 */
> @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>  		DRM_WARN("Could not map cursor bo, skipping update\n");
>  		return;
>  	}

I don't think digging around in the pointer is a good idea; imo this
should get a

	/* FIXME: Use mapping abstraction properly */

or similar.
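
Something like this, perhaps (hypothetical; copy_cursor_image() would
have to learn about struct dma_buf_map before the raw pointer access
can go away for good):

	/* FIXME: Use mapping abstraction properly */
	src = map.vaddr;	/* I/O-memory BOs would need map.vaddr_iomem */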

> +	src = map.vaddr;
>  
>  	/*
>  	 * The mask must be calculated based on the alpha
> @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>  	data_size = width * height * 4 + mask_size;
>  
>  	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> -	drm_gem_vram_vunmap(gbo, src);
> +	drm_gem_vram_vunmap(gbo, &map);
>  
>  	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
>  		VBOX_MOUSE_POINTER_ALPHA;
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index f432278173cd..250266fb437e 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -786,16 +786,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>  	return drm_gem_cma_prime_mmap(obj, vma);
>  }
>  
> -void *vc4_prime_vmap(struct drm_gem_object *obj)
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct vc4_bo *bo = to_vc4_bo(obj);
>  
>  	if (bo->validated_shader) {
>  		DRM_DEBUG("mmaping of shader BOs not allowed.\n");
> -		return ERR_PTR(-EINVAL);
> +		return -EINVAL;
>  	}
>  
> -	return drm_gem_cma_prime_vmap(obj);
> +	return drm_gem_cma_prime_vmap(obj, map);
>  }
>  
>  struct drm_gem_object *
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index a22478a35199..6af453c84777 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -804,7 +804,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
>  struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
>  						 struct dma_buf_attachment *attach,
>  						 struct sg_table *sgt);
> -void *vc4_prime_vmap(struct drm_gem_object *obj);
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  int vc4_bo_cache_init(struct drm_device *dev);
>  void vc4_bo_cache_destroy(struct drm_device *dev);
>  int vc4_bo_inc_usecnt(struct vc4_bo *bo);
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index fa54a6d1403d..b2aa26e1e4a2 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
>  	return &obj->base;
>  }
>  
> -static void *vgem_prime_vmap(struct drm_gem_object *obj)
> +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>  	long n_pages = obj->size >> PAGE_SHIFT;
>  	struct page **pages;
> +	void *vaddr;
>  
>  	pages = vgem_pin_pages(bo);
>  	if (IS_ERR(pages))
> -		return NULL;
> +		return PTR_ERR(pages);
> +
> +	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> +	if (!vaddr)
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, vaddr);
>  
> -	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> +	return 0;
>  }
>  
> -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>  
> -	vunmap(vaddr);
> +	vunmap(map->vaddr);
>  	vgem_unpin_pages(bo);
>  }
>  
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4f34ef34ba60..74db5a840bed 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>  	return gem_mmap_obj(xen_obj, vma);
>  }
>  
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
>  {
>  	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> +	void *vaddr;
>  
>  	if (!xen_obj->pages)
> -		return NULL;
> +		return -ENOMEM;
>  
>  	/* Please see comment in gem_mmap_obj on mapping and attributes. */
> -	return vmap(xen_obj->pages, xen_obj->num_pages,
> -		    VM_MAP, PAGE_KERNEL);
> +	vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
> +		     VM_MAP, PAGE_KERNEL);
> +	if (!vaddr)
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, vaddr);
> +
> +	return 0;
>  }
>  
>  void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> -				    void *vaddr)
> +				    struct dma_buf_map *map)
>  {
> -	vunmap(vaddr);
> +	vunmap(map->vaddr);
>  }
>  
>  int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> index a39675fa31b2..a4e67d0a149c 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> @@ -12,6 +12,7 @@
>  #define __XEN_DRM_FRONT_GEM_H
>  
>  struct dma_buf_attachment;
> +struct dma_buf_map;
>  struct drm_device;
>  struct drm_gem_object;
>  struct file;
> @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
>  
>  int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>  
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
> +				 struct dma_buf_map *map);
>  
>  void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> -				    void *vaddr);
> +				    struct dma_buf_map *map);
>  
>  int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
>  				 struct vm_area_struct *vma);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index c38dd35da00b..5e6daa1c982f 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -39,6 +39,7 @@
>  
>  #include <drm/drm_vma_manager.h>
>  
> +struct dma_buf_map;
>  struct drm_gem_object;
>  
>  /**
> @@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
>  	 *
>  	 * This callback is optional.
>  	 */
> -	void *(*vmap)(struct drm_gem_object *obj);
> +	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  	/**
>  	 * @vunmap:
> @@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
>  	 *
>  	 * This callback is optional.
>  	 */
> -	void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
> +	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  	/**
>  	 * @mmap:
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index 2bfa2502607a..34a7f72879c5 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -103,8 +103,8 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>  				  struct sg_table *sgt);
>  int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>  			   struct vm_area_struct *vma);
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  struct drm_gem_object *
>  drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5381f0c8cf6f..3449a0353fe0 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
>  void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
>  int drm_gem_shmem_pin(struct drm_gem_object *obj);
>  void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
>  
> diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
> index 128f88174d32..0c43b8f17ee9 100644
> --- a/include/drm/drm_gem_vram_helper.h
> +++ b/include/drm/drm_gem_vram_helper.h
> @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
>  s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
>  int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
>  int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
>  
>  int drm_gem_vram_fill_create_dumb(struct drm_file *file,
>  				  struct drm_device *dev,
> -- 
> 2.28.0

Bit of a big patch, but I can't think of a way to split it up either.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH v3 4/7] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map
  2020-09-29 15:14 ` [PATCH v3 4/7] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map Thomas Zimmermann
@ 2020-10-02 13:04   ` Daniel Vetter
  0 siblings, 0 replies; 33+ messages in thread
From: Daniel Vetter @ 2020-10-02 13:04 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
	chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
	emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
	oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
	kgene, bskeggs, linux+etnaviv, spice-devel, alyssa.rosenzweig,
	maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
	christian.gmeiner, xen-devel, virtualization, sean, apaneers,
	linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
	sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
	alexander.deucher, linux-media, christian.koenig, l.stach

On Tue, Sep 29, 2020 at 05:14:34PM +0200, Thomas Zimmermann wrote:
> GEM's vmap and vunmap interfaces now wrap memory pointers in struct
> dma_buf_map.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/drm_client.c   | 18 +++++++++++-------
>  drivers/gpu/drm/drm_gem.c      | 28 ++++++++++++++--------------
>  drivers/gpu/drm/drm_internal.h |  5 +++--
>  drivers/gpu/drm/drm_prime.c    | 14 ++++----------
>  4 files changed, 32 insertions(+), 33 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> index 495f47d23d87..ac0082bed966 100644
> --- a/drivers/gpu/drm/drm_client.c
> +++ b/drivers/gpu/drm/drm_client.c
> @@ -3,6 +3,7 @@
>   * Copyright 2018 Noralf Trønnes
>   */
>  
> +#include <linux/dma-buf-map.h>
>  #include <linux/list.h>
>  #include <linux/module.h>
>  #include <linux/mutex.h>
> @@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
>   */
>  void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>  {
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	if (buffer->vaddr)
>  		return buffer->vaddr;
> @@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>  	 * fd_install step out of the driver backend hooks, to make that
>  	 * final step optional for internal users.
>  	 */
> -	vaddr = drm_gem_vmap(buffer->gem);
> -	if (IS_ERR(vaddr))
> -		return vaddr;
> +	ret = drm_gem_vmap(buffer->gem, &map);
> +	if (ret)
> +		return ERR_PTR(ret);
>  
> -	buffer->vaddr = vaddr;
> +	buffer->vaddr = map.vaddr;
>  
> -	return vaddr;
> +	return map.vaddr;
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vmap);
>  
> @@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
>   */
>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
>  {
> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
> +	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
> +
> +	drm_gem_vunmap(buffer->gem, &map);
>  	buffer->vaddr = NULL;
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 0c4a66dea5c2..f2b2f37d41c4 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1205,32 +1205,32 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>  		obj->funcs->unpin(obj);
>  }
>  
> -void *drm_gem_vmap(struct drm_gem_object *obj)
> +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
> -	struct dma_buf_map map;
>  	int ret;
>  
> -	if (!obj->funcs->vmap) {
> -		return ERR_PTR(-EOPNOTSUPP);
> +	if (!obj->funcs->vmap)
> +		return -EOPNOTSUPP;
>  
> -	ret = obj->funcs->vmap(obj, &map);
> +	ret = obj->funcs->vmap(obj, map);
>  	if (ret)
> -		return ERR_PTR(ret);
> -	else if (dma_buf_map_is_null(&map))
> -		return ERR_PTR(-ENOMEM);
> +		return ret;
> +	else if (dma_buf_map_is_null(map))
> +		return -ENOMEM;
>  
> -	return map.vaddr;
> +	return 0;
>  }
>  
> -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
> -	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> -
> -	if (!vaddr)
> +	if (dma_buf_map_is_null(map))
>  		return;
>  
>  	if (obj->funcs->vunmap)
> -		obj->funcs->vunmap(obj, &map);
> +		obj->funcs->vunmap(obj, map);
> +
> +	/* Always set the mapping to NULL. Callers may rely on this. */
> +	dma_buf_map_clear(map);
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
> index b65865c630b0..58832d75a9bd 100644
> --- a/drivers/gpu/drm/drm_internal.h
> +++ b/drivers/gpu/drm/drm_internal.h
> @@ -33,6 +33,7 @@
>  
>  struct dentry;
>  struct dma_buf;
> +struct dma_buf_map;
>  struct drm_connector;
>  struct drm_crtc;
>  struct drm_framebuffer;
> @@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
>  
>  int drm_gem_pin(struct drm_gem_object *obj);
>  void drm_gem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_vmap(struct drm_gem_object *obj);
> -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  /* drm_debugfs.c drm_debugfs_crc.c */
>  #if defined(CONFIG_DEBUG_FS)
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 89e2a2496734..cb8fbeeb731b 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
>   *
>   * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap
>   * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
> + * The kernel virtual address is returned in map.
>   *
> - * Returns the kernel virtual address or NULL on failure.
> + * Returns 0 on success or a negative errno code otherwise.
>   */
>  int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *obj = dma_buf->priv;
> -	void *vaddr;
>  
> -	vaddr = drm_gem_vmap(obj);
> -	if (IS_ERR(vaddr))
> -		return PTR_ERR(vaddr);
> -
> -	dma_buf_map_set_vaddr(map, vaddr);
> -
> -	return 0;
> +	return drm_gem_vmap(obj, map);
>  }
>  EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
>  
> @@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *obj = dma_buf->priv;
>  
> -	drm_gem_vunmap(obj, map->vaddr);
> +	drm_gem_vunmap(obj, map);
>  }
>  EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);

Some of the transitional stuff disappearing!
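
To spell out the new internal convention (a sketch distilled from the
hunks above, not extra code in the patch): callers hand in a struct
dma_buf_map and check an errno, and drm_gem_vunmap() clears the
mapping for them:

	struct dma_buf_map map;
	int ret;

	ret = drm_gem_vmap(obj, &map);
	if (ret)
		return ret;
	/* ... access map.vaddr or, for I/O memory, map.vaddr_iomem ... */
	drm_gem_vunmap(obj, &map);	/* map is cleared afterwards */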

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>  
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH v3 5/7] drm/gem: Store client buffer mappings as struct dma_buf_map
  2020-09-29 15:14 ` [PATCH v3 5/7] drm/gem: Store client buffer mappings as " Thomas Zimmermann
@ 2020-10-02 13:05   ` Daniel Vetter
  0 siblings, 0 replies; 33+ messages in thread
From: Daniel Vetter @ 2020-10-02 13:05 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
	chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
	emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
	oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
	kgene, bskeggs, linux+etnaviv, spice-devel, alyssa.rosenzweig,
	maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
	christian.gmeiner, xen-devel, virtualization, sean, apaneers,
	linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
	sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
	alexander.deucher, linux-media, christian.koenig, l.stach

On Tue, Sep 29, 2020 at 05:14:35PM +0200, Thomas Zimmermann wrote:
> Kernel DRM clients now store their framebuffer address in an instance
> of struct dma_buf_map. Depending on the buffer's location, the address
> refers to system or I/O memory.
> 
> Callers of drm_client_buffer_vmap() receive a copy of the value in
> the call's supplied argument. It can be accessed and modified with
> dma_buf_map interfaces.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/drm_client.c    | 34 +++++++++++++++++++--------------
>  drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++---------
>  include/drm/drm_client.h        |  7 ++++---
>  3 files changed, 38 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> index ac0082bed966..fe573acf1067 100644
> --- a/drivers/gpu/drm/drm_client.c
> +++ b/drivers/gpu/drm/drm_client.c
> @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
>  {
>  	struct drm_device *dev = buffer->client->dev;
>  
> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
> +	drm_gem_vunmap(buffer->gem, &buffer->map);
>  
>  	if (buffer->gem)
>  		drm_gem_object_put(buffer->gem);
> @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
>  /**
>   * drm_client_buffer_vmap - Map DRM client buffer into address space
>   * @buffer: DRM client buffer
> + * @map_copy: Returns the mapped memory's address
>   *
>   * This function maps a client buffer into kernel address space. If the
> - * buffer is already mapped, it returns the mapping's address.
> + * buffer is already mapped, it returns the existing mapping's address.
>   *
>   * Client buffer mappings are not ref'counted. Each call to
>   * drm_client_buffer_vmap() should be followed by a call to
>   * drm_client_buffer_vunmap(); or the client buffer should be mapped
>   * throughout its lifetime.
>   *
> + * The returned address is a copy of the internal value. In contrast to
> + * other vmap interfaces, you don't need it for the client's vunmap
> + * function. So you can modify it at will during blit and draw operations.
> + *
>   * Returns:
> - *	The mapped memory's address
> + *	0 on success, or a negative errno code otherwise.
>   */
> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> +int
> +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
>  {
> -	struct dma_buf_map map;
> +	struct dma_buf_map *map = &buffer->map;
>  	int ret;
>  
> -	if (buffer->vaddr)
> -		return buffer->vaddr;
> +	if (dma_buf_map_is_set(map))
> +		goto out;
>  
>  	/*
>  	 * FIXME: The dependency on GEM here isn't required, we could
> @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>  	 * fd_install step out of the driver backend hooks, to make that
>  	 * final step optional for internal users.
>  	 */
> -	ret = drm_gem_vmap(buffer->gem, &map);
> +	ret = drm_gem_vmap(buffer->gem, map);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
>  
> -	buffer->vaddr = map.vaddr;
> +out:
> +	*map_copy = *map;
>  
> -	return map.vaddr;
> +	return 0;
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vmap);
>  
> @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
>   */
>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
>  {
> -	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
> +	struct dma_buf_map *map = &buffer->map;
>  
> -	drm_gem_vunmap(buffer->gem, &map);
> -	buffer->vaddr = NULL;
> +	drm_gem_vunmap(buffer->gem, map);
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
>  
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 8697554ccd41..343a292f2c7c 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -394,7 +394,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
>  	unsigned int cpp = fb->format->cpp[0];
>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> -	void *dst = fb_helper->buffer->vaddr + offset;
> +	void *dst = fb_helper->buffer->map.vaddr + offset;
>  	size_t len = (clip->x2 - clip->x1) * cpp;
>  	unsigned int y;
>  
> @@ -416,7 +416,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  	struct drm_clip_rect *clip = &helper->dirty_clip;
>  	struct drm_clip_rect clip_copy;
>  	unsigned long flags;
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	spin_lock_irqsave(&helper->dirty_lock, flags);
>  	clip_copy = *clip;
> @@ -429,8 +430,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  
>  		/* Generic fbdev uses a shadow buffer */
>  		if (helper->buffer) {
> -			vaddr = drm_client_buffer_vmap(helper->buffer);
> -			if (IS_ERR(vaddr))
> +			ret = drm_client_buffer_vmap(helper->buffer, &map);
> +			if (ret)
>  				return;
>  			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
>  		}
> @@ -2076,7 +2077,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>  	struct drm_framebuffer *fb;
>  	struct fb_info *fbi;
>  	u32 format;
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
>  		    sizes->surface_width, sizes->surface_height,
> @@ -2112,11 +2114,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>  		fb_deferred_io_init(fbi);
>  	} else {
>  		/* buffer is mapped for HW framebuffer */
> -		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
> -		if (IS_ERR(vaddr))
> -			return PTR_ERR(vaddr);
> +		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
> +		if (ret)
> +			return ret;
> +		if (map.is_iomem)
> +			fbi->screen_base = map.vaddr_iomem;
> +		else
> +			fbi->screen_buffer = map.vaddr;
>  
> -		fbi->screen_buffer = vaddr;
>  		/* Shamelessly leak the physical address to user-space */
>  #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
>  		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
> index 7aaea665bfc2..f07f2fb02e75 100644
> --- a/include/drm/drm_client.h
> +++ b/include/drm/drm_client.h
> @@ -3,6 +3,7 @@
>  #ifndef _DRM_CLIENT_H_
>  #define _DRM_CLIENT_H_
>  
> +#include <linux/dma-buf-map.h>
>  #include <linux/lockdep.h>
>  #include <linux/mutex.h>
>  #include <linux/types.h>
> @@ -141,9 +142,9 @@ struct drm_client_buffer {
>  	struct drm_gem_object *gem;
>  
>  	/**
> -	 * @vaddr: Virtual address for the buffer
> +	 * @map: Virtual address for the buffer
>  	 */
> -	void *vaddr;
> +	struct dma_buf_map map;
>  
>  	/**
>  	 * @fb: DRM framebuffer
> @@ -155,7 +156,7 @@ struct drm_client_buffer *
>  drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
>  void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
>  int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
> +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
>  
>  int drm_client_modeset_create(struct drm_client_dev *client);

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
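
The copy semantics in a nutshell (a sketch based on the hunks above;
the offset is illustrative and dma_buf_map_incr() only arrives with
patch 6): the caller may advance its copy of the mapping during a blit
without corrupting the buffer's internal state, and vunmap doesn't
need the address back:

	struct dma_buf_map map;
	int ret;

	ret = drm_client_buffer_vmap(buffer, &map);
	if (ret)
		return ret;
	dma_buf_map_incr(&map, offset);	/* fine, map is only a copy */
	/* ... blit ... */
	drm_client_buffer_vunmap(buffer);	/* uses buffer->map internally */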

> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory
  2020-09-29 15:14 ` [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory Thomas Zimmermann
@ 2020-10-02 18:05   ` Daniel Vetter
  2020-10-02 18:44     ` Daniel Vetter
  0 siblings, 1 reply; 33+ messages in thread
From: Daniel Vetter @ 2020-10-02 18:05 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
	chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
	emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
	oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
	kgene, bskeggs, linux+etnaviv, spice-devel, alyssa.rosenzweig,
	maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
	christian.gmeiner, xen-devel, virtualization, sean, apaneers,
	linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
	sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
	alexander.deucher, linux-media, christian.koenig, l.stach

On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
> 
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions.
> 
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
> 
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is not required any longer. The patch removes the
> respective code from both bochs and fbdev.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
>  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
>  include/drm/drm_mode_config.h     |  12 --
>  include/linux/dma-buf-map.h       |  72 ++++++++--
>  4 files changed, 265 insertions(+), 37 deletions(-)
> 
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
>  	bochs->dev->mode_config.preferred_depth = 24;
>  	bochs->dev->mode_config.prefer_shadow = 0;
>  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> -	bochs->dev->mode_config.fbdev_use_iomem = true;
>  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>  
>  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 343a292f2c7c..f345a314a437 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
>  }
>  
>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> -					  struct drm_clip_rect *clip)
> +					  struct drm_clip_rect *clip,
> +					  struct dma_buf_map *dst)
>  {
>  	struct drm_framebuffer *fb = fb_helper->fb;
>  	unsigned int cpp = fb->format->cpp[0];
>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> -	void *dst = fb_helper->buffer->map.vaddr + offset;
>  	size_t len = (clip->x2 - clip->x1) * cpp;
>  	unsigned int y;
>  
> -	for (y = clip->y1; y < clip->y2; y++) {
> -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> -			memcpy(dst, src, len);
> -		else
> -			memcpy_toio((void __iomem *)dst, src, len);
> +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>  
> +	for (y = clip->y1; y < clip->y2; y++) {
> +		dma_buf_map_memcpy_to(dst, src, len);
> +		dma_buf_map_incr(dst, fb->pitches[0]);
>  		src += fb->pitches[0];
> -		dst += fb->pitches[0];
>  	}
>  }
>  
> @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  			ret = drm_client_buffer_vmap(helper->buffer, &map);
>  			if (ret)
>  				return;
> -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
>  		}
> +
>  		if (helper->fb->funcs->dirty)
>  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
>  						 &clip_copy, 1);
> @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
>  }
>  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>  
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> +				      size_t count, loff_t *ppos)
> +{
> +	unsigned long p = *ppos;
> +	u8 *dst;
> +	u8 __iomem *src;
> +	int c, err = 0;
> +	unsigned long total_size;
> +	unsigned long alloc_size;
> +	ssize_t ret = 0;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	total_size = info->screen_size;
> +
> +	if (total_size == 0)
> +		total_size = info->fix.smem_len;
> +
> +	if (p >= total_size)
> +		return 0;
> +
> +	if (count >= total_size)
> +		count = total_size;
> +
> +	if (count + p > total_size)
> +		count = total_size - p;
> +
> +	src = (u8 __iomem *)(info->screen_base + p);
> +
> +	alloc_size = min(count, PAGE_SIZE);
> +
> +	dst = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!dst)
> +		return -ENOMEM;
> +
> +	while (count) {
> +		c = min(count, alloc_size);
> +
> +		memcpy_fromio(dst, src, c);
> +		if (copy_to_user(buf, dst, c)) {
> +			err = -EFAULT;
> +			break;
> +		}
> +
> +		src += c;
> +		*ppos += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(dst);
> +
> +	if (err)
> +		return err;
> +
> +	return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> +				       size_t count, loff_t *ppos)
> +{
> +	unsigned long p = *ppos;
> +	u8 *src;
> +	u8 __iomem *dst;
> +	int c, err = 0;
> +	unsigned long total_size;
> +	unsigned long alloc_size;
> +	ssize_t ret = 0;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	total_size = info->screen_size;
> +
> +	if (total_size == 0)
> +		total_size = info->fix.smem_len;
> +
> +	if (p > total_size)
> +		return -EFBIG;
> +
> +	if (count > total_size) {
> +		err = -EFBIG;
> +		count = total_size;
> +	}
> +
> +	if (count + p > total_size) {
> +		/*
> +		 * The framebuffer is too small. We do the
> +		 * copy operation, but return an error code
> +		 * afterwards. Taken from fbdev.
> +		 */
> +		if (!err)
> +			err = -ENOSPC;
> +		count = total_size - p;
> +	}
> +
> +	alloc_size = min(count, PAGE_SIZE);
> +
> +	src = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!src)
> +		return -ENOMEM;
> +
> +	dst = (u8 __iomem *)(info->screen_base + p);
> +
> +	while (count) {
> +		c = min(count, alloc_size);
> +
> +		if (copy_from_user(src, buf, c)) {
> +			err = -EFAULT;
> +			break;
> +		}
> +		memcpy_toio(dst, src, c);
> +
> +		dst += c;
> +		*ppos += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(src);
> +
> +	if (err)
> +		return err;
> +
> +	return ret;
> +}
> +
>  /**
>   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
>   * @info: fbdev registered by the helper
> @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>  		return -ENODEV;
>  }
>  
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> +				 size_t count, loff_t *ppos)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		return drm_fb_helper_sys_read(info, buf, count, ppos);
> +	else
> +		return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> +				  size_t count, loff_t *ppos)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		return drm_fb_helper_sys_write(info, buf, count, ppos);
> +	else
> +		return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> +				  const struct fb_fillrect *rect)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_fillrect(info, rect);
> +	else
> +		drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> +				  const struct fb_copyarea *area)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_copyarea(info, area);
> +	else
> +		drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> +				   const struct fb_image *image)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_imageblit(info, image);
> +	else
> +		drm_fb_helper_cfb_imageblit(info, image);
> +}

I think a todo entry to make the new generic functions the real ones,
so that drivers stop using the sys/cfb ones, would be a good addition.

> +
>  static const struct fb_ops drm_fbdev_fb_ops = {
>  	.owner		= THIS_MODULE,
>  	DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
>  	.fb_release	= drm_fbdev_fb_release,
>  	.fb_destroy	= drm_fbdev_fb_destroy,
>  	.fb_mmap	= drm_fbdev_fb_mmap,
> -	.fb_read	= drm_fb_helper_sys_read,
> -	.fb_write	= drm_fb_helper_sys_write,
> -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
> -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
> -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
> +	.fb_read	= drm_fbdev_fb_read,
> +	.fb_write	= drm_fbdev_fb_write,
> +	.fb_fillrect	= drm_fbdev_fb_fillrect,
> +	.fb_copyarea	= drm_fbdev_fb_copyarea,
> +	.fb_imageblit	= drm_fbdev_fb_imageblit,
>  };
>  
>  static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
>  	 */
>  	bool prefer_shadow_fbdev;
>  
> -	/**
> -	 * @fbdev_use_iomem:
> -	 *
> -	 * Set to true if framebuffer reside in iomem.
> -	 * When set to true memcpy_toio() is used when copying the framebuffer in
> -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> -	 *
> -	 * FIXME: This should be replaced with a per-mapping is_iomem
> -	 * flag (like ttm does), and then used everywhere in fbdev code.
> -	 */
> -	bool fbdev_use_iomem;
> -
>  	/**
>  	 * @quirk_addfb_prefer_xbgr_30bpp:
>  	 *
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h

I think the below should be split out as a prep patch.

> index 2e8bbecb5091..6ca0f304dda2 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -32,6 +32,14 @@
>   * accessing the buffer. Use the returned instance and the helper functions
>   * to access the buffer's memory in the correct way.
>   *
> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> + * actually independent from the dma-buf infrastructure. When sharing buffers
> + * among devices, drivers have to know the location of the memory to access
> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> + * solves this problem for dma-buf and its users. If other drivers or
> + * sub-systems require similar functionality, the type could be generalized
> + * and moved to a more prominent header file.
> + *
>   * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
>   * considered bad style. Rather than accessing its fields directly, use one
>   * of the provided helper functions, or implement your own. For example,
> @@ -51,6 +59,14 @@
>   *
>   *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeef);
>   *
> + * Instances of struct dma_buf_map do not have to be cleaned up, but
> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> + * always refer to system memory.
> + *
> + * .. code-block:: c
> + *
> + *	dma_buf_map_clear(&map);
> + *
>   * Test if a mapping is valid with either dma_buf_map_is_set() or
>   * dma_buf_map_is_null().
>   *
> @@ -73,17 +89,19 @@
>   *	if (dma_buf_map_is_equal(&sys_map, &io_map))
>   *		// always false
>   *
> - * Instances of struct dma_buf_map do not have to be cleaned up, but
> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> - * always refer to system memory.
> + * A set up instance of struct dma_buf_map can be used to access or manipulate
> + * the buffer memory. Depending on the location of the memory, the provided
> + * helpers will pick the correct operations. Data can be copied into the memory
> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> + * dma_buf_map_incr().
>   *
> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> - * actually independent from the dma-buf infrastructure. When sharing buffers
> - * among devices, drivers have to know the location of the memory to access
> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> - * solves this problem for dma-buf and its users. If other drivers or
> - * sub-systems require similar functionality, the type could be generalized
> - * and moved to a more prominent header file.
> + * .. code-block:: c
> + *
> + *	const void *src = ...; // source buffer
> + *	size_t len = ...; // length of src
> + *
> + *	dma_buf_map_memcpy_to(&map, src, len);
> + *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
>   */
>  
>  /**
> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
>  	}
>  }
>  
> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst:	The dma-buf mapping structure
> + * @src:	The source buffer
> + * @len:	The number of bytes in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> +	if (dst->is_iomem)
> +		memcpy_toio(dst->vaddr_iomem, src, len);
> +	else
> +		memcpy(dst->vaddr, src, len);
> +}
> +
> +/**
> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> + * @map:	The dma-buf mapping structure
> + * @incr:	The number of bytes to increment
> + *
> + * Increments the address stored in a dma-buf mapping. Depending on the
> + * buffer's location, the correct value will be updated.
> + */
> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> +{
> +	if (map->is_iomem)
> +		map->vaddr_iomem += incr;
> +	else
> +		map->vaddr += incr;
> +}
> +
>  #endif /* __DMA_BUF_MAP_H__ */
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory
  2020-10-02 18:05   ` Daniel Vetter
@ 2020-10-02 18:44     ` Daniel Vetter
  2020-10-08  9:25       ` Thomas Zimmermann
  0 siblings, 1 reply; 33+ messages in thread
From: Daniel Vetter @ 2020-10-02 18:44 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Luben Tuikov, Heiko Stübner, Dave Airlie, Nouveau Dev,
	Linus Walleij, dri-devel, Wilson, Chris, Melissa Wen, Anholt,
	Eric, Huang Rui, Gerd Hoffmann, Sam Ravnborg, Sumit Semwal,
	Emil Velikov, Rob Herring, linux-samsung-soc, Joonyoung Shim,
	lima, Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Kukjin Kim, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	Maarten Lankhorst, The etnaviv authors, Maxime Ripard, Inki Dae,
	Hans de Goede, Christian Gmeiner,
	moderated list:DRM DRIVERS FOR XEN, open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM,
	moderated list:DMA BUFFER SHARING FRAMEWORK, amd-gfx list,
	Tomeu Vizoso, Seung-Woo Kim, Sandy Huang, Kyungmin Park,
	Qinglang Miao, Qiang Yu, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK, Christian König,
	Lucas Stach

On Fri, Oct 2, 2020 at 8:05 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the
> > respective fb_sys_ or fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the
> > respective code from both bochs and fbdev.
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Argh, I accidentally hit send before finishing this ...

> > ---
> >  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
> >  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
> >  include/drm/drm_mode_config.h     |  12 --
> >  include/linux/dma-buf-map.h       |  72 ++++++++--
> >  4 files changed, 265 insertions(+), 37 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > index 13d0d04c4457..853081d186d5 100644
> > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> >       bochs->dev->mode_config.preferred_depth = 24;
> >       bochs->dev->mode_config.prefer_shadow = 0;
> >       bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > -     bochs->dev->mode_config.fbdev_use_iomem = true;
> >       bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
> >
> >       bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > index 343a292f2c7c..f345a314a437 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> >  }
> >
> >  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> > -                                       struct drm_clip_rect *clip)
> > +                                       struct drm_clip_rect *clip,
> > +                                       struct dma_buf_map *dst)
> >  {
> >       struct drm_framebuffer *fb = fb_helper->fb;
> >       unsigned int cpp = fb->format->cpp[0];
> >       size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> >       void *src = fb_helper->fbdev->screen_buffer + offset;
> > -     void *dst = fb_helper->buffer->map.vaddr + offset;
> >       size_t len = (clip->x2 - clip->x1) * cpp;
> >       unsigned int y;
> >
> > -     for (y = clip->y1; y < clip->y2; y++) {
> > -             if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > -                     memcpy(dst, src, len);
> > -             else
> > -                     memcpy_toio((void __iomem *)dst, src, len);
> > +     dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> >
> > +     for (y = clip->y1; y < clip->y2; y++) {
> > +             dma_buf_map_memcpy_to(dst, src, len);
> > +             dma_buf_map_incr(dst, fb->pitches[0]);
> >               src += fb->pitches[0];
> > -             dst += fb->pitches[0];
> >       }
> >  }
> >
> > @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> >                       ret = drm_client_buffer_vmap(helper->buffer, &map);
> >                       if (ret)
> >                               return;
> > -                     drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > +                     drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> >               }
> > +
> >               if (helper->fb->funcs->dirty)
> >                       helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> >                                                &clip_copy, 1);
> > @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> >  }
> >  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > +                                   size_t count, loff_t *ppos)
> > +{
> > +     unsigned long p = *ppos;
> > +     u8 *dst;
> > +     u8 __iomem *src;
> > +     int c, err = 0;
> > +     unsigned long total_size;
> > +     unsigned long alloc_size;
> > +     ssize_t ret = 0;
> > +
> > +     if (info->state != FBINFO_STATE_RUNNING)
> > +             return -EPERM;
> > +
> > +     total_size = info->screen_size;
> > +
> > +     if (total_size == 0)
> > +             total_size = info->fix.smem_len;
> > +
> > +     if (p >= total_size)
> > +             return 0;
> > +
> > +     if (count >= total_size)
> > +             count = total_size;
> > +
> > +     if (count + p > total_size)
> > +             count = total_size - p;
> > +
> > +     src = (u8 __iomem *)(info->screen_base + p);
> > +
> > +     alloc_size = min(count, PAGE_SIZE);
> > +
> > +     dst = kmalloc(alloc_size, GFP_KERNEL);
> > +     if (!dst)
> > +             return -ENOMEM;
> > +
> > +     while (count) {
> > +             c = min(count, alloc_size);
> > +
> > +             memcpy_fromio(dst, src, c);
> > +             if (copy_to_user(buf, dst, c)) {
> > +                     err = -EFAULT;
> > +                     break;
> > +             }
> > +
> > +             src += c;
> > +             *ppos += c;
> > +             buf += c;
> > +             ret += c;
> > +             count -= c;
> > +     }
> > +
> > +     kfree(dst);
> > +
> > +     if (err)
> > +             return err;
> > +
> > +     return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > +                                    size_t count, loff_t *ppos)
> > +{
> > +     unsigned long p = *ppos;
> > +     u8 *src;
> > +     u8 __iomem *dst;
> > +     int c, err = 0;
> > +     unsigned long total_size;
> > +     unsigned long alloc_size;
> > +     ssize_t ret = 0;
> > +
> > +     if (info->state != FBINFO_STATE_RUNNING)
> > +             return -EPERM;
> > +
> > +     total_size = info->screen_size;
> > +
> > +     if (total_size == 0)
> > +             total_size = info->fix.smem_len;
> > +
> > +     if (p > total_size)
> > +             return -EFBIG;
> > +
> > +     if (count > total_size) {
> > +             err = -EFBIG;
> > +             count = total_size;
> > +     }
> > +
> > +     if (count + p > total_size) {
> > +             /*
> > +              * The framebuffer is too small. We do the
> > +              * copy operation, but return an error code
> > +              * afterwards. Taken from fbdev.
> > +              */
> > +             if (!err)
> > +                     err = -ENOSPC;
> > +             count = total_size - p;
> > +     }
> > +
> > +     alloc_size = min(count, PAGE_SIZE);
> > +
> > +     src = kmalloc(alloc_size, GFP_KERNEL);
> > +     if (!src)
> > +             return -ENOMEM;
> > +
> > +     dst = (u8 __iomem *)(info->screen_base + p);
> > +
> > +     while (count) {
> > +             c = min(count, alloc_size);
> > +
> > +             if (copy_from_user(src, buf, c)) {
> > +                     err = -EFAULT;
> > +                     break;
> > +             }
> > +             memcpy_toio(dst, src, c);
> > +
> > +             dst += c;
> > +             *ppos += c;
> > +             buf += c;
> > +             ret += c;
> > +             count -= c;
> > +     }
> > +
> > +     kfree(src);
> > +
> > +     if (err)
> > +             return err;
> > +
> > +     return ret;
> > +}

The duplication is a bit annoying here, but can't really be avoided. I
do think though we should maybe go a bit further, and have drm
implementations of this stuff instead of following fbdev concepts as
closely as possible. So here roughly:

- if we have a shadow fb, construct a dma_buf_map for that, otherwise
take the one from the driver
- have a full generic implementation using that one directly (and
checking size limits against the underlying gem buffer)
- ideally also with some testcases in the fbdev testcase we have (very
bare-bones right now) in igt

But I'm not really sure whether that's worth all the trouble. It's
just that the fbdev-ness here in this copied code sticks out a lot :-)
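
Rough sketch of what I mean - all names made up, there's no
dma_buf_map_memcpy_from() yet (it would be the read-direction twin of
dma_buf_map_memcpy_to()), and pinning/vmap lifetime is glossed over
entirely:

static ssize_t drm_fbdev_generic_read(struct fb_info *info,
				      char __user *buf,
				      size_t count, loff_t *ppos)
{
	struct drm_fb_helper *fb_helper = info->par;
	struct drm_client_buffer *buffer = fb_helper->buffer;
	struct dma_buf_map map = buffer->map; /* or the shadow fb's map */
	size_t total = buffer->gem->size; /* check against the gem bo size */
	u8 *tmp;
	ssize_t ret = 0;

	if (*ppos >= total)
		return 0;
	count = min_t(size_t, count, total - *ppos);

	tmp = kmalloc(min_t(size_t, count, PAGE_SIZE), GFP_KERNEL);
	if (!tmp)
		return -ENOMEM;

	dma_buf_map_incr(&map, *ppos); /* go to the start of the read */

	while (count) {
		size_t c = min_t(size_t, count, PAGE_SIZE);

		/* hypothetical helper; bounces through a kmalloc buffer
		 * because copy_to_user() can't take __iomem */
		dma_buf_map_memcpy_from(tmp, &map, c);
		if (copy_to_user(buf, tmp, c)) {
			if (!ret)
				ret = -EFAULT;
			break;
		}

		dma_buf_map_incr(&map, c);
		buf += c;
		*ppos += c;
		ret += c;
		count -= c;
	}

	kfree(tmp);
	return ret;
}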

> > +
> >  /**
> >   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> >   * @info: fbdev registered by the helper
> > @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> >               return -ENODEV;
> >  }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > +                              size_t count, loff_t *ppos)
> > +{
> > +     struct drm_fb_helper *fb_helper = info->par;
> > +     struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +             return drm_fb_helper_sys_read(info, buf, count, ppos);
> > +     else
> > +             return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > +                               size_t count, loff_t *ppos)
> > +{
> > +     struct drm_fb_helper *fb_helper = info->par;
> > +     struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +             return drm_fb_helper_sys_write(info, buf, count, ppos);
> > +     else
> > +             return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > +                               const struct fb_fillrect *rect)
> > +{
> > +     struct drm_fb_helper *fb_helper = info->par;
> > +     struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +             drm_fb_helper_sys_fillrect(info, rect);
> > +     else
> > +             drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > +                               const struct fb_copyarea *area)
> > +{
> > +     struct drm_fb_helper *fb_helper = info->par;
> > +     struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +             drm_fb_helper_sys_copyarea(info, area);
> > +     else
> > +             drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > +                                const struct fb_image *image)
> > +{
> > +     struct drm_fb_helper *fb_helper = info->par;
> > +     struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +             drm_fb_helper_sys_imageblit(info, image);
> > +     else
> > +             drm_fb_helper_cfb_imageblit(info, image);
> > +}

I think a todo.rst entry to make the new generic functions the real ones,
so that drivers stop using the sys/cfb ones, would be a good addition.
It's kinda covered by the move to the generic helpers, but maybe we
can convert a few more drivers over to these here. Would also allow us
to maybe flatten the code a bit and use more of the dma_buf_map stuff
directly (instead of reusing crusty fbdev code written 20 years ago or
so).
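
E.g. something like this for the dispatchers above (sketch only,
drm_fb_map_fillrect() is completely made up here) - the if/else ladder
collapses into just picking the right map:

static void drm_fbdev_fb_fillrect(struct fb_info *info,
				  const struct fb_fillrect *rect)
{
	struct drm_fb_helper *fb_helper = info->par;
	struct dma_buf_map map;

	if (drm_fbdev_use_shadow_fb(fb_helper))
		/* the shadow buffer always lives in system memory */
		dma_buf_map_set_vaddr(&map, info->screen_buffer);
	else
		map = fb_helper->buffer->map;

	/* hypothetical helper that checks map->is_iomem internally */
	drm_fb_map_fillrect(info, &map, rect);
}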

> > +
> >  static const struct fb_ops drm_fbdev_fb_ops = {
> >       .owner          = THIS_MODULE,
> >       DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> >       .fb_release     = drm_fbdev_fb_release,
> >       .fb_destroy     = drm_fbdev_fb_destroy,
> >       .fb_mmap        = drm_fbdev_fb_mmap,
> > -     .fb_read        = drm_fb_helper_sys_read,
> > -     .fb_write       = drm_fb_helper_sys_write,
> > -     .fb_fillrect    = drm_fb_helper_sys_fillrect,
> > -     .fb_copyarea    = drm_fb_helper_sys_copyarea,
> > -     .fb_imageblit   = drm_fb_helper_sys_imageblit,
> > +     .fb_read        = drm_fbdev_fb_read,
> > +     .fb_write       = drm_fbdev_fb_write,
> > +     .fb_fillrect    = drm_fbdev_fb_fillrect,
> > +     .fb_copyarea    = drm_fbdev_fb_copyarea,
> > +     .fb_imageblit   = drm_fbdev_fb_imageblit,
> >  };
> >
> >  static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> >        */
> >       bool prefer_shadow_fbdev;
> >
> > -     /**
> > -      * @fbdev_use_iomem:
> > -      *
> > -      * Set to true if framebuffer reside in iomem.
> > -      * When set to true memcpy_toio() is used when copying the framebuffer in
> > -      * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > -      *
> > -      * FIXME: This should be replaced with a per-mapping is_iomem
> > -      * flag (like ttm does), and then used everywhere in fbdev code.
> > -      */
> > -     bool fbdev_use_iomem;
> > -
> >       /**
> >        * @quirk_addfb_prefer_xbgr_30bpp:
> >        *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h

I think the below should be split out as a prep patch.

> > index 2e8bbecb5091..6ca0f304dda2 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -32,6 +32,14 @@
> >   * accessing the buffer. Use the returned instance and the helper functions
> >   * to access the buffer's memory in the correct way.
> >   *
> > + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> > + * actually independent from the dma-buf infrastructure. When sharing buffers
> > + * among devices, drivers have to know the location of the memory to access
> > + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > + * solves this problem for dma-buf and its users. If other drivers or
> > + * sub-systems require similar functionality, the type could be generalized
> > + * and moved to a more prominent header file.
> > + *
> >   * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> >   * considered bad style. Rather than accessing its fields directly, use one
> >   * of the provided helper functions, or implement your own. For example,
> > @@ -51,6 +59,14 @@
> >   *
> >   *   dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeef);
> >   *
> > + * Instances of struct dma_buf_map do not have to be cleaned up, but
> > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > + * always refer to system memory.
> > + *
> > + * .. code-block:: c
> > + *
> > + *   dma_buf_map_clear(&map);
> > + *
> >   * Test if a mapping is valid with either dma_buf_map_is_set() or
> >   * dma_buf_map_is_null().
> >   *
> > @@ -73,17 +89,19 @@
> >   *   if (dma_buf_map_is_equal(&sys_map, &io_map))
> >   *           // always false
> >   *
> > - * Instances of struct dma_buf_map do not have to be cleaned up, but
> > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > - * always refer to system memory.
> > + * A set up instance of struct dma_buf_map can be used to access or manipulate
> > + * the buffer memory. Depending on the location of the memory, the provided
> > + * helpers will pick the correct operations. Data can be copied into the memory
> > + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> > + * dma_buf_map_incr().
> >   *
> > - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> > - * actually independent from the dma-buf infrastructure. When sharing buffers
> > - * among devices, drivers have to know the location of the memory to access
> > - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > - * solves this problem for dma-buf and its users. If other drivers or
> > - * sub-systems require similar functionality, the type could be generalized
> > - * and moved to a more prominent header file.
> > + * .. code-block:: c
> > + *
> > + *   const void *src = ...; // source buffer
> > + *   size_t len = ...; // length of src
> > + *
> > + *   dma_buf_map_memcpy_to(&map, src, len);
> > + *   dma_buf_map_incr(&map, len); // go to first byte after the memcpy
> >   */
> >
> >  /**
> > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
> >       }
> >  }
> >
> > +/**
> > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> > + * @dst:     The dma-buf mapping structure
> > + * @src:     The source buffer
> > + * @len:     The number of bytes in src
> > + *
> > + * Copies data into a dma-buf mapping. The source buffer is in system
> > + * memory. Depending on the buffer's location, the helper picks the correct
> > + * method of accessing the memory.
> > + */
> > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> > +{
> > +     if (dst->is_iomem)
> > +             memcpy_toio(dst->vaddr_iomem, src, len);
> > +     else
> > +             memcpy(dst->vaddr, src, len);
> > +}
> > +
> > +/**
> > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> > + * @map:     The dma-buf mapping structure
> > + * @incr:    The number of bytes to increment
> > + *
> > + * Increments the address stored in a dma-buf mapping. Depending on the
> > + * buffer's location, the correct value will be updated.
> > + */
> > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> > +{
> > +     if (map->is_iomem)
> > +             map->vaddr_iomem += incr;
> > +     else
> > +             map->vaddr += incr;
> > +}
> > +
> >  #endif /* __DMA_BUF_MAP_H__ */
> > --
> > 2.28.0

Aside from the details I think this all looks reasonable.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 7/7] drm/todo: Update entries around struct dma_buf_map
  2020-09-29 15:14 ` [PATCH v3 7/7] drm/todo: Update entries around struct dma_buf_map Thomas Zimmermann
@ 2020-10-02 18:45   ` Daniel Vetter
  0 siblings, 0 replies; 33+ messages in thread
From: Daniel Vetter @ 2020-10-02 18:45 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
	chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
	emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
	oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
	kgene, bskeggs, linux+etnaviv, spice-devel, alyssa.rosenzweig,
	maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
	christian.gmeiner, xen-devel, virtualization, sean, apaneers,
	linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
	sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
	alexander.deucher, linux-media, christian.koenig, l.stach

On Tue, Sep 29, 2020 at 05:14:37PM +0200, Thomas Zimmermann wrote:
> Instances of struct dma_buf_map should be useful throughout DRM's
> memory management code. Furthermore, several drivers can now use
> generic fbdev emulation.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>

> ---
>  Documentation/gpu/todo.rst | 24 ++++++++++++++++++++++--
>  1 file changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 3751ac976c3e..023626c1837b 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,8 +197,10 @@ Convert drivers to use drm_fbdev_generic_setup()
>  ------------------------------------------------
>  
>  Most drivers can use drm_fbdev_generic_setup(). Drivers have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>  
>  Contact: Maintainer of the driver you plan to convert
>  
> @@ -446,6 +448,24 @@ Contact: Ville Syrjälä, Daniel Vetter
>  
>  Level: Intermediate
>  
> +Use struct dma_buf_map throughout codebase
> +------------------------------------------
> +
> +Pointers to shared device memory are stored in struct dma_buf_map. Each
> +instance knows whether it refers to system or I/O memory. Most of the DRM-wide
> +interface have been converted to use struct dma_buf_map, but implementations
> +often still use raw pointers.
> +
> +The task is to use struct dma_buf_map where it makes sense.
> +
> +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
> +* TTM might benefit from using struct dma_buf_map internally.
> +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
> +
> +Level: Intermediate
> +
>  
>  Core refactorings
>  =================
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-10-02  9:58                   ` Daniel Vetter
  2020-10-02 11:30                     ` Christian König
@ 2020-10-07 12:57                     ` Thomas Zimmermann
  2020-10-07 13:10                       ` Daniel Vetter
  1 sibling, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-10-07 12:57 UTC (permalink / raw)
  To: Daniel Vetter, Christian König
  Cc: Luben Tuikov, Heiko Stübner, Dave Airlie, Nouveau Dev,
	Linus Walleij, dri-devel, Wilson, Chris, Melissa Wen, Anholt,
	Eric, Huang Rui, Gerd Hoffmann, Sam Ravnborg, Sumit Semwal,
	Emil Velikov, Rob Herring, linux-samsung-soc, Joonyoung Shim,
	lima, Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Kukjin Kim, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	Maarten Lankhorst, The etnaviv authors, Maxime Ripard, Inki Dae,
	Hans de Goede, Christian Gmeiner,
	moderated list:DRM DRIVERS FOR XEN, open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM,
	moderated list:DMA BUFFER SHARING FRAMEWORK, amd-gfx list,
	Tomeu Vizoso, Seung-Woo Kim, Sandy Huang, Kyungmin Park,
	Qinglang Miao, Qiang Yu, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK, Lucas Stach


Hi

On 02.10.20 at 11:58, Daniel Vetter wrote:
> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
>> <christian.koenig@amd.com> wrote:
>>>
>>> On 30.09.20 at 11:47, Daniel Vetter wrote:
>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>>>>> On 30.09.20 at 10:19, Thomas Zimmermann wrote:
>>>>>> Hi
>>>>>>
>>>>>> On 30.09.20 at 10:05, Christian König wrote:
>>>>>>> On 29.09.20 at 19:49, Thomas Zimmermann wrote:
>>>>>>>> Hi Christian
>>>>>>>>
>>>>>>>>> On 29.09.20 at 17:35, Christian König wrote:
>>>>>>>>>> On 29.09.20 at 17:14, Thomas Zimmermann wrote:
>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>>>>> well.
>>>>>>>>>
>>>>>>>>> Additional to that which driver is going to use this?
>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>>>>> retrieve the pointer via this function.
>>>>>>>>
>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>>>>> I should have asked which driver you try to fix here :)
>>>>>>>
>>>>>>> In this case just keep the function inside bochs and only fix it there.
>>>>>>>
>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
>>>>>> Did you take a look at patch 3? This function will be used by VRAM
>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>>>>> have to duplicate the functionality in each if these drivers. Bochs
>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
>>>>> Ah, ok can we have that then only in the VRAM helpers?
>>>>>
>>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>>>>
>>>>> What I want to avoid is to have another conversion function in TTM because
>>>>> what happens here is that we already convert from ttm_bus_placement to
>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
>>>> Hm I'm not really seeing how that helps with a gradual conversion of
>>>> everything over to dma_buf_map and assorted helpers for access? There's
>>>> too many places in ttm drivers where is_iomem and related stuff is used to
>>>> be able to convert it all in one go. An intermediate state with a bunch of
>>>> conversions seems fairly unavoidable to me.
>>>
>>> Fair enough. I would just have started bottom up and not top down.
>>>
>>> Anyway feel free to go ahead with this approach as long as we can remove
>>> the new function again when we clean that stuff up for good.
>>
>> Yeah I guess bottom up would make more sense as a refactoring. But the
>> main motivation to land this here is to fix the __mmio vs normal
>> memory confusion in the fbdev emulation helpers for sparc (and
>> anything else that needs this). Hence the top down approach for
>> rolling this out.
> 
> Ok I started reviewing this a bit more in-depth, and I think this is a bit
> too much of a de-tour.
> 
> Looking through all the callers of ttm_bo_kmap almost everyone maps the
> entire object. Only vmwgfx uses to map less than that. Also, everyone just
> immediately follows up with converting that full object map into a
> pointer.
> 
> So I think what we really want here is:
> - new function
> 
> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> 
>   _vmap name since that's consistent with both dma_buf functions and
>   what's usually used to implement this. Outside of the ttm world kmap
>   usually just means single-page mappings using kmap() or it's iomem
>   sibling io_mapping_map* so rather confusing name for a function which
>   usually is just used to set up a vmap of the entire buffer.
> 
> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
>   functions for all ttm drivers. We should be able to make this fully
>   generic because a) we now have dma_buf_map and b) drm_gem_object is
>   embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
>   and gem driver.
> 
>   This is maybe a good follow-up, since it should allow us to ditch quite
>   a bit of the vram helper code for this more generic stuff. I also might
>   have missed some special-cases here, but from a quick look everything
>   just pins the buffer to the current location and that's it.
> 
>   Also this obviously requires Christian's generic ttm_bo_pin rework
>   first.
> 
> - roll the above out to drivers.
> 
> Christian/Thomas, thoughts on this?

I agree on the goals, but what is the immediate objective here?

Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
more internal state than struct dma_buf_map, so they are not easily
convertible either. What you propose seems to require a reimplementation
of the existing ttm_bo_kmap() code. That is its own patch series.
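
For comparison, this is roughly the state that each type carries
(abridged from ttm_bo_api.h and dma-buf-map.h):

struct ttm_bo_kmap_obj {	/* mapping state, TTM-internal */
	void *virtual;
	struct page *page;	/* page for single-page mappings */
	enum {
		ttm_bo_map_iomap	= 1 | TTM_BO_MAP_IOMEM_MASK,
		ttm_bo_map_vmap		= 2,
		ttm_bo_map_kmap		= 3,
		ttm_bo_map_premapped	= 4 | TTM_BO_MAP_IOMEM_MASK,
	} bo_kmap_type;		/* selects the unmap operation */
	struct ttm_buffer_object *bo;
};

struct dma_buf_map {		/* just an address and its location */
	union {
		void __iomem *vaddr_iomem;
		void *vaddr;
	};
	bool is_iomem;
};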

I'd rather go with some variant of the existing patch and add
ttm_bo_vmap() in a follow-up.

Best regards
Thomas

> 
> I think for the immediate need of rolling this out for vram helpers and
> fbdev code we should be able to do this, but just postpone the driver wide
> roll-out for now.
> 
> Cheers, Daniel
> 
>> -Daniel
>>
>>>
>>> Christian.
>>>
>>>> -Daniel
>>>>
>>>>> Thanks,
>>>>> Christian.
>>>>>
>>>>>> Best regards
>>>>>> Thomas
>>>>>>
>>>>>>> Regards,
>>>>>>> Christian.
>>>>>>>
>>>>>>>> Best regards
>>>>>>>> Thomas
>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Christian.
>>>>>>>>>
>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>>>>> ---
>>>>>>>>>>     include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>>>>     include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>>>>     2 files changed, 44 insertions(+)
>>>>>>>>>>
>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>>>>     #include <drm/drm_gem.h>
>>>>>>>>>>     #include <drm/drm_hashtab.h>
>>>>>>>>>>     #include <drm/drm_vma_manager.h>
>>>>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>>>>     #include <linux/kref.h>
>>>>>>>>>>     #include <linux/list.h>
>>>>>>>>>>     #include <linux/wait.h>
>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>>>>         return map->virtual;
>>>>>>>>>>     }
>>>>>>>>>>     +/**
>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>>>>> + *
>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>>>>> + *
>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>>>>> + */
>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>>>>> *kmap,
>>>>>>>>>> +                           struct dma_buf_map *map)
>>>>>>>>>> +{
>>>>>>>>>> +    bool is_iomem;
>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>>>>> +
>>>>>>>>>> +    if (!vaddr)
>>>>>>>>>> +        dma_buf_map_clear(map);
>>>>>>>>>> +    else if (is_iomem)
>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>>>>> +    else
>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>>     /**
>>>>>>>>>>      * ttm_bo_kmap
>>>>>>>>>>      *
>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>>>>      *
>>>>>>>>>>      *    dma_buf_map_set_vaddr(&map, 0xdeadbeef);
>>>>>>>>>>      *
>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>>>>> + *
>>>>>>>>>> + * .. code-block:: c
>>>>>>>>>> + *
>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeef);
>>>>>>>>>> + *
>>>>>>>>>>      * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>>>>      * dma_buf_map_is_null().
>>>>>>>>>>      *
>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>>>>         map->is_iomem = false;
>>>>>>>>>>     }
>>>>>>>>>>     +/**
>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>>>>> an address in I/O memory
>>>>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>>>>> + *
>>>>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>>>>> + */
>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>>>>> +{
>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>>>>> +    map->is_iomem = true;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>>     /**
>>>>>>>>>>      * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>>>>> for equality
>>>>>>>>>>      * @lhs:    The dma-buf mapping structure
>>>>>>>>> _______________________________________________
>>>>>>>>> dri-devel mailing list
>>>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>>>> _______________________________________________
>>>>>>>> amd-gfx mailing list
>>>>>>>> amd-gfx@lists.freedesktop.org
>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>>> _______________________________________________
>>>>>>> dri-devel mailing list
>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>>>
>>>>>> _______________________________________________
>>>>>> amd-gfx mailing list
>>>>>> amd-gfx@lists.freedesktop.org
>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>
>>
>>
>> -- 
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-10-07 12:57                     ` Thomas Zimmermann
@ 2020-10-07 13:10                       ` Daniel Vetter
  2020-10-07 13:20                         ` Thomas Zimmermann
  0 siblings, 1 reply; 33+ messages in thread
From: Daniel Vetter @ 2020-10-07 13:10 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Luben Tuikov, Heiko Stübner, Dave Airlie, Nouveau Dev,
	Linus Walleij, dri-devel, Wilson, Chris, Melissa Wen, Anholt,
	Eric, Huang Rui, Gerd Hoffmann, Sam Ravnborg, Sumit Semwal,
	Emil Velikov, Rob Herring, linux-samsung-soc, Joonyoung Shim,
	lima, Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Kukjin Kim, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	Maarten Lankhorst, The etnaviv authors, Maxime Ripard, Inki Dae,
	Hans de Goede, Christian Gmeiner,
	moderated list:DRM DRIVERS FOR XEN, open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM,
	moderated list:DMA BUFFER SHARING FRAMEWORK, amd-gfx list,
	Tomeu Vizoso, Seung-Woo Kim, Sandy Huang, Kyungmin Park,
	Qinglang Miao, Qiang Yu, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK, Christian König,
	Lucas Stach

On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> Hi
>
> On 02.10.20 at 11:58, Daniel Vetter wrote:
> > On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
> >> On Wed, Sep 30, 2020 at 2:34 PM Christian König
> >> <christian.koenig@amd.com> wrote:
> >>>
> >>> On 30.09.20 at 11:47, Daniel Vetter wrote:
> >>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
> >>>>> On 30.09.20 at 10:19, Thomas Zimmermann wrote:
> >>>>>> Hi
> >>>>>>
> >>>>>> On 30.09.20 at 10:05, Christian König wrote:
> >>>>>>> On 29.09.20 at 19:49, Thomas Zimmermann wrote:
> >>>>>>>> Hi Christian
> >>>>>>>>
> >>>>>>>>> On 29.09.20 at 17:35, Christian König wrote:
> >>>>>>>>>> On 29.09.20 at 17:14, Thomas Zimmermann wrote:
> >>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
> >>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> >>>>>>>>>> with these values. Helpful for TTM-based drivers.
> >>>>>>>>> We could completely drop that if we use the same structure inside TTM as
> >>>>>>>>> well.
> >>>>>>>>>
> >>>>>>>>> Additional to that which driver is going to use this?
> >>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
> >>>>>>>> retrieve the pointer via this function.
> >>>>>>>>
> >>>>>>>> I do want to see all that being more tightly integrated into TTM, but
> >>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
> >>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> >>>>>>> I should have asked which driver you try to fix here :)
> >>>>>>>
> >>>>>>> In this case just keep the function inside bochs and only fix it there.
> >>>>>>>
> >>>>>>> All other drivers can be fixed when we generally pump this through TTM.
> >>>>>> Did you take a look at patch 3? This function will be used by VRAM
> >>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
> >>>>>> have to duplicate the functionality in each if these drivers. Bochs
> >>>>>> itself uses VRAM helpers and doesn't touch the function directly.
> >>>>> Ah, ok can we have that then only in the VRAM helpers?
> >>>>>
> >>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
> >>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
> >>>>>
> >>>>> What I want to avoid is to have another conversion function in TTM because
> >>>>> what happens here is that we already convert from ttm_bus_placement to
> >>>>> ttm_bo_kmap_obj and then to dma_buf_map.
> >>>> Hm I'm not really seeing how that helps with a gradual conversion of
> >>>> everything over to dma_buf_map and assorted helpers for access? There's
> >>>> too many places in ttm drivers where is_iomem and related stuff is used to
> >>>> be able to convert it all in one go. An intermediate state with a bunch of
> >>>> conversions seems fairly unavoidable to me.
> >>>
> >>> Fair enough. I would just have started bottom up and not top down.
> >>>
> >>> Anyway feel free to go ahead with this approach as long as we can remove
> >>> the new function again when we clean that stuff up for good.
> >>
> >> Yeah I guess bottom up would make more sense as a refactoring. But the
> >> main motivation to land this here is to fix the __mmio vs normal
> >> memory confusion in the fbdev emulation helpers for sparc (and
> >> anything else that needs this). Hence the top down approach for
> >> rolling this out.
> >
> > Ok I started reviewing this a bit more in-depth, and I think this is a bit
> > too much of a de-tour.
> >
> > Looking through all the callers of ttm_bo_kmap almost everyone maps the
> > entire object. Only vmwgfx uses it to map less than that. Also, everyone just
> > immediately follows up with converting that full object map into a
> > pointer.
> >
> > So I think what we really want here is:
> > - new function
> >
> > int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> >
> >   _vmap name since that's consistent with both dma_buf functions and
> >   what's usually used to implement this. Outside of the ttm world kmap
> >   usually just means single-page mappings using kmap() or it's iomem
> >   sibling io_mapping_map* so rather confusing name for a function which
> >   usually is just used to set up a vmap of the entire buffer.
> >
> > - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
> >   functions for all ttm drivers. We should be able to make this fully
> >   generic because a) we now have dma_buf_map and b) drm_gem_object is
> >   embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
> >   and gem driver.
> >
> >   This is maybe a good follow-up, since it should allow us to ditch quite
> >   a bit of the vram helper code for this more generic stuff. I also might
> >   have missed some special-cases here, but from a quick look everything
> >   just pins the buffer to the current location and that's it.
> >
> >   Also this obviously requires Christian's generic ttm_bo_pin rework
> >   first.
> >
> > - roll the above out to drivers.
> >
> > Christian/Thomas, thoughts on this?
>
> I agree on the goals, but what is the immediate objective here?
>
> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
> more internal state than struct dma_buf_map, so they are not easily
> convertible either. What you propose seems to require a reimplementation
> of the existing ttm_bo_kmap() code. That is its own patch series.
>
> I'd rather go with some variant of the existing patch and add
> ttm_bo_vmap() in a follow-up.

ttm_bo_vmap would simply wrap what you currently open-code as
ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would
be a much later step. Why do you think adding ttm_bo_vmap is not
possible?
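
I.e. just a thin wrapper, roughly this sketch (with the caveat that the
kmap_obj still has to be stored somewhere for the later vunmap):

int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
{
	struct ttm_bo_kmap_obj kmap;
	int ret;

	/* map the whole object, which is what almost all callers do */
	ret = ttm_bo_kmap(bo, 0, bo->num_pages, &kmap);
	if (ret)
		return ret;

	ttm_kmap_obj_to_dma_buf_map(&kmap, map);
	/* NOTE: kmap must be kept around for ttm_bo_kunmap() on vunmap */

	return 0;
}
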
-Daniel


> Best regards
> Thomas
>
> >
> > I think for the immediate need of rolling this out for vram helpers and
> > fbdev code we should be able to do this, but just postpone the driver wide
> > roll-out for now.
> >
> > Cheers, Daniel
> >
> >> -Daniel
> >>
> >>>
> >>> Christian.
> >>>
> >>>> -Daniel
> >>>>
> >>>>> Thanks,
> >>>>> Christian.
> >>>>>
> >>>>>> Best regards
> >>>>>> Thomas
> >>>>>>
> >>>>>>> Regards,
> >>>>>>> Christian.
> >>>>>>>
> >>>>>>>> Best regards
> >>>>>>>> Thomas
> >>>>>>>>
> >>>>>>>>> Regards,
> >>>>>>>>> Christian.
> >>>>>>>>>
> >>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >>>>>>>>>> ---
> >>>>>>>>>>     include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> >>>>>>>>>>     include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> >>>>>>>>>>     2 files changed, 44 insertions(+)
> >>>>>>>>>>
> >>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
> >>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> @@ -34,6 +34,7 @@
> >>>>>>>>>>     #include <drm/drm_gem.h>
> >>>>>>>>>>     #include <drm/drm_hashtab.h>
> >>>>>>>>>>     #include <drm/drm_vma_manager.h>
> >>>>>>>>>> +#include <linux/dma-buf-map.h>
> >>>>>>>>>>     #include <linux/kref.h>
> >>>>>>>>>>     #include <linux/list.h>
> >>>>>>>>>>     #include <linux/wait.h>
> >>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
> >>>>>>>>>> ttm_bo_kmap_obj *map,
> >>>>>>>>>>         return map->virtual;
> >>>>>>>>>>     }
> >>>>>>>>>>     +/**
> >>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
> >>>>>>>>>> + *
> >>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> >>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
> >>>>>>>>>> + *
> >>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> >>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
> >>>>>>>>>> + */
> >>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
> >>>>>>>>>> *kmap,
> >>>>>>>>>> +                           struct dma_buf_map *map)
> >>>>>>>>>> +{
> >>>>>>>>>> +    bool is_iomem;
> >>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> >>>>>>>>>> +
> >>>>>>>>>> +    if (!vaddr)
> >>>>>>>>>> +        dma_buf_map_clear(map);
> >>>>>>>>>> +    else if (is_iomem)
> >>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> >>>>>>>>>> +    else
> >>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>>     /**
> >>>>>>>>>>      * ttm_bo_kmap
> >>>>>>>>>>      *
> >>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> >>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
> >>>>>>>>>> --- a/include/linux/dma-buf-map.h
> >>>>>>>>>> +++ b/include/linux/dma-buf-map.h
> >>>>>>>>>> @@ -45,6 +45,12 @@
> >>>>>>>>>>      *
> >>>>>>>>>>      *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> >>>>>>>>>>      *
> >>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> >>>>>>>>>> + *
> >>>>>>>>>> + * .. code-block:: c
> >>>>>>>>>> + *
> >>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> >>>>>>>>>> + *
> >>>>>>>>>>      * Test if a mapping is valid with either dma_buf_map_is_set() or
> >>>>>>>>>>      * dma_buf_map_is_null().
> >>>>>>>>>>      *
> >>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> >>>>>>>>>> dma_buf_map *map, void *vaddr)
> >>>>>>>>>>         map->is_iomem = false;
> >>>>>>>>>>     }
> >>>>>>>>>>     +/**
> >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> >>>>>>>>>> an address in I/O memory
> >>>>>>>>>> + * @map:        The dma-buf mapping structure
> >>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
> >>>>>>>>>> + *
> >>>>>>>>>> + * Sets the address and the I/O-memory flag.
> >>>>>>>>>> + */
> >>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> >>>>>>>>>> +                           void __iomem *vaddr_iomem)
> >>>>>>>>>> +{
> >>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
> >>>>>>>>>> +    map->is_iomem = true;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>>     /**
> >>>>>>>>>>      * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> >>>>>>>>>> for equality
> >>>>>>>>>>      * @lhs:    The dma-buf mapping structure
> >>>>>>>>> _______________________________________________
> >>>>>>>>> dri-devel mailing list
> >>>>>>>>> dri-devel@lists.freedesktop.org
> >>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>>>>>>> _______________________________________________
> >>>>>>>> amd-gfx mailing list
> >>>>>>>> amd-gfx@lists.freedesktop.org
> >>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >>>>>>> _______________________________________________
> >>>>>>> dri-devel mailing list
> >>>>>>> dri-devel@lists.freedesktop.org
> >>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>>>>>>
> >>>>>> _______________________________________________
> >>>>>> amd-gfx mailing list
> >>>>>> amd-gfx@lists.freedesktop.org
> >>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >>>
> >>
> >>
> >> --
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> http://blog.ffwll.ch
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-10-07 13:10                       ` Daniel Vetter
@ 2020-10-07 13:20                         ` Thomas Zimmermann
  2020-10-07 13:24                           ` Christian König
  0 siblings, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-10-07 13:20 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Dave Airlie, Nouveau Dev, dri-devel, Wilson, Chris, Melissa Wen,
	Huang Rui, Gerd Hoffmann, Qiang Yu, Sam Ravnborg, Emil Velikov,
	linux-samsung-soc, Joonyoung Shim, lima, Oleksandr Andrushchenko,
	Krzysztof Kozlowski, Steven Price, open list:ARM/Rockchip SoC...,
	Luben Tuikov, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	The etnaviv authors, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Hans de Goede, moderated list:DRM DRIVERS FOR XEN,
	open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM, amd-gfx list, Tomeu Vizoso,
	Seung-Woo Kim, Sandy Huang, Kyungmin Park, Qinglang Miao,
	Kukjin Kim, Alex Deucher, open list:DMA BUFFER SHARING FRAMEWORK,
	Christian König



Hi

Am 07.10.20 um 15:10 schrieb Daniel Vetter:
> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>>
>> Hi
>>
>> Am 02.10.20 um 11:58 schrieb Daniel Vetter:
>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
>>>> <christian.koenig@amd.com> wrote:
>>>>>
>>>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter:
>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>>>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann:
>>>>>>>> Hi
>>>>>>>>
>>>>>>>> Am 30.09.20 um 10:05 schrieb Christian König:
>>>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann:
>>>>>>>>>> Hi Christian
>>>>>>>>>>
>>>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian König:
>>>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann:
>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
>>>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>>>>>>> well.
>>>>>>>>>>>
>>>>>>>>>>> Additional to that which driver is going to use this?
>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>>>>>>> retrieve the pointer via this function.
>>>>>>>>>>
>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>>>>>>> I should have asked which driver you try to fix here :)
>>>>>>>>>
>>>>>>>>> In this case just keep the function inside bochs and only fix it there.
>>>>>>>>>
>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM
>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>>>>>>> have to duplicate the functionality in each of these drivers. Bochs
>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
>>>>>>> Ah, ok can we have that then only in the VRAM helpers?
>>>>>>>
>>>>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>>>>>>
>>>>>>> What I want to avoid is to have another conversion function in TTM because
>>>>>>> what happens here is that we already convert from ttm_bus_placement to
>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
>>>>>> Hm I'm not really seeing how that helps with a gradual conversion of
>>>>>> everything over to dma_buf_map and assorted helpers for access? There's
>>>>>> too many places in ttm drivers where is_iomem and related stuff is used to
>>>>>> be able to convert it all in one go. An intermediate state with a bunch of
>>>>>> conversions seems fairly unavoidable to me.
>>>>>
>>>>> Fair enough. I would just have started bottom up and not top down.
>>>>>
>>>>> Anyway feel free to go ahead with this approach as long as we can remove
>>>>> the new function again when we clean that stuff up for good.
>>>>
>>>> Yeah I guess bottom up would make more sense as a refactoring. But the
>>>> main motivation to land this here is to fix the __mmio vs normal
>>>> memory confusion in the fbdev emulation helpers for sparc (and
>>>> anything else that needs this). Hence the top down approach for
>>>> rolling this out.
>>>
>>> Ok I started reviewing this a bit more in-depth, and I think this is a bit
>>> too much of a detour.
>>>
>>> Looking through all the callers of ttm_bo_kmap almost everyone maps the
>>> entire object. Only vmwgfx uses it to map less than that. Also, everyone just
>>> immediately follows up with converting that full object map into a
>>> pointer.
>>>
>>> So I think what we really want here is:
>>> - new function
>>>
>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>
>>>   _vmap name since that's consistent with both dma_buf functions and
>>>   what's usually used to implement this. Outside of the ttm world kmap
>>>   usually just means single-page mappings using kmap() or its iomem
>>>   sibling io_mapping_map*, so it's a rather confusing name for a function which
>>>   usually is just used to set up a vmap of the entire buffer.
>>>
>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
>>>   functions for all ttm drivers. We should be able to make this fully
>>>   generic because a) we now have dma_buf_map and b) drm_gem_object is
>>>   embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
>>>   and gem driver.
>>>
>>>   This is maybe a good follow-up, since it should allow us to ditch quite
>>>   a bit of the vram helper code for this more generic stuff. I also might
>>>   have missed some special-cases here, but from a quick look everything
>>>   just pins the buffer to the current location and that's it.
>>>
>>>   Also this obviously requires Christian's generic ttm_bo_pin rework
>>>   first.
>>>
>>> - roll the above out to drivers.
>>>
>>> Christian/Thomas, thoughts on this?
>>
>> I agree on the goals, but what is the immediate objective here?
>>
>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
>> more internal state than struct dma_buf_map, so they are not easily
>> convertible either. What you propose seems to require a reimplementation
>> of the existing ttm_bo_kmap() code. That is its own patch series.
>>
>> I'd rather go with some variant of the existing patch and add
>> ttm_bo_vmap() in a follow-up.
> 
> ttm_bo_vmap would simply wrap what you currently open-code as
> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would
> be a much later step. Why do you think adding ttm_bo_vmap is not
> possible?

The calls to ttm_bo_kmap/_kunmap() require an instance of struct
ttm_bo_kmap_obj that is stored in each driver's private bo structure
(e.g., struct drm_gem_vram_object, struct radeon_bo, etc). When I made
patch 3, I flirted with the idea of unifying the driver's _vmap code in
a shared helper, but I couldn't find a simple way of doing it. That's
why it's open-coded in the first place.
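
For reference, the open-coded pattern looks roughly like this in the
VRAM helpers (simplified from patch 3, with the use counting elided):

struct drm_gem_vram_object {
	struct ttm_buffer_object bo;
	struct ttm_bo_kmap_obj kmap; /* kept around for ttm_bo_kunmap() */
	/* ... */
};

static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
				    struct dma_buf_map *map)
{
	int ret;

	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, &gbo->kmap);
	if (ret)
		return ret;

	ttm_kmap_obj_to_dma_buf_map(&gbo->kmap, map);

	return 0;
}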

Best regards
Thomas

> -Daniel
> 
> 
>> Best regards
>> Thomas
>>
>>>
>>> I think for the immediate need of rolling this out for vram helpers and
>>> fbdev code we should be able to do this, but just postpone the driver wide
>>> roll-out for now.
>>>
>>> Cheers, Daniel
>>>
>>>> -Daniel
>>>>
>>>>>
>>>>> Christian.
>>>>>
>>>>>> -Daniel
>>>>>>
>>>>>>> Thanks,
>>>>>>> Christian.
>>>>>>>
>>>>>>>> Best regards
>>>>>>>> Thomas
>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Christian.
>>>>>>>>>
>>>>>>>>>> Best regards
>>>>>>>>>> Thomas
>>>>>>>>>>
>>>>>>>>>>> Regards,
>>>>>>>>>>> Christian.
>>>>>>>>>>>
>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>>>>>>> ---
>>>>>>>>>>>>     include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>>>>>>     include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>>>>>>     2 files changed, 44 insertions(+)
>>>>>>>>>>>>
>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>>>>>>     #include <drm/drm_gem.h>
>>>>>>>>>>>>     #include <drm/drm_hashtab.h>
>>>>>>>>>>>>     #include <drm/drm_vma_manager.h>
>>>>>>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>>>>>>     #include <linux/kref.h>
>>>>>>>>>>>>     #include <linux/list.h>
>>>>>>>>>>>>     #include <linux/wait.h>
>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>>>>>>         return map->virtual;
>>>>>>>>>>>>     }
>>>>>>>>>>>>     +/**
>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>>>>>>> + */
>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>>>>>>> *kmap,
>>>>>>>>>>>> +                           struct dma_buf_map *map)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +    bool is_iomem;
>>>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>>>>>>> +
>>>>>>>>>>>> +    if (!vaddr)
>>>>>>>>>>>> +        dma_buf_map_clear(map);
>>>>>>>>>>>> +    else if (is_iomem)
>>>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>>>>>>> +    else
>>>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>>     /**
>>>>>>>>>>>>      * ttm_bo_kmap
>>>>>>>>>>>>      *
>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>>>>>>      *
>>>>>>>>>>>>      *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>>>>>>>>>>      *
>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * .. code-block:: c
>>>>>>>>>>>> + *
>>>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>>>>>>>>>> + *
>>>>>>>>>>>>      * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>>>>>>      * dma_buf_map_is_null().
>>>>>>>>>>>>      *
>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>>>>>>         map->is_iomem = false;
>>>>>>>>>>>>     }
>>>>>>>>>>>>     +/**
>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>>>>>>> an address in I/O memory
>>>>>>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>>>>>>> + */
>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>>>>>>> +    map->is_iomem = true;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>>     /**
>>>>>>>>>>>>      * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>>>>>>> for equality
>>>>>>>>>>>>      * @lhs:    The dma-buf mapping structure
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> dri-devel mailing list
>>>>>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>>>>>> _______________________________________________
>>>>>>>>>> amd-gfx mailing list
>>>>>>>>>> amd-gfx@lists.freedesktop.org
>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>>>>> _______________________________________________
>>>>>>>>> dri-devel mailing list
>>>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> amd-gfx mailing list
>>>>>>>> amd-gfx@lists.freedesktop.org
>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>
>>>>
>>>>
>>>> --
>>>> Daniel Vetter
>>>> Software Engineer, Intel Corporation
>>>> http://blog.ffwll.ch
>>>
>>
>> --
>> Thomas Zimmermann
>> Graphics Driver Developer
>> SUSE Software Solutions Germany GmbH
>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>> (HRB 36809, AG Nürnberg)
>> Geschäftsführer: Felix Imendörffer
>>
> 
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-10-07 13:20                         ` Thomas Zimmermann
@ 2020-10-07 13:24                           ` Christian König
  2020-10-07 14:30                             ` Daniel Vetter
  0 siblings, 1 reply; 33+ messages in thread
From: Christian König @ 2020-10-07 13:24 UTC (permalink / raw)
  To: Thomas Zimmermann, Daniel Vetter
  Cc: Dave Airlie, Nouveau Dev, dri-devel, Wilson, Chris, Melissa Wen,
	Huang Rui, Gerd Hoffmann, Qiang Yu, Sam Ravnborg, Emil Velikov,
	linux-samsung-soc, Joonyoung Shim, lima, Oleksandr Andrushchenko,
	Krzysztof Kozlowski, Steven Price, open list:ARM/Rockchip SoC...,
	Luben Tuikov, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	The etnaviv authors, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Hans de Goede, moderated list:DRM DRIVERS FOR XEN,
	open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM, amd-gfx list, Tomeu Vizoso,
	Seung-Woo Kim, Sandy Huang, Kyungmin Park, Qinglang Miao,
	Kukjin Kim, Alex Deucher, open list:DMA BUFFER SHARING FRAMEWORK

Am 07.10.20 um 15:20 schrieb Thomas Zimmermann:
> Hi
>
> Am 07.10.20 um 15:10 schrieb Daniel Vetter:
>> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>>> Hi
>>>
>>> Am 02.10.20 um 11:58 schrieb Daniel Vetter:
>>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
>>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
>>>>> <christian.koenig@amd.com> wrote:
>>>>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter:
>>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>>>>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann:
>>>>>>>>> Hi
>>>>>>>>>
>>>>>>>>> Am 30.09.20 um 10:05 schrieb Christian König:
>>>>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann:
>>>>>>>>>>> Hi Christian
>>>>>>>>>>>
>>>>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian König:
>>>>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann:
>>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
>>>>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>>>>>>>> well.
>>>>>>>>>>>>
>>>>>>>>>>>> Additional to that which driver is going to use this?
>>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>>>>>>>> retrieve the pointer via this function.
>>>>>>>>>>>
>>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>>>>>>>> I should have asked which driver you try to fix here :)
>>>>>>>>>>
>>>>>>>>>> In this case just keep the function inside bochs and only fix it there.
>>>>>>>>>>
>>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
>>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM
>>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>>>>>>>> have to duplicate the functionality in each of these drivers. Bochs
>>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
>>>>>>>> Ah, ok can we have that then only in the VRAM helpers?
>>>>>>>>
>>>>>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>>>>>>>
>>>>>>>> What I want to avoid is to have another conversion function in TTM because
>>>>>>>> what happens here is that we already convert from ttm_bus_placement to
>>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
>>>>>>> Hm I'm not really seeing how that helps with a gradual conversion of
>>>>>>> everything over to dma_buf_map and assorted helpers for access? There's
>>>>>>> too many places in ttm drivers where is_iomem and related stuff is used to
>>>>>>> be able to convert it all in one go. An intermediate state with a bunch of
>>>>>>> conversions seems fairly unavoidable to me.
>>>>>> Fair enough. I would just have started bottom up and not top down.
>>>>>>
>>>>>> Anyway feel free to go ahead with this approach as long as we can remove
>>>>>> the new function again when we clean that stuff up for good.
>>>>> Yeah I guess bottom up would make more sense as a refactoring. But the
>>>>> main motivation to land this here is to fix the __mmio vs normal
>>>>> memory confusion in the fbdev emulation helpers for sparc (and
>>>>> anything else that needs this). Hence the top down approach for
>>>>> rolling this out.
>>>> Ok I started reviewing this a bit more in-depth, and I think this is a bit
>>>> too much of a detour.
>>>>
>>>> Looking through all the callers of ttm_bo_kmap almost everyone maps the
>>>> entire object. Only vmwgfx uses it to map less than that. Also, everyone just
>>>> immediately follows up with converting that full object map into a
>>>> pointer.
>>>>
>>>> So I think what we really want here is:
>>>> - new function
>>>>
>>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>>
>>>>    _vmap name since that's consistent with both dma_buf functions and
>>>>    what's usually used to implement this. Outside of the ttm world kmap
>>>>    usually just means single-page mappings using kmap() or its iomem
>>>>    sibling io_mapping_map*, so it's a rather confusing name for a function which
>>>>    usually is just used to set up a vmap of the entire buffer.
>>>>
>>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
>>>>    functions for all ttm drivers. We should be able to make this fully
>>>>    generic because a) we now have dma_buf_map and b) drm_gem_object is
>>>>    embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
>>>>    and gem driver.
>>>>
>>>>    This is maybe a good follow-up, since it should allow us to ditch quite
>>>>    a bit of the vram helper code for this more generic stuff. I also might
>>>>    have missed some special-cases here, but from a quick look everything
>>>>    just pins the buffer to the current location and that's it.
>>>>
>>>>    Also this obviously requires Christian's generic ttm_bo_pin rework
>>>>    first.
>>>>
>>>> - roll the above out to drivers.
>>>>
>>>> Christian/Thomas, thoughts on this?
>>> I agree on the goals, but what is the immediate objective here?
>>>
>>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
>>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
>>> more internal state than struct dma_buf_map, so they are not easily
>>> convertible either. What you propose seems to require a reimplementation
>>> of the existing ttm_bo_kmap() code. That is its own patch series.
>>>
>>> I'd rather go with some variant of the existing patch and add
>>> ttm_bo_vmap() in a follow-up.
>> ttm_bo_vmap would simply wrap what you currently open-code as
>> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would
>> be a much later step. Why do you think adding ttm_bo_vmap is not
>> possible?
> The calls to ttm_bo_kmap/_kunmap() require an instance of struct
> ttm_bo_kmap_obj that is stored in each driver's private bo structure
> (e.g., struct drm_gem_vram_object, struct radeon_bo, etc). When I made
> patch 3, I flirted with the idea of unifying the driver's _vmap code in
> a shared helper, but I couldn't find a simple way of doing it. That's
> why it's open-coded in the first place.

Well that makes kind of sense. Keep in mind that ttm_bo_kmap is 
currently way too complicated.
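
To illustrate: a single kmap object encodes four different mapping
strategies plus the TTM_BO_MAP_IOMEM_MASK flag that was mentioned
earlier, roughly like this in ttm_bo_api.h:

#define TTM_BO_MAP_IOMEM_MASK 0x80

struct ttm_bo_kmap_obj {
	void *virtual;
	struct page *page;
	enum {
		ttm_bo_map_iomap	= 1 | TTM_BO_MAP_IOMEM_MASK,
		ttm_bo_map_vmap		= 2,
		ttm_bo_map_kmap		= 3,
		ttm_bo_map_premapped	= 4 | TTM_BO_MAP_IOMEM_MASK,
	} bo_kmap_type;
	struct ttm_buffer_object *bo;
};

All of that is state that struct dma_buf_map does not carry.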

Christian.

>
> Best regards
> Thomas
>
>> -Daniel
>>
>>
>>> Best regards
>>> Thomas
>>>
>>>> I think for the immediate need of rolling this out for vram helpers and
>>>> fbdev code we should be able to do this, but just postpone the driver wide
>>>> roll-out for now.
>>>>
>>>> Cheers, Daniel
>>>>
>>>>> -Daniel
>>>>>
>>>>>> Christian.
>>>>>>
>>>>>>> -Daniel
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Christian.
>>>>>>>>
>>>>>>>>> Best regards
>>>>>>>>> Thomas
>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Christian.
>>>>>>>>>>
>>>>>>>>>>> Best regards
>>>>>>>>>>> Thomas
>>>>>>>>>>>
>>>>>>>>>>>> Regards,
>>>>>>>>>>>> Christian.
>>>>>>>>>>>>
>>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>>>>>>>> ---
>>>>>>>>>>>>>      include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>>>>>>>      include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>>>>>>>      2 files changed, 44 insertions(+)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>>>>>>>      #include <drm/drm_gem.h>
>>>>>>>>>>>>>      #include <drm/drm_hashtab.h>
>>>>>>>>>>>>>      #include <drm/drm_vma_manager.h>
>>>>>>>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>>>>>>>      #include <linux/kref.h>
>>>>>>>>>>>>>      #include <linux/list.h>
>>>>>>>>>>>>>      #include <linux/wait.h>
>>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>>>>>>>          return map->virtual;
>>>>>>>>>>>>>      }
>>>>>>>>>>>>>      +/**
>>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>>>>>>>> + */
>>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>>>>>>>> *kmap,
>>>>>>>>>>>>> +                           struct dma_buf_map *map)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +    bool is_iomem;
>>>>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +    if (!vaddr)
>>>>>>>>>>>>> +        dma_buf_map_clear(map);
>>>>>>>>>>>>> +    else if (is_iomem)
>>>>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>>>>>>>> +    else
>>>>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>>>>>>>> +}
>>>>>>>>>>>>> +
>>>>>>>>>>>>>      /**
>>>>>>>>>>>>>       * ttm_bo_kmap
>>>>>>>>>>>>>       *
>>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>>>>>>>       *
>>>>>>>>>>>>>       *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>>>>>>>>>>>       *
>>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * .. code-block:: c
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>>>>>>>>>>> + *
>>>>>>>>>>>>>       * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>>>>>>>       * dma_buf_map_is_null().
>>>>>>>>>>>>>       *
>>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>>>>>>>          map->is_iomem = false;
>>>>>>>>>>>>>      }
>>>>>>>>>>>>>      +/**
>>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>>>>>>>> an address in I/O memory
>>>>>>>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>>>>>>>> + */
>>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>>>>>>>> +    map->is_iomem = true;
>>>>>>>>>>>>> +}
>>>>>>>>>>>>> +
>>>>>>>>>>>>>      /**
>>>>>>>>>>>>>       * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>>>>>>>> for equality
>>>>>>>>>>>>>       * @lhs:    The dma-buf mapping structure
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> dri-devel mailing list
>>>>>>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> amd-gfx mailing list
>>>>>>>>>>> amd-gfx@lists.freedesktop.org
>>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>>>>>> _______________________________________________
>>>>>>>>>> dri-devel mailing list
>>>>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> amd-gfx mailing list
>>>>>>>>> amd-gfx@lists.freedesktop.org
>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>
>>>>> --
>>>>> Daniel Vetter
>>>>> Software Engineer, Intel Corporation
>>>>> http://blog.ffwll.ch
>>> --
>>> Thomas Zimmermann
>>> Graphics Driver Developer
>>> SUSE Software Solutions Germany GmbH
>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>> (HRB 36809, AG Nürnberg)
>>> Geschäftsführer: Felix Imendörffer
>>>
>>


_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-10-07 13:24                           ` Christian König
@ 2020-10-07 14:30                             ` Daniel Vetter
  2020-10-08  9:00                               ` Thomas Zimmermann
  0 siblings, 1 reply; 33+ messages in thread
From: Daniel Vetter @ 2020-10-07 14:30 UTC (permalink / raw)
  To: Christian König
  Cc: Dave Airlie, Nouveau Dev, dri-devel, Wilson, Chris, Melissa Wen,
	Huang Rui, Gerd Hoffmann, Qiang Yu, Sam Ravnborg, Emil Velikov,
	linux-samsung-soc, Joonyoung Shim, lima, Oleksandr Andrushchenko,
	Krzysztof Kozlowski, Steven Price, open list:ARM/Rockchip SoC...,
	Luben Tuikov, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	The etnaviv authors, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Hans de Goede, moderated list:DRM DRIVERS FOR XEN,
	open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM, amd-gfx list, Tomeu Vizoso,
	Seung-Woo Kim, Sandy Huang, Kyungmin Park, Qinglang Miao,
	Kukjin Kim, Thomas Zimmermann, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK

On Wed, Oct 7, 2020 at 3:25 PM Christian König <christian.koenig@amd.com> wrote:
>
> Am 07.10.20 um 15:20 schrieb Thomas Zimmermann:
> > Hi
> >
> > Am 07.10.20 um 15:10 schrieb Daniel Vetter:
> >> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >>> Hi
> >>>
> >>> Am 02.10.20 um 11:58 schrieb Daniel Vetter:
> >>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
> >>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
> >>>>> <christian.koenig@amd.com> wrote:
> >>>>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter:
> >>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
> >>>>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann:
> >>>>>>>>> Hi
> >>>>>>>>>
> >>>>>>>>> Am 30.09.20 um 10:05 schrieb Christian König:
> >>>>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann:
> >>>>>>>>>>> Hi Christian
> >>>>>>>>>>>
> >>>>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian König:
> >>>>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann:
> >>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
> >>>>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> >>>>>>>>>>>>> with these values. Helpful for TTM-based drivers.
> >>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
> >>>>>>>>>>>> well.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Additional to that which driver is going to use this?
> >>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
> >>>>>>>>>>> retrieve the pointer via this function.
> >>>>>>>>>>>
> >>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
> >>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
> >>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> >>>>>>>>>> I should have asked which driver you try to fix here :)
> >>>>>>>>>>
> >>>>>>>>>> In this case just keep the function inside bochs and only fix it there.
> >>>>>>>>>>
> >>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
> >>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM
> >>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
> >>>>>>>>> have to duplicate the functionality in each of these drivers. Bochs
> >>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
> >>>>>>>> Ah, ok can we have that then only in the VRAM helpers?
> >>>>>>>>
> >>>>>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
> >>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
> >>>>>>>>
> >>>>>>>> What I want to avoid is to have another conversion function in TTM because
> >>>>>>>> what happens here is that we already convert from ttm_bus_placement to
> >>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
> >>>>>>> Hm I'm not really seeing how that helps with a gradual conversion of
> >>>>>>> everything over to dma_buf_map and assorted helpers for access? There's
> >>>>>>> too many places in ttm drivers where is_iomem and related stuff is used to
> >>>>>>> be able to convert it all in one go. An intermediate state with a bunch of
> >>>>>>> conversions seems fairly unavoidable to me.
> >>>>>> Fair enough. I would just have started bottom up and not top down.
> >>>>>>
> >>>>>> Anyway feel free to go ahead with this approach as long as we can remove
> >>>>>> the new function again when we clean that stuff up for good.
> >>>>> Yeah I guess bottom up would make more sense as a refactoring. But the
> >>>>> main motivation to land this here is to fix the __mmio vs normal
> >>>>> memory confusion in the fbdev emulation helpers for sparc (and
> >>>>> anything else that needs this). Hence the top down approach for
> >>>>> rolling this out.
> >>>> Ok I started reviewing this a bit more in-depth, and I think this is a bit
> >>>> too much of a detour.
> >>>>
> >>>> Looking through all the callers of ttm_bo_kmap almost everyone maps the
> >>>> entire object. Only vmwgfx uses it to map less than that. Also, everyone just
> >>>> immediately follows up with converting that full object map into a
> >>>> pointer.
> >>>>
> >>>> So I think what we really want here is:
> >>>> - new function
> >>>>
> >>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> >>>>
> >>>>    _vmap name since that's consistent with both dma_buf functions and
> >>>>    what's usually used to implement this. Outside of the ttm world kmap
> >>>>    usually just means single-page mappings using kmap() or its iomem
> >>>>    sibling io_mapping_map*, so it's a rather confusing name for a function which
> >>>>    usually is just used to set up a vmap of the entire buffer.
> >>>>
> >>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
> >>>>    functions for all ttm drivers. We should be able to make this fully
> >>>>    generic because a) we now have dma_buf_map and b) drm_gem_object is
> >>>>    embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
> >>>>    and gem driver.
> >>>>
> >>>>    This is maybe a good follow-up, since it should allow us to ditch quite
> >>>>    a bit of the vram helper code for this more generic stuff. I also might
> >>>>    have missed some special-cases here, but from a quick look everything
> >>>>    just pins the buffer to the current location and that's it.
> >>>>
> >>>>    Also this obviously requires Christian's generic ttm_bo_pin rework
> >>>>    first.
> >>>>
> >>>> - roll the above out to drivers.
> >>>>
> >>>> Christian/Thomas, thoughts on this?
> >>> I agree on the goals, but what is the immediate objective here?
> >>>
> >>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
> >>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
> >>> more internal state than struct dma_buf_map, so they are not easily
> >>> convertible either. What you propose seems to require a reimplementation
> >>> of the existing ttm_bo_kmap() code. That is its own patch series.
> >>>
> >>> I'd rather go with some variant of the existing patch and add
> >>> ttm_bo_vmap() in a follow-up.
> >> ttm_bo_vmap would simply wrap what you currently open-code as
> >> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would
> >> be a much later step. Why do you think adding ttm_bo_vmap is not
> >> possible?
> > The calls to ttm_bo_kmap/_kunmap() require an instance of struct
> > ttm_bo_kmap_obj that is stored in each driver's private bo structure
> > (e.g., struct drm_gem_vram_object, struct radeon_bo, etc). When I made
> > patch 3, I flirted with the idea of unifying the driver's _vmap code in
> > a shared helper, but I couldn't find a simple way of doing it. That's
> > why it's open-coded in the first place.

Yeah we'd need a ttm_bo_vunmap I guess to make this work. Which
shouldn't be more than a few lines, but maybe too much to do in this
series.
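
As a sketch, assuming ttm_bo_vmap() leaves enough information in the
dma_buf_map to pick the right unmap path (the premapped and single-page
cases that ttm_bo_kunmap() handles are glossed over here):

void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
{
	if (dma_buf_map_is_null(map))
		return;

	if (map->is_iomem)
		iounmap(map->vaddr_iomem);
	else
		vunmap(map->vaddr);

	dma_buf_map_clear(map);
}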

> Well that makes kind of sense. Keep in mind that ttm_bo_kmap is
> currently way too complicated.

Yeah, simplifying this into a ttm_bo_vmap on one side, and a simple
1-page kmap helper on the other should help a lot.
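
The 1-page helper could then be as dumb as this (name and shape made up
here, just to show the split):

static void *ttm_bo_kmap_page(struct ttm_buffer_object *bo,
			      unsigned long page_index,
			      struct ttm_bo_kmap_obj *kmap)
{
	bool is_iomem;

	if (ttm_bo_kmap(bo, page_index, 1, kmap))
		return NULL;

	return ttm_kmap_obj_virtual(kmap, &is_iomem);
}
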
-Daniel

>
> Christian.
>
> >
> > Best regards
> > Thomas
> >
> >> -Daniel
> >>
> >>
> >>> Best regards
> >>> Thomas
> >>>
> >>>> I think for the immediate need of rolling this out for vram helpers and
> >>>> fbdev code we should be able to do this, but just postpone the driver wide
> >>>> roll-out for now.
> >>>>
> >>>> Cheers, Daniel
> >>>>
> >>>>> -Daniel
> >>>>>
> >>>>>> Christian.
> >>>>>>
> >>>>>>> -Daniel
> >>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>> Christian.
> >>>>>>>>
> >>>>>>>>> Best regards
> >>>>>>>>> Thomas
> >>>>>>>>>
> >>>>>>>>>> Regards,
> >>>>>>>>>> Christian.
> >>>>>>>>>>
> >>>>>>>>>>> Best regards
> >>>>>>>>>>> Thomas
> >>>>>>>>>>>
> >>>>>>>>>>>> Regards,
> >>>>>>>>>>>> Christian.
> >>>>>>>>>>>>
> >>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >>>>>>>>>>>>> ---
> >>>>>>>>>>>>>      include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> >>>>>>>>>>>>>      include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> >>>>>>>>>>>>>      2 files changed, 44 insertions(+)
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
> >>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>>>>> @@ -34,6 +34,7 @@
> >>>>>>>>>>>>>      #include <drm/drm_gem.h>
> >>>>>>>>>>>>>      #include <drm/drm_hashtab.h>
> >>>>>>>>>>>>>      #include <drm/drm_vma_manager.h>
> >>>>>>>>>>>>> +#include <linux/dma-buf-map.h>
> >>>>>>>>>>>>>      #include <linux/kref.h>
> >>>>>>>>>>>>>      #include <linux/list.h>
> >>>>>>>>>>>>>      #include <linux/wait.h>
> >>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
> >>>>>>>>>>>>> ttm_bo_kmap_obj *map,
> >>>>>>>>>>>>>          return map->virtual;
> >>>>>>>>>>>>>      }
> >>>>>>>>>>>>>      +/**
> >>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> >>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> >>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
> >>>>>>>>>>>>> + */
> >>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
> >>>>>>>>>>>>> *kmap,
> >>>>>>>>>>>>> +                           struct dma_buf_map *map)
> >>>>>>>>>>>>> +{
> >>>>>>>>>>>>> +    bool is_iomem;
> >>>>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>> +    if (!vaddr)
> >>>>>>>>>>>>> +        dma_buf_map_clear(map);
> >>>>>>>>>>>>> +    else if (is_iomem)
> >>>>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> >>>>>>>>>>>>> +    else
> >>>>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
> >>>>>>>>>>>>> +}
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>>      /**
> >>>>>>>>>>>>>       * ttm_bo_kmap
> >>>>>>>>>>>>>       *
> >>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> >>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
> >>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h
> >>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
> >>>>>>>>>>>>> @@ -45,6 +45,12 @@
> >>>>>>>>>>>>>       *
> >>>>>>>>>>>>>       *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> >>>>>>>>>>>>>       *
> >>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>> + * .. code-block:: c
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>>       * Test if a mapping is valid with either dma_buf_map_is_set() or
> >>>>>>>>>>>>>       * dma_buf_map_is_null().
> >>>>>>>>>>>>>       *
> >>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> >>>>>>>>>>>>> dma_buf_map *map, void *vaddr)
> >>>>>>>>>>>>>          map->is_iomem = false;
> >>>>>>>>>>>>>      }
> >>>>>>>>>>>>>      +/**
> >>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> >>>>>>>>>>>>> an address in I/O memory
> >>>>>>>>>>>>> + * @map:        The dma-buf mapping structure
> >>>>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>> + * Sets the address and the I/O-memory flag.
> >>>>>>>>>>>>> + */
> >>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> >>>>>>>>>>>>> +                           void __iomem *vaddr_iomem)
> >>>>>>>>>>>>> +{
> >>>>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
> >>>>>>>>>>>>> +    map->is_iomem = true;
> >>>>>>>>>>>>> +}
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>>      /**
> >>>>>>>>>>>>>       * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> >>>>>>>>>>>>> for equality
> >>>>>>>>>>>>>       * @lhs:    The dma-buf mapping structure
> >>>>>>>>>>>> _______________________________________________
> >>>>>>>>>>>> dri-devel mailing list
> >>>>>>>>>>>> dri-devel@lists.freedesktop.org
> >>>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>>>>>>>>>> _______________________________________________
> >>>>>>>>>>> amd-gfx mailing list
> >>>>>>>>>>> amd-gfx@lists.freedesktop.org
> >>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >>>>>>>>>> _______________________________________________
> >>>>>>>>>> dri-devel mailing list
> >>>>>>>>>> dri-devel@lists.freedesktop.org
> >>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>>>>>>>>>
> >>>>>>>>> _______________________________________________
> >>>>>>>>> amd-gfx mailing list
> >>>>>>>>> amd-gfx@lists.freedesktop.org
> >>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >>>>>
> >>>>> --
> >>>>> Daniel Vetter
> >>>>> Software Engineer, Intel Corporation
> >>>>> http://blog.ffwll.ch
> >>> --
> >>> Thomas Zimmermann
> >>> Graphics Driver Developer
> >>> SUSE Software Solutions Germany GmbH
> >>> Maxfeldstr. 5, 90409 Nürnberg, Germany
> >>> (HRB 36809, AG Nürnberg)
> >>> Geschäftsführer: Felix Imendörffer
> >>>
> >>
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion
  2020-10-07 14:30                             ` Daniel Vetter
@ 2020-10-08  9:00                               ` Thomas Zimmermann
  0 siblings, 0 replies; 33+ messages in thread
From: Thomas Zimmermann @ 2020-10-08  9:00 UTC (permalink / raw)
  To: Daniel Vetter, Christian König
  Cc: Dave Airlie, Nouveau Dev, dri-devel, Wilson, Chris, Melissa Wen,
	Huang Rui, Gerd Hoffmann, Alex Deucher, Sam Ravnborg,
	Emil Velikov, linux-samsung-soc, Joonyoung Shim, lima,
	Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Luben Tuikov, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	The etnaviv authors, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Hans de Goede, moderated list:DRM DRIVERS FOR XEN,
	open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM, amd-gfx list, Tomeu Vizoso,
	Seung-Woo Kim, Sandy Huang, Kyungmin Park, Qinglang Miao,
	Qiang Yu, Kukjin Kim, open list:DMA BUFFER SHARING FRAMEWORK



Hi

Am 07.10.20 um 16:30 schrieb Daniel Vetter:
> On Wed, Oct 7, 2020 at 3:25 PM Christian König <christian.koenig@amd.com> wrote:
>>
>> Am 07.10.20 um 15:20 schrieb Thomas Zimmermann:
>>> Hi
>>>
>>> Am 07.10.20 um 15:10 schrieb Daniel Vetter:
>>>> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>>>>> Hi
>>>>>
>>>>> Am 02.10.20 um 11:58 schrieb Daniel Vetter:
>>>>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
>>>>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
>>>>>>> <christian.koenig@amd.com> wrote:
>>>>>>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter:
>>>>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>>>>>>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann:
>>>>>>>>>>> Hi
>>>>>>>>>>>
>>>>>>>>>>> Am 30.09.20 um 10:05 schrieb Christian König:
>>>>>>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann:
>>>>>>>>>>>>> Hi Christian
>>>>>>>>>>>>>
>>>>>>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian König:
>>>>>>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann:
>>>>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
>>>>>>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>>>>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>>>>>>>>>> well.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Additional to that which driver is going to use this?
>>>>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>>>>>>>>>> retrieve the pointer via this function.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>>>>>>>>>> I should have asked which driver you try to fix here :)
>>>>>>>>>>>>
>>>>>>>>>>>> In this case just keep the function inside bochs and only fix it there.
>>>>>>>>>>>>
>>>>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
>>>>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM
>>>>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>>>>>>>>>> have to duplicate the functionality in each of these drivers. Bochs
>>>>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
>>>>>>>>>> Ah, ok can we have that then only in the VRAM helpers?
>>>>>>>>>>
>>>>>>>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>>>>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>>>>>>>>>
>>>>>>>>>> What I want to avoid is to have another conversion function in TTM because
>>>>>>>>>> what happens here is that we already convert from ttm_bus_placement to
>>>>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
>>>>>>>>> Hm I'm not really seeing how that helps with a gradual conversion of
>>>>>>>>> everything over to dma_buf_map and assorted helpers for access? There's
>>>>>>>>> too many places in ttm drivers where is_iomem and related stuff is used to
>>>>>>>>> be able to convert it all in one go. An intermediate state with a bunch of
>>>>>>>>> conversions seems fairly unavoidable to me.
>>>>>>>> Fair enough. I would just have started bottom up and not top down.
>>>>>>>>
>>>>>>>> Anyway feel free to go ahead with this approach as long as we can remove
>>>>>>>> the new function again when we clean that stuff up for good.
>>>>>>> Yeah I guess bottom up would make more sense as a refactoring. But the
>>>>>>> main motivation to land this here is to fix the __mmio vs normal
>>>>>>> memory confusion in the fbdev emulation helpers for sparc (and
>>>>>>> anything else that needs this). Hence the top down approach for
>>>>>>> rolling this out.
>>>>>> Ok I started reviewing this a bit more in-depth, and I think this is a bit
>>>>>> too much of a detour.
>>>>>>
>>>>>> Looking through all the callers of ttm_bo_kmap almost everyone maps the
>>>>>> entire object. Only vmwgfx maps less than that. Also, everyone just
>>>>>> immediately follows up with converting that full object map into a
>>>>>> pointer.
>>>>>>
>>>>>> So I think what we really want here is:
>>>>>> - new function
>>>>>>
>>>>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>>>>
>>>>>>    _vmap name since that's consistent with both dma_buf functions and
>>>>>>    what's usually used to implement this. Outside of the ttm world kmap
>>>>>>    usually just means single-page mappings using kmap() or its iomem
>>>>>>    sibling io_mapping_map*, so it's a rather confusing name for a function which
>>>>>>    usually is just used to set up a vmap of the entire buffer.
>>>>>>
>>>>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
>>>>>>    functions for all ttm drivers. We should be able to make this fully
>>>>>>    generic because a) we now have dma_buf_map and b) drm_gem_object is
>>>>>>    embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
>>>>>>    and gem driver.
>>>>>>
>>>>>>    This is maybe a good follow-up, since it should allow us to ditch quite
>>>>>>    a bit of the vram helper code for this more generic stuff. I also might
>>>>>>    have missed some special-cases here, but from a quick look everything
>>>>>>    just pins the buffer to the current location and that's it.
>>>>>>
>>>>>>    Also this obviously requires Christian's generic ttm_bo_pin rework
>>>>>>    first.
>>>>>>
>>>>>> - roll the above out to drivers.
>>>>>>
>>>>>> Christian/Thomas, thoughts on this?
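
(For illustration, the upcast in the second point above could look roughly
like this; drm_gem_ttm_vmap() is a hypothetical name, and ttm_bo_vmap() is
the function proposed above, not an existing interface:)

	static int drm_gem_ttm_vmap(struct drm_gem_object *gem,
				    struct dma_buf_map *map)
	{
		/* TTM-based GEM drivers embed the GEM object in the ttm_bo */
		struct ttm_buffer_object *bo =
			container_of(gem, struct ttm_buffer_object, base);

		return ttm_bo_vmap(bo, map);
	}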
>>>>> I agree on the goals, but what is the immediate objective here?
>>>>>
>>>>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
>>>>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
>>>>> more internal state than struct dma_buf_map, so they are not easily
>>>>> convertible either. What you propose seems to require a reimplementation
>>>>> of the existing ttm_bo_kmap() code. That is its own patch series.
>>>>>
>>>>> I'd rather go with some variant of the existing patch and add
>>>>> ttm_bo_vmap() in a follow-up.
>>>> ttm_bo_vmap would simply wrap what you currently open-code as
>>>> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would
>>>> be a much later step. Why do you think adding ttm_bo_vmap is not
>>>> possible?
>>> The calls to ttm_bo_kmap/_kunmap() require an instance of struct
>>> ttm_bo_kmap_obj that is stored in each driver's private bo structure
>>> (e.g., struct drm_gem_vram_object, struct radeon_bo, etc). When I made
>>> patch 3, I flirted with the idea of unifying the driver's _vmap code in
>>> a shared helper, but I couldn't find a simple way of doing it. That's
>>> why it's open-coded in the first place.
> 
> Yeah we'd need a ttm_bo_vunmap I guess to make this work. Which
> shouldn't be more than a few lines, but maybe too much to do in this
> series.
> 
>> Well that makes kind of sense. Keep in mind that ttm_bo_kmap is
>> currently way too complicated.
> 
> Yeah, simplifying this into a ttm_bo_vmap on one side, and a simple
> 1-page kmap helper on the other should help a lot.

I'm not too happy about the plan, but I'll send out something like this
in the next iteration.
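
A rough sketch of that direction, purely illustrative: it still uses struct
ttm_bo_kmap_obj as the backing state, which the driver would have to keep
around for unmapping, so this is not the final interface.

	int ttm_bo_vmap(struct ttm_buffer_object *bo,
			struct ttm_bo_kmap_obj *kmap, struct dma_buf_map *map)
	{
		int ret;

		/* map the whole object, as almost all callers do anyway */
		ret = ttm_bo_kmap(bo, 0, bo->num_pages, kmap);
		if (ret)
			return ret;
		ttm_kmap_obj_to_dma_buf_map(kmap, map);

		return 0;
	}

	void ttm_bo_vunmap(struct ttm_bo_kmap_obj *kmap)
	{
		ttm_bo_kunmap(kmap);
	}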

Best regards
Thomas

> -Daniel
> 
>>
>> Christian.
>>
>>>
>>> Best regards
>>> Thomas
>>>
>>>> -Daniel
>>>>
>>>>
>>>>> Best regards
>>>>> Thomas
>>>>>
>>>>>> I think for the immediate need of rolling this out for vram helpers and
>>>>>> fbdev code we should be able to do this, but just postpone the driver wide
>>>>>> roll-out for now.
>>>>>>
>>>>>> Cheers, Daniel
>>>>>>
>>>>>>> -Daniel
>>>>>>>
>>>>>>>> Christian.
>>>>>>>>
>>>>>>>>> -Daniel
>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Christian.
>>>>>>>>>>
>>>>>>>>>>> Best regards
>>>>>>>>>>> Thomas
>>>>>>>>>>>
>>>>>>>>>>>> Regards,
>>>>>>>>>>>> Christian.
>>>>>>>>>>>>
>>>>>>>>>>>>> Best regards
>>>>>>>>>>>>> Thomas
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>> Christian.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>      include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>>>>>>>>>      include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>>>>>>>>>      2 files changed, 44 insertions(+)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>>>>>>>>>      #include <drm/drm_gem.h>
>>>>>>>>>>>>>>>      #include <drm/drm_hashtab.h>
>>>>>>>>>>>>>>>      #include <drm/drm_vma_manager.h>
>>>>>>>>>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>>>>>>>>>      #include <linux/kref.h>
>>>>>>>>>>>>>>>      #include <linux/list.h>
>>>>>>>>>>>>>>>      #include <linux/wait.h>
>>>>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>>>>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>>>>>>>>>          return map->virtual;
>>>>>>>>>>>>>>>      }
>>>>>>>>>>>>>>>      +/**
>>>>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>>>>>>>>>> + */
>>>>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>>>>>>>>>> *kmap,
>>>>>>>>>>>>>>> +                           struct dma_buf_map *map)
>>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>>> +    bool is_iomem;
>>>>>>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>> +    if (!vaddr)
>>>>>>>>>>>>>>> +        dma_buf_map_clear(map);
>>>>>>>>>>>>>>> +    else if (is_iomem)
>>>>>>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>>>>>>>>>> +    else
>>>>>>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>      /**
>>>>>>>>>>>>>>>       * ttm_bo_kmap
>>>>>>>>>>>>>>>       *
>>>>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>>>>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>>>>>>>>>       *
>>>>>>>>>>>>>>>       *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>>>>>>>>>>>>>       *
>>>>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>> + * .. code-block:: c
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>>       * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>>>>>>>>>       * dma_buf_map_is_null().
>>>>>>>>>>>>>>>       *
>>>>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>>>>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>>>>>>>>>          map->is_iomem = false;
>>>>>>>>>>>>>>>      }
>>>>>>>>>>>>>>>      +/**
>>>>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>>>>>>>>>> an address in I/O memory
>>>>>>>>>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>>>>>>>>>> + */
>>>>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>>>>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>>>>>>>>>> +    map->is_iomem = true;
>>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>      /**
>>>>>>>>>>>>>>>       * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>>>>>>>>>> for equality
>>>>>>>>>>>>>>>       * @lhs:    The dma-buf mapping structure
>>>>>>>
>>>>>>> --
>>>>>>> Daniel Vetter
>>>>>>> Software Engineer, Intel Corporation
>>>>>>> http://blog.ffwll.ch
>>>>> --
>>>>> Thomas Zimmermann
>>>>> Graphics Driver Developer
>>>>> SUSE Software Solutions Germany GmbH
>>>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>>>> (HRB 36809, AG Nürnberg)
>>>>> Geschäftsführer: Felix Imendörffer
>>>>>
>>>>
>>
> 
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 516 bytes --]

[-- Attachment #2: Type: text/plain, Size: 170 bytes --]

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory
  2020-10-02 18:44     ` Daniel Vetter
@ 2020-10-08  9:25       ` Thomas Zimmermann
  2020-10-08  9:35         ` Daniel Vetter
  0 siblings, 1 reply; 33+ messages in thread
From: Thomas Zimmermann @ 2020-10-08  9:25 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Luben Tuikov, Heiko Stübner, Dave Airlie, Nouveau Dev,
	Linus Walleij, dri-devel, Wilson, Chris, Melissa Wen, Anholt,
	Eric, Huang Rui, Gerd Hoffmann, Sam Ravnborg, Sumit Semwal,
	Emil Velikov, Rob Herring, linux-samsung-soc, Joonyoung Shim,
	lima, Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Kukjin Kim, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	Maarten Lankhorst, The etnaviv authors, Maxime Ripard, Inki Dae,
	Hans de Goede, Christian Gmeiner,
	moderated list:DRM DRIVERS FOR XEN, open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM,
	moderated list:DMA BUFFER SHARING FRAMEWORK, amd-gfx list,
	Tomeu Vizoso, Seung-Woo Kim, Sandy Huang, Kyungmin Park,
	Qinglang Miao, Qiang Yu, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK, Christian König,
	Lucas Stach


[-- Attachment #1.1.1: Type: text/plain, Size: 19530 bytes --]

Hi

Am 02.10.20 um 20:44 schrieb Daniel Vetter:
> On Fri, Oct 2, 2020 at 8:05 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>>
>> On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote:
>>> At least sparc64 requires I/O-specific access to framebuffers. This
>>> patch updates the fbdev console accordingly.
>>>
>>> For drivers with direct access to the framebuffer memory, the callback
>>> functions in struct fb_ops test for the type of memory and call the rsp
>>> fb_sys_ of fb_cfb_ functions.
>>>
>>> For drivers that employ a shadow buffer, fbdev's blit function retrieves
>>> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
>>> interfaces to access the buffer.
>>>
>>> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
>>> I/O memory and avoid a HW exception. With the introduction of struct
>>> dma_buf_map, this is not required any longer. The patch removes the
>>> respective code from both bochs and fbdev.
>>>
>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> 
> Argh, I accidentally hit send before finishing this ...
> 
>>> ---
>>>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
>>>  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
>>>  include/drm/drm_mode_config.h     |  12 --
>>>  include/linux/dma-buf-map.h       |  72 ++++++++--
>>>  4 files changed, 265 insertions(+), 37 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
>>> index 13d0d04c4457..853081d186d5 100644
>>> --- a/drivers/gpu/drm/bochs/bochs_kms.c
>>> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
>>> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
>>>       bochs->dev->mode_config.preferred_depth = 24;
>>>       bochs->dev->mode_config.prefer_shadow = 0;
>>>       bochs->dev->mode_config.prefer_shadow_fbdev = 1;
>>> -     bochs->dev->mode_config.fbdev_use_iomem = true;
>>>       bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>>>
>>>       bochs->dev->mode_config.funcs = &bochs_mode_funcs;
>>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
>>> index 343a292f2c7c..f345a314a437 100644
>>> --- a/drivers/gpu/drm/drm_fb_helper.c
>>> +++ b/drivers/gpu/drm/drm_fb_helper.c
>>> @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
>>>  }
>>>
>>>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
>>> -                                       struct drm_clip_rect *clip)
>>> +                                       struct drm_clip_rect *clip,
>>> +                                       struct dma_buf_map *dst)
>>>  {
>>>       struct drm_framebuffer *fb = fb_helper->fb;
>>>       unsigned int cpp = fb->format->cpp[0];
>>>       size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>>>       void *src = fb_helper->fbdev->screen_buffer + offset;
>>> -     void *dst = fb_helper->buffer->map.vaddr + offset;
>>>       size_t len = (clip->x2 - clip->x1) * cpp;
>>>       unsigned int y;
>>>
>>> -     for (y = clip->y1; y < clip->y2; y++) {
>>> -             if (!fb_helper->dev->mode_config.fbdev_use_iomem)
>>> -                     memcpy(dst, src, len);
>>> -             else
>>> -                     memcpy_toio((void __iomem *)dst, src, len);
>>> +     dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>>>
>>> +     for (y = clip->y1; y < clip->y2; y++) {
>>> +             dma_buf_map_memcpy_to(dst, src, len);
>>> +             dma_buf_map_incr(dst, fb->pitches[0]);
>>>               src += fb->pitches[0];
>>> -             dst += fb->pitches[0];
>>>       }
>>>  }
>>>
>>> @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>>>                       ret = drm_client_buffer_vmap(helper->buffer, &map);
>>>                       if (ret)
>>>                               return;
>>> -                     drm_fb_helper_dirty_blit_real(helper, &clip_copy);
>>> +                     drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
>>>               }
>>> +
>>>               if (helper->fb->funcs->dirty)
>>>                       helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
>>>                                                &clip_copy, 1);
>>> @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
>>>  }
>>>  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>>>
>>> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
>>> +                                   size_t count, loff_t *ppos)
>>> +{
>>> +     unsigned long p = *ppos;
>>> +     u8 *dst;
>>> +     u8 __iomem *src;
>>> +     int c, err = 0;
>>> +     unsigned long total_size;
>>> +     unsigned long alloc_size;
>>> +     ssize_t ret = 0;
>>> +
>>> +     if (info->state != FBINFO_STATE_RUNNING)
>>> +             return -EPERM;
>>> +
>>> +     total_size = info->screen_size;
>>> +
>>> +     if (total_size == 0)
>>> +             total_size = info->fix.smem_len;
>>> +
>>> +     if (p >= total_size)
>>> +             return 0;
>>> +
>>> +     if (count >= total_size)
>>> +             count = total_size;
>>> +
>>> +     if (count + p > total_size)
>>> +             count = total_size - p;
>>> +
>>> +     src = (u8 __iomem *)(info->screen_base + p);
>>> +
>>> +     alloc_size = min(count, PAGE_SIZE);
>>> +
>>> +     dst = kmalloc(alloc_size, GFP_KERNEL);
>>> +     if (!dst)
>>> +             return -ENOMEM;
>>> +
>>> +     while (count) {
>>> +             c = min(count, alloc_size);
>>> +
>>> +             memcpy_fromio(dst, src, c);
>>> +             if (copy_to_user(buf, dst, c)) {
>>> +                     err = -EFAULT;
>>> +                     break;
>>> +             }
>>> +
>>> +             src += c;
>>> +             *ppos += c;
>>> +             buf += c;
>>> +             ret += c;
>>> +             count -= c;
>>> +     }
>>> +
>>> +     kfree(dst);
>>> +
>>> +     if (err)
>>> +             return err;
>>> +
>>> +     return ret;
>>> +}
>>> +
>>> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
>>> +                                    size_t count, loff_t *ppos)
>>> +{
>>> +     unsigned long p = *ppos;
>>> +     u8 *src;
>>> +     u8 __iomem *dst;
>>> +     int c, err = 0;
>>> +     unsigned long total_size;
>>> +     unsigned long alloc_size;
>>> +     ssize_t ret = 0;
>>> +
>>> +     if (info->state != FBINFO_STATE_RUNNING)
>>> +             return -EPERM;
>>> +
>>> +     total_size = info->screen_size;
>>> +
>>> +     if (total_size == 0)
>>> +             total_size = info->fix.smem_len;
>>> +
>>> +     if (p > total_size)
>>> +             return -EFBIG;
>>> +
>>> +     if (count > total_size) {
>>> +             err = -EFBIG;
>>> +             count = total_size;
>>> +     }
>>> +
>>> +     if (count + p > total_size) {
>>> +             /*
>>> +              * The framebuffer is too small. We do the
>>> +              * copy operation, but return an error code
>>> +              * afterwards. Taken from fbdev.
>>> +              */
>>> +             if (!err)
>>> +                     err = -ENOSPC;
>>> +             count = total_size - p;
>>> +     }
>>> +
>>> +     alloc_size = min(count, PAGE_SIZE);
>>> +
>>> +     src = kmalloc(alloc_size, GFP_KERNEL);
>>> +     if (!src)
>>> +             return -ENOMEM;
>>> +
>>> +     dst = (u8 __iomem *)(info->screen_base + p);
>>> +
>>> +     while (count) {
>>> +             c = min(count, alloc_size);
>>> +
>>> +             if (copy_from_user(src, buf, c)) {
>>> +                     err = -EFAULT;
>>> +                     break;
>>> +             }
>>> +             memcpy_toio(dst, src, c);
>>> +
>>> +             dst += c;
>>> +             *ppos += c;
>>> +             buf += c;
>>> +             ret += c;
>>> +             count -= c;
>>> +     }
>>> +
>>> +     kfree(src);
>>> +
>>> +     if (err)
>>> +             return err;
>>> +
>>> +     return ret;
>>> +}
> 
> The duplication is a bit annoying here, but can't really be avoided. I
> do think though we should maybe go a bit further, and have drm
> implementations of this stuff instead of following fbdev concepts as
> closely as possible. So here roughly:
> 
> - if we have a shadow fb, construct a dma_buf_map for that, otherwise
> take the one from the driver
> - have a full generic implementation using that one directly (and
> checking size limits against the underlying gem buffer)
> - ideally also with some testcases in the fbdev testcase we have (very
> bare-bones right now) in igt
> 
> But I'm not really sure whether that's worth all the trouble. It's
> just that the fbdev-ness here in this copied code sticks out a lot :-)
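
(A sketch of the first point above, using only helpers that appear elsewhere
in this patch; the function name is made up:)

	static void drm_fbdev_get_map(struct drm_fb_helper *fb_helper,
				      struct dma_buf_map *map)
	{
		if (drm_fbdev_use_shadow_fb(fb_helper))
			/* the shadow buffer always lives in system memory */
			dma_buf_map_set_vaddr(map, fb_helper->fbdev->screen_buffer);
		else
			*map = fb_helper->buffer->map;
	}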
> 
>>> +
>>>  /**
>>>   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
>>>   * @info: fbdev registered by the helper
>>> @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>>>               return -ENODEV;
>>>  }
>>>
>>> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
>>> +                              size_t count, loff_t *ppos)
>>> +{
>>> +     struct drm_fb_helper *fb_helper = info->par;
>>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
>>> +
>>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
>>> +             return drm_fb_helper_sys_read(info, buf, count, ppos);
>>> +     else
>>> +             return drm_fb_helper_cfb_read(info, buf, count, ppos);
>>> +}
>>> +
>>> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
>>> +                               size_t count, loff_t *ppos)
>>> +{
>>> +     struct drm_fb_helper *fb_helper = info->par;
>>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
>>> +
>>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
>>> +             return drm_fb_helper_sys_write(info, buf, count, ppos);
>>> +     else
>>> +             return drm_fb_helper_cfb_write(info, buf, count, ppos);
>>> +}
>>> +
>>> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
>>> +                               const struct fb_fillrect *rect)
>>> +{
>>> +     struct drm_fb_helper *fb_helper = info->par;
>>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
>>> +
>>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
>>> +             drm_fb_helper_sys_fillrect(info, rect);
>>> +     else
>>> +             drm_fb_helper_cfb_fillrect(info, rect);
>>> +}
>>> +
>>> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
>>> +                               const struct fb_copyarea *area)
>>> +{
>>> +     struct drm_fb_helper *fb_helper = info->par;
>>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
>>> +
>>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
>>> +             drm_fb_helper_sys_copyarea(info, area);
>>> +     else
>>> +             drm_fb_helper_cfb_copyarea(info, area);
>>> +}
>>> +
>>> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
>>> +                                const struct fb_image *image)
>>> +{
>>> +     struct drm_fb_helper *fb_helper = info->par;
>>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
>>> +
>>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
>>> +             drm_fb_helper_sys_imageblit(info, image);
>>> +     else
>>> +             drm_fb_helper_cfb_imageblit(info, image);
>>> +}
> 
> I think a todo.rst entry about making the new generic functions the real
> ones, with drivers no longer using the sys/cfb ones, would be a good addition.
> It's kinda covered by the move to the generic helpers, but maybe we
> can convert a few more drivers over to these here. Would also allow us
> to maybe flatten the code a bit and use more of the dma_buf_map stuff
> directly (instead of reusing crusty fbdev code written 20 years ago or
> so).

I wouldn't mind doing our own thing, but dma_buf_map is not a good fit
here. Mostly because the _cfb_ code first does a read from I/O to
system memory, and then copies to userspace. The _sys_ functions copy
directly to userspace. (Same for write, but in the other direction.)

There's some code at the top and bottom of these functions that could be
shared. If we want to share the copy loops, we'd probably end up with
additional memcpys in the _sys_ case.
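
To make that concrete, a shared read loop over struct dma_buf_map would look
roughly like the sketch below. It is illustrative only: a
dma_buf_map_memcpy_from() helper does not exist in this series, and the size
and bounds checks of the real functions are omitted.

	/* hypothetical counterpart to dma_buf_map_memcpy_to() */
	static void dma_buf_map_memcpy_from(void *dst, const struct dma_buf_map *src,
					    size_t len)
	{
		if (src->is_iomem)
			memcpy_fromio(dst, src->vaddr_iomem, len);
		else
			memcpy(dst, src->vaddr, len); /* the extra copy for sysmem */
	}

	static ssize_t drm_fb_helper_read_map(struct dma_buf_map *map, char __user *buf,
					      size_t count, loff_t *ppos)
	{
		size_t alloc_size = min_t(size_t, count, PAGE_SIZE);
		u8 *tmp;
		ssize_t ret = 0;
		int err = 0;

		tmp = kmalloc(alloc_size, GFP_KERNEL);
		if (!tmp)
			return -ENOMEM;

		dma_buf_map_incr(map, *ppos); /* seek to the read position */

		while (count) {
			size_t c = min_t(size_t, count, alloc_size);

			/* bounce through 'tmp' even when the source is sysmem */
			dma_buf_map_memcpy_from(tmp, map, c);
			if (copy_to_user(buf, tmp, c)) {
				err = -EFAULT;
				break;
			}

			dma_buf_map_incr(map, c);
			*ppos += c;
			buf += c;
			ret += c;
			count -= c;
		}

		kfree(tmp);

		return err ? err : ret;
	}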

Best regards
Thomas

> 
>>> +
>>>  static const struct fb_ops drm_fbdev_fb_ops = {
>>>       .owner          = THIS_MODULE,
>>>       DRM_FB_HELPER_DEFAULT_OPS,
>>> @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
>>>       .fb_release     = drm_fbdev_fb_release,
>>>       .fb_destroy     = drm_fbdev_fb_destroy,
>>>       .fb_mmap        = drm_fbdev_fb_mmap,
>>> -     .fb_read        = drm_fb_helper_sys_read,
>>> -     .fb_write       = drm_fb_helper_sys_write,
>>> -     .fb_fillrect    = drm_fb_helper_sys_fillrect,
>>> -     .fb_copyarea    = drm_fb_helper_sys_copyarea,
>>> -     .fb_imageblit   = drm_fb_helper_sys_imageblit,
>>> +     .fb_read        = drm_fbdev_fb_read,
>>> +     .fb_write       = drm_fbdev_fb_write,
>>> +     .fb_fillrect    = drm_fbdev_fb_fillrect,
>>> +     .fb_copyarea    = drm_fbdev_fb_copyarea,
>>> +     .fb_imageblit   = drm_fbdev_fb_imageblit,
>>>  };
>>>
>>>  static struct fb_deferred_io drm_fbdev_defio = {
>>> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
>>> index 5ffbb4ed5b35..ab424ddd7665 100644
>>> --- a/include/drm/drm_mode_config.h
>>> +++ b/include/drm/drm_mode_config.h
>>> @@ -877,18 +877,6 @@ struct drm_mode_config {
>>>        */
>>>       bool prefer_shadow_fbdev;
>>>
>>> -     /**
>>> -      * @fbdev_use_iomem:
>>> -      *
>>> -      * Set to true if framebuffer reside in iomem.
>>> -      * When set to true memcpy_toio() is used when copying the framebuffer in
>>> -      * drm_fb_helper.drm_fb_helper_dirty_blit_real().
>>> -      *
>>> -      * FIXME: This should be replaced with a per-mapping is_iomem
>>> -      * flag (like ttm does), and then used everywhere in fbdev code.
>>> -      */
>>> -     bool fbdev_use_iomem;
>>> -
>>>       /**
>>>        * @quirk_addfb_prefer_xbgr_30bpp:
>>>        *
>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> 
> I think the below should be split out as a prep patch.
> 
>>> index 2e8bbecb5091..6ca0f304dda2 100644
>>> --- a/include/linux/dma-buf-map.h
>>> +++ b/include/linux/dma-buf-map.h
>>> @@ -32,6 +32,14 @@
>>>   * accessing the buffer. Use the returned instance and the helper functions
>>>   * to access the buffer's memory in the correct way.
>>>   *
>>> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
>>> + * actually independent from the dma-buf infrastructure. When sharing buffers
>>> + * among devices, drivers have to know the location of the memory to access
>>> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
>>> + * solves this problem for dma-buf and its users. If other drivers or
>>> + * sub-systems require similar functionality, the type could be generalized
>>> + * and moved to a more prominent header file.
>>> + *
>>>   * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
>>>   * considered bad style. Rather than accessing its fields directly, use one
>>>   * of the provided helper functions, or implement your own. For example,
>>> @@ -51,6 +59,14 @@
>>>   *
>>>   *   dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>   *
>>> + * Instances of struct dma_buf_map do not have to be cleaned up, but
>>> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
>>> + * always refer to system memory.
>>> + *
>>> + * .. code-block:: c
>>> + *
>>> + *   dma_buf_map_clear(&map);
>>> + *
>>>   * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>   * dma_buf_map_is_null().
>>>   *
>>> @@ -73,17 +89,19 @@
>>>   *   if (dma_buf_map_is_equal(&sys_map, &io_map))
>>>   *           // always false
>>>   *
>>> - * Instances of struct dma_buf_map do not have to be cleaned up, but
>>> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
>>> - * always refer to system memory.
>>> + * A set up instance of struct dma_buf_map can be used to access or manipulate
>>> + * the buffer memory. Depending on the location of the memory, the provided
>>> + * helpers will pick the correct operations. Data can be copied into the memory
>>> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
>>> + * dma_buf_map_incr().
>>>   *
>>> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
>>> - * actually independent from the dma-buf infrastructure. When sharing buffers
>>> - * among devices, drivers have to know the location of the memory to access
>>> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
>>> - * solves this problem for dma-buf and its users. If other drivers or
>>> - * sub-systems require similar functionality, the type could be generalized
>>> - * and moved to a more prominent header file.
>>> + * .. code-block:: c
>>> + *
>>> + *   const void *src = ...; // source buffer
>>> + *   size_t len = ...; // length of src
>>> + *
>>> + *   dma_buf_map_memcpy_to(&map, src, len);
>>> + *   dma_buf_map_incr(&map, len); // go to first byte after the memcpy
>>>   */
>>>
>>>  /**
>>> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
>>>       }
>>>  }
>>>
>>> +/**
>>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
>>> + * @dst:     The dma-buf mapping structure
>>> + * @src:     The source buffer
>>> + * @len:     The number of bytes in src
>>> + *
>>> + * Copies data into a dma-buf mapping. The source buffer is in system
>>> + * memory. Depending on the buffer's location, the helper picks the correct
>>> + * method of accessing the memory.
>>> + */
>>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
>>> +{
>>> +     if (dst->is_iomem)
>>> +             memcpy_toio(dst->vaddr_iomem, src, len);
>>> +     else
>>> +             memcpy(dst->vaddr, src, len);
>>> +}
>>> +
>>> +/**
>>> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
>>> + * @map:     The dma-buf mapping structure
>>> + * @incr:    The number of bytes to increment
>>> + *
>>> + * Increments the address stored in a dma-buf mapping. Depending on the
>>> + * buffer's location, the correct value will be updated.
>>> + */
>>> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
>>> +{
>>> +     if (map->is_iomem)
>>> +             map->vaddr_iomem += incr;
>>> +     else
>>> +             map->vaddr += incr;
>>> +}
>>> +
>>>  #endif /* __DMA_BUF_MAP_H__ */
>>> --
>>> 2.28.0
> 
> Aside from the details I think looks all reasonable.
> -Daniel
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 516 bytes --]

[-- Attachment #2: Type: text/plain, Size: 170 bytes --]

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory
  2020-10-08  9:25       ` Thomas Zimmermann
@ 2020-10-08  9:35         ` Daniel Vetter
  0 siblings, 0 replies; 33+ messages in thread
From: Daniel Vetter @ 2020-10-08  9:35 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Luben Tuikov, Heiko Stübner, Dave Airlie, Nouveau Dev,
	Linus Walleij, dri-devel, Wilson, Chris, Melissa Wen, Anholt,
	Eric, Huang Rui, Gerd Hoffmann, Sam Ravnborg, Sumit Semwal,
	Emil Velikov, Rob Herring, linux-samsung-soc, Joonyoung Shim,
	lima, Oleksandr Andrushchenko, Krzysztof Kozlowski, Steven Price,
	open list:ARM/Rockchip SoC...,
	Kukjin Kim, Alyssa Rosenzweig, Russell King,
	open list:DRM DRIVER FOR QXL VIRTUAL GPU, Ben Skeggs,
	Maarten Lankhorst, The etnaviv authors, Maxime Ripard, Inki Dae,
	Hans de Goede, Christian Gmeiner,
	moderated list:DRM DRIVERS FOR XEN, open list:VIRTIO CORE, NET...,
	Sean Paul, apaneers, Linux ARM,
	moderated list:DMA BUFFER SHARING FRAMEWORK, amd-gfx list,
	Tomeu Vizoso, Seung-Woo Kim, Sandy Huang, Kyungmin Park,
	Qinglang Miao, Qiang Yu, Alex Deucher,
	open list:DMA BUFFER SHARING FRAMEWORK, Christian König,
	Lucas Stach

On Thu, Oct 8, 2020 at 11:25 AM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> Hi
>
> Am 02.10.20 um 20:44 schrieb Daniel Vetter:
> > I think a todo.rst entry about making the new generic functions the real
> > ones, with drivers no longer using the sys/cfb ones, would be a good addition.
> > It's kinda covered by the move to the generic helpers, but maybe we
> > can convert a few more drivers over to these here. Would also allow us
> > to maybe flatten the code a bit and use more of the dma_buf_map stuff
> > directly (instead of reusing crusty fbdev code written 20 years ago or
> > so).
>
> I wouldn't mind doing our own thing, but dma_buf_map is not a good fit
> here. Mostly because the _cfb_ code first does a read from I/O to
> system memory, and then copies to userspace. The _sys_ functions copy
> directly to userspace. (Same for write, but in the other direction.)
>
> There's some code at the top and bottom of these functions that could be
> shared. If we want to share the copy loops, we'd probably end up with
> additional memcpys in the _sys_ case.

Yeah I noticed that. I'd just ignore it. If someone is using a) fbdev
and b) read/write on it, they don't care much about performance. We
can do another copy or two, no problem. But the duplication is also ok
I guess, just a bit less pretty.
-Daniel

> Best regards
> Thomas
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2020-10-08  9:36 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-29 15:14 [PATCH v3 0/7] Support GEM object mappings from I/O memory Thomas Zimmermann
2020-09-29 15:14 ` [PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from internal kmap function Thomas Zimmermann
2020-10-02  9:48   ` Daniel Vetter
2020-09-29 15:14 ` [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion Thomas Zimmermann
2020-09-29 15:35   ` Christian König
2020-09-29 15:44     ` Daniel Vetter
2020-09-29 17:49     ` Thomas Zimmermann
     [not found]       ` <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
2020-09-30  8:19         ` Thomas Zimmermann
     [not found]           ` <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
2020-09-30  9:47             ` Daniel Vetter
2020-09-30 12:34               ` Christian König
2020-09-30 12:51                 ` Daniel Vetter
2020-10-02  9:58                   ` Daniel Vetter
2020-10-02 11:30                     ` Christian König
2020-10-02 12:21                       ` Daniel Vetter
2020-10-07 12:57                     ` Thomas Zimmermann
2020-10-07 13:10                       ` Daniel Vetter
2020-10-07 13:20                         ` Thomas Zimmermann
2020-10-07 13:24                           ` Christian König
2020-10-07 14:30                             ` Daniel Vetter
2020-10-08  9:00                               ` Thomas Zimmermann
2020-09-29 15:14 ` [PATCH v3 3/7] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends Thomas Zimmermann
2020-10-02 13:02   ` Daniel Vetter
2020-09-29 15:14 ` [PATCH v3 4/7] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map Thomas Zimmermann
2020-10-02 13:04   ` Daniel Vetter
2020-09-29 15:14 ` [PATCH v3 5/7] drm/gem: Store client buffer mappings as " Thomas Zimmermann
2020-10-02 13:05   ` Daniel Vetter
2020-09-29 15:14 ` [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory Thomas Zimmermann
2020-10-02 18:05   ` Daniel Vetter
2020-10-02 18:44     ` Daniel Vetter
2020-10-08  9:25       ` Thomas Zimmermann
2020-10-08  9:35         ` Daniel Vetter
2020-09-29 15:14 ` [PATCH v3 7/7] drm/todo: Update entries around struct dma_buf_map Thomas Zimmermann
2020-10-02 18:45   ` Daniel Vetter

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).