* [PATCH v4 00/10] Support GEM object mappings from I/O memory
From: Thomas Zimmermann @ 2020-10-15 12:37 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann
DRM's fbdev console uses regular load and store operations to update
framebuffer memory. The bochs driver on sparc64 requires the use of
I/O-specific load and store operations. We have a workaround, but need
a long-term solution to the problem.
This patchset changes GEM's vmap/vunmap interfaces to forward pointers
of type struct dma_buf_map and updates the generic fbdev emulation to
use them correctly. This enables I/O-memory operations on all framebuffers
that require and support them.
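The core idea is that struct dma_buf_map carries both the pointer and a flag saying which address space it belongs to. A simplified userspace model of the type (the kernel version lives in include/linux/dma-buf-map.h and tags the I/O member with __iomem) looks roughly like this:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified sketch of struct dma_buf_map: a tagged union of a system-memory
 * pointer and an I/O-memory pointer. In the kernel, vaddr_iomem has the
 * __iomem annotation and may only be accessed with I/O helpers. */
struct dma_buf_map {
	union {
		void *vaddr_iomem;	/* kernel: void __iomem * */
		void *vaddr;		/* system memory */
	};
	bool is_iomem;
};

/* Initializers mirroring the kernel's dma_buf_map_set_vaddr() and its
 * I/O-memory counterpart. */
static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
					       void *vaddr_iomem)
{
	map->vaddr_iomem = vaddr_iomem;
	map->is_iomem = true;
}
```

Because the flag travels with the pointer, callers no longer have to guess whether a vmap'ed buffer came from system RAM or from a VRAM BAR.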
Patches #1 to #4 prepare VRAM helpers and drivers.
Next is the update of the GEM vmap functions. Patch #5 adds vmap and vunmap
helpers that are usable with TTM-based GEM drivers, and patch #6 updates GEM's
vmap/vunmap callbacks to forward instances of type struct dma_buf_map. While
the patch touches many files throughout the DRM modules, the changes are
mostly trivial interface fixes. Several TTM-based GEM drivers now use
the new vmap code. Patch #7 updates GEM's internal vmap/vunmap functions to
forward struct dma_buf_map.
With struct dma_buf_map propagated through the layers, patches #8 to #10
convert DRM clients and the generic fbdev emulation to use it. Updates to the
fbdev framebuffer will then select the correct memory-access functions, for
either system or I/O memory.
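The selection the fbdev emulation makes can be sketched as a branch on the mapping's flag. The following is a hypothetical userspace model (fb_memcpy_to_map is an illustrative name, not the series' actual helper); in the kernel, the I/O branch would use memcpy_toio() on the __iomem pointer instead of plain memcpy():

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for struct dma_buf_map, as above. */
struct dma_buf_map {
	union {
		void *vaddr_iomem;
		void *vaddr;
	};
	bool is_iomem;
};

/* Copy framebuffer data into a mapping, dispatching on the memory type.
 * Kernel code would call memcpy_toio() in the is_iomem branch. */
static void fb_memcpy_to_map(struct dma_buf_map *dst, const void *src,
			     size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr_iomem, src, len);	/* kernel: memcpy_toio() */
	else
		memcpy(dst->vaddr, src, len);
}
```

On architectures such as sparc64, where I/O memory must not be touched with regular loads and stores, taking the wrong branch is what the old workaround papered over; the flag makes the choice explicit.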
v4:
* provide TTM vmap/vunmap plus GEM helpers and convert drivers over (Christian, Daniel)
* remove several empty functions
* more TODOs and documentation (Daniel)
v3:
* recreate the whole patchset on top of struct dma_buf_map
v2:
* RFC patchset
Thomas Zimmermann (10):
drm/vram-helper: Remove invariant parameters from internal kmap
function
drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM
backends
drm/gem: Update internal GEM vmap/vunmap interfaces to use struct
dma_buf_map
drm/gem: Store client buffer mappings as struct dma_buf_map
dma-buf-map: Add memcpy and pointer-increment interfaces
drm/fb_helper: Support framebuffers in I/O memory
Documentation/gpu/todo.rst | 37 ++-
drivers/gpu/drm/Kconfig | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 ---
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
drivers/gpu/drm/ast/ast_cursor.c | 27 ++-
drivers/gpu/drm/ast/ast_drv.h | 7 +-
drivers/gpu/drm/bochs/bochs_kms.c | 1 -
drivers/gpu/drm/drm_client.c | 38 ++--
drivers/gpu/drm/drm_fb_helper.c | 238 ++++++++++++++++++--
drivers/gpu/drm/drm_gem.c | 29 ++-
drivers/gpu/drm/drm_gem_cma_helper.c | 27 +--
drivers/gpu/drm/drm_gem_shmem_helper.c | 48 ++--
drivers/gpu/drm/drm_gem_ttm_helper.c | 38 ++++
drivers/gpu/drm/drm_gem_vram_helper.c | 117 +++++-----
drivers/gpu/drm/drm_internal.h | 5 +-
drivers/gpu/drm/drm_prime.c | 14 +-
drivers/gpu/drm/etnaviv/etnaviv_drv.h | 3 +-
drivers/gpu/drm/etnaviv/etnaviv_gem.c | 1 -
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 12 +-
drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 -
drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 -
drivers/gpu/drm/lima/lima_gem.c | 6 +-
drivers/gpu/drm/lima/lima_sched.c | 11 +-
drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
drivers/gpu/drm/nouveau/Kconfig | 1 +
drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
drivers/gpu/drm/nouveau/nouveau_prime.c | 20 --
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +-
drivers/gpu/drm/qxl/qxl_display.c | 11 +-
drivers/gpu/drm/qxl/qxl_draw.c | 14 +-
drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
drivers/gpu/drm/qxl/qxl_object.c | 31 ++-
drivers/gpu/drm/qxl/qxl_object.h | 2 +-
drivers/gpu/drm/qxl/qxl_prime.c | 12 +-
drivers/gpu/drm/radeon/radeon.h | 1 -
drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
drivers/gpu/drm/radeon/radeon_prime.c | 20 --
drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 +-
drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
drivers/gpu/drm/tiny/cirrus.c | 10 +-
drivers/gpu/drm/tiny/gm12u320.c | 10 +-
drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++
drivers/gpu/drm/udl/udl_modeset.c | 8 +-
drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
drivers/gpu/drm/vc4/vc4_bo.c | 7 +-
drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
drivers/gpu/drm/vgem/vgem_drv.c | 16 +-
drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 +-
drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
include/drm/drm_client.h | 7 +-
include/drm/drm_gem.h | 5 +-
include/drm/drm_gem_cma_helper.h | 3 +-
include/drm/drm_gem_shmem_helper.h | 4 +-
include/drm/drm_gem_ttm_helper.h | 6 +
include/drm/drm_gem_vram_helper.h | 14 +-
include/drm/drm_mode_config.h | 12 -
include/drm/ttm/ttm_bo_api.h | 28 +++
include/linux/dma-buf-map.h | 92 +++++++-
62 files changed, 817 insertions(+), 423 deletions(-)
--
2.28.0
drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 +-
drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
drivers/gpu/drm/tiny/cirrus.c | 10 +-
drivers/gpu/drm/tiny/gm12u320.c | 10 +-
drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++
drivers/gpu/drm/udl/udl_modeset.c | 8 +-
drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
drivers/gpu/drm/vc4/vc4_bo.c | 7 +-
drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
drivers/gpu/drm/vgem/vgem_drv.c | 16 +-
drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 +-
drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
include/drm/drm_client.h | 7 +-
include/drm/drm_gem.h | 5 +-
include/drm/drm_gem_cma_helper.h | 3 +-
include/drm/drm_gem_shmem_helper.h | 4 +-
include/drm/drm_gem_ttm_helper.h | 6 +
include/drm/drm_gem_vram_helper.h | 14 +-
include/drm/drm_mode_config.h | 12 -
include/drm/ttm/ttm_bo_api.h | 28 +++
include/linux/dma-buf-map.h | 92 +++++++-
62 files changed, 817 insertions(+), 423 deletions(-)
--
2.28.0
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
* [PATCH v4 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function
@ 2020-10-15 12:37 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:37 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann,
Daniel Vetter
The parameters map and is_iomem always have the same values. Remove them
to prepare the function for conversion to struct dma_buf_map.
v4:
* don't check for !kmap->virtual; will always be false
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++--------------
1 file changed, 4 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 3213429f8444..2d5ed30518f1 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -382,32 +382,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
}
EXPORT_SYMBOL(drm_gem_vram_unpin);
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
- bool map, bool *is_iomem)
+static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
{
int ret;
struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+ bool is_iomem;
if (gbo->kmap_use_count > 0)
goto out;
- if (kmap->virtual || !map)
- goto out;
-
ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
if (ret)
return ERR_PTR(ret);
out:
- if (!kmap->virtual) {
- if (is_iomem)
- *is_iomem = false;
- return NULL; /* not mapped; don't increment ref */
- }
++gbo->kmap_use_count;
- if (is_iomem)
- return ttm_kmap_obj_virtual(kmap, is_iomem);
- return kmap->virtual;
+ return ttm_kmap_obj_virtual(kmap, &is_iomem);
}
static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
@@ -452,7 +442,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
ret = drm_gem_vram_pin_locked(gbo, 0);
if (ret)
goto err_ttm_bo_unreserve;
- base = drm_gem_vram_kmap_locked(gbo, true, NULL);
+ base = drm_gem_vram_kmap_locked(gbo);
if (IS_ERR(base)) {
ret = PTR_ERR(base);
goto err_drm_gem_vram_unpin_locked;
--
2.28.0
* [PATCH v4 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
@ 2020-10-15 12:37 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:37 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann
The function drm_gem_cma_prime_vunmap() is empty. Remove it before
changing the interface to use struct dma_buf_map.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/drm_gem_cma_helper.c | 17 -----------------
drivers/gpu/drm/vc4/vc4_bo.c | 1 -
include/drm/drm_gem_cma_helper.h | 1 -
3 files changed, 19 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 2165633c9b9e..d527485ea0b7 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
-/**
- * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
- * address space
- * @obj: GEM object
- * @vaddr: kernel virtual address where the CMA GEM object was mapped
- *
- * This function removes a buffer exported via DRM PRIME from the kernel's
- * virtual address space. This is a no-op because CMA buffers cannot be
- * unmapped from kernel space. Drivers using the CMA helpers should set this
- * as their &drm_gem_object_funcs.vunmap callback.
- */
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- /* Nothing to do */
-}
-EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
-
static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
.free = drm_gem_cma_free_object,
.print_info = drm_gem_cma_print_info,
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index f432278173cd..557f0d1e6437 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
.export = vc4_prime_export,
.get_sg_table = drm_gem_cma_prime_get_sg_table,
.vmap = vc4_prime_vmap,
- .vunmap = drm_gem_cma_prime_vunmap,
.vm_ops = &vc4_vm_ops,
};
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 2bfa2502607a..a064b0d1c480 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
struct drm_gem_object *
drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
--
2.28.0
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [PATCH v4 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
@ 2020-10-15 12:37 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:37 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers, linus.walle
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann
The function drm_gem_cma_prime_vunmap() is empty. Remove it before
changing the interface to use struct drm_buf_map.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/drm_gem_cma_helper.c | 17 -----------------
drivers/gpu/drm/vc4/vc4_bo.c | 1 -
include/drm/drm_gem_cma_helper.h | 1 -
3 files changed, 19 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 2165633c9b9e..d527485ea0b7 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
-/**
- * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
- * address space
- * @obj: GEM object
- * @vaddr: kernel virtual address where the CMA GEM object was mapped
- *
- * This function removes a buffer exported via DRM PRIME from the kernel's
- * virtual address space. This is a no-op because CMA buffers cannot be
- * unmapped from kernel space. Drivers using the CMA helpers should set this
- * as their &drm_gem_object_funcs.vunmap callback.
- */
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- /* Nothing to do */
-}
-EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
-
static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
.free = drm_gem_cma_free_object,
.print_info = drm_gem_cma_print_info,
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index f432278173cd..557f0d1e6437 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
.export = vc4_prime_export,
.get_sg_table = drm_gem_cma_prime_get_sg_table,
.vmap = vc4_prime_vmap,
- .vunmap = drm_gem_cma_prime_vunmap,
.vm_ops = &vc4_vm_ops,
};
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 2bfa2502607a..a064b0d1c480 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
struct drm_gem_object *
drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
--
2.28.0
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [PATCH v4 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
@ 2020-10-15 12:37 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:37 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
linux-media
The function drm_gem_cma_prime_vunmap() is empty. Remove it before
changing the interface to use struct drm_buf_map.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/drm_gem_cma_helper.c | 17 -----------------
drivers/gpu/drm/vc4/vc4_bo.c | 1 -
include/drm/drm_gem_cma_helper.h | 1 -
3 files changed, 19 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 2165633c9b9e..d527485ea0b7 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
-/**
- * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
- * address space
- * @obj: GEM object
- * @vaddr: kernel virtual address where the CMA GEM object was mapped
- *
- * This function removes a buffer exported via DRM PRIME from the kernel's
- * virtual address space. This is a no-op because CMA buffers cannot be
- * unmapped from kernel space. Drivers using the CMA helpers should set this
- * as their &drm_gem_object_funcs.vunmap callback.
- */
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- /* Nothing to do */
-}
-EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
-
static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
.free = drm_gem_cma_free_object,
.print_info = drm_gem_cma_print_info,
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index f432278173cd..557f0d1e6437 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
.export = vc4_prime_export,
.get_sg_table = drm_gem_cma_prime_get_sg_table,
.vmap = vc4_prime_vmap,
- .vunmap = drm_gem_cma_prime_vunmap,
.vm_ops = &vc4_vm_ops,
};
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 2bfa2502607a..a064b0d1c480 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
struct drm_gem_object *
drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
--
2.28.0
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [PATCH v4 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
@ 2020-10-15 12:37 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:37 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann
The function etnaviv_gem_prime_vunmap() is empty. Remove it before
changing the interface to use struct dma_buf_map.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/etnaviv/etnaviv_drv.h | 1 -
drivers/gpu/drm/etnaviv/etnaviv_gem.c | 1 -
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 -----
3 files changed, 7 deletions(-)
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 914f0867ff71..9682c26d89bb 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index 67d9a2b9ea6a..bbd235473645 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = {
.unpin = etnaviv_gem_prime_unpin,
.get_sg_table = etnaviv_gem_prime_get_sg_table,
.vmap = etnaviv_gem_prime_vmap,
- .vunmap = etnaviv_gem_prime_vunmap,
.vm_ops = &vm_ops,
};
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 135fbff6fecf..a6d9932a32ae 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
return etnaviv_gem_vmap(obj);
}
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- /* TODO msm_gem_vunmap() */
-}
-
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma)
{
--
2.28.0
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
@ 2020-10-15 12:38 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann
The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
them before changing the interface to use struct dma_buf_map. As a side
effect of removing exynos_drm_gem_prime_vmap(), the error code changes
from ENOMEM to EOPNOTSUPP.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 --
2 files changed, 14 deletions(-)
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index e7a6eb96f692..13a35623ac04 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
.free = exynos_drm_gem_free_object,
.get_sg_table = exynos_drm_gem_prime_get_sg_table,
- .vmap = exynos_drm_gem_prime_vmap,
- .vunmap = exynos_drm_gem_prime_vunmap,
.vm_ops = &exynos_drm_gem_vm_ops,
};
@@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
return &exynos_gem->base;
}
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
-{
- return NULL;
-}
-
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- /* Nothing to do */
-}
-
int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma)
{
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
index 74e926abeff0..a23272fb96fb 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
@@ -107,8 +107,6 @@ struct drm_gem_object *
exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
--
2.28.0
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 12:38 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann
The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
address space. The mapping's address is returned as struct dma_buf_map.
Each function is a simplified version of TTM's existing kmap code. Both
functions respect the memory's location and/or writecombine flags.
On top of TTM's functions, the GEM TTM helpers gain drm_gem_ttm_{vmap,vunmap}(),
two helpers that convert a GEM object into the TTM BO and forward the call
to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
object callbacks.
v4:
* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
Christian)
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
include/drm/drm_gem_ttm_helper.h | 6 +++
include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
include/linux/dma-buf-map.h | 20 ++++++++
5 files changed, 164 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
index 0e4fb9ba43ad..db4c14d78a30 100644
--- a/drivers/gpu/drm/drm_gem_ttm_helper.c
+++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
@@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
}
EXPORT_SYMBOL(drm_gem_ttm_print_info);
+/**
+ * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: [out] returns the dma-buf mapping.
+ *
+ * Maps a GEM object with ttm_bo_vmap(). This function can be used as
+ * &drm_gem_object_funcs.vmap callback.
+ *
+ * Returns:
+ * 0 on success, or a negative errno code otherwise.
+ */
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+ struct dma_buf_map *map)
+{
+ struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+ return ttm_bo_vmap(bo, map);
+
+}
+EXPORT_SYMBOL(drm_gem_ttm_vmap);
+
+/**
+ * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: dma-buf mapping.
+ *
+ * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
+ * &drm_gem_object_funcs.vunmap callback.
+ */
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+ struct dma_buf_map *map)
+{
+ struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+ ttm_bo_vunmap(bo, map);
+}
+EXPORT_SYMBOL(drm_gem_ttm_vunmap);
+
/**
* drm_gem_ttm_mmap() - mmap &ttm_buffer_object
* @gem: GEM object.
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index bdee4df1f3f2..80c42c774c7d 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -32,6 +32,7 @@
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/drm_vma_manager.h>
+#include <linux/dma-buf-map.h>
#include <linux/io.h>
#include <linux/highmem.h>
#include <linux/wait.h>
@@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
}
EXPORT_SYMBOL(ttm_bo_kunmap);
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+ struct ttm_resource *mem = &bo->mem;
+ int ret;
+
+ ret = ttm_mem_io_reserve(bo->bdev, mem);
+ if (ret)
+ return ret;
+
+ if (mem->bus.is_iomem) {
+ void __iomem *vaddr_iomem;
+ unsigned long size = bo->num_pages << PAGE_SHIFT;
+
+ if (mem->bus.addr)
+ vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
+ else if (mem->placement & TTM_PL_FLAG_WC)
+ vaddr_iomem = ioremap_wc(mem->bus.offset, size);
+ else
+ vaddr_iomem = ioremap(mem->bus.offset, size);
+
+ if (!vaddr_iomem)
+ return -ENOMEM;
+
+ dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
+
+ } else {
+ struct ttm_operation_ctx ctx = {
+ .interruptible = false,
+ .no_wait_gpu = false
+ };
+ struct ttm_tt *ttm = bo->ttm;
+ pgprot_t prot;
+ void *vaddr;
+
+ BUG_ON(!ttm);
+
+ ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
+ if (ret)
+ return ret;
+
+ /*
+ * We need to use vmap to get the desired page protection
+ * or to make the buffer object look contiguous.
+ */
+ prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
+ vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
+ if (!vaddr)
+ return -ENOMEM;
+
+ dma_buf_map_set_vaddr(map, vaddr);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(ttm_bo_vmap);
+
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+ if (dma_buf_map_is_null(map))
+ return;
+
+ if (map->is_iomem)
+ iounmap(map->vaddr_iomem);
+ else
+ vunmap(map->vaddr);
+ dma_buf_map_clear(map);
+
+ ttm_mem_io_free(bo->bdev, &bo->mem);
+}
+EXPORT_SYMBOL(ttm_bo_vunmap);
+
static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
bool dst_use_tt)
{
diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
index 118cef76f84f..7c6d874910b8 100644
--- a/include/drm/drm_gem_ttm_helper.h
+++ b/include/drm/drm_gem_ttm_helper.h
@@ -10,11 +10,17 @@
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
+struct dma_buf_map;
+
#define drm_gem_ttm_of_gem(gem_obj) \
container_of(gem_obj, struct ttm_buffer_object, base)
void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
const struct drm_gem_object *gem);
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+ struct dma_buf_map *map);
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+ struct dma_buf_map *map);
int drm_gem_ttm_mmap(struct drm_gem_object *gem,
struct vm_area_struct *vma);
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 37102e45e496..2c59a785374c 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -48,6 +48,8 @@ struct ttm_bo_global;
struct ttm_bo_device;
+struct dma_buf_map;
+
struct drm_mm_node;
struct ttm_placement;
@@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
*/
void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
+/**
+ * ttm_bo_vmap
+ *
+ * @bo: The buffer object.
+ * @map: pointer to a struct dma_buf_map representing the map.
+ *
+ * Sets up a kernel virtual mapping, using ioremap or vmap to the
+ * data in the buffer object. The parameter @map returns the virtual
+ * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
+ *
+ * Returns
+ * -ENOMEM: Out of memory.
+ * -EINVAL: Invalid range.
+ */
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
+/**
+ * ttm_bo_vunmap
+ *
+ * @bo: The buffer object.
+ * @map: Object describing the map to unmap.
+ *
+ * Unmaps a kernel map set up by ttm_bo_vmap().
+ */
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
/**
* ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
*
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index fd1aba545fdf..2e8bbecb5091 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -45,6 +45,12 @@
*
* dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
*
+ * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
+ *
+ * .. code-block:: c
+ *
+ * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
+ *
* Test if a mapping is valid with either dma_buf_map_is_set() or
* dma_buf_map_is_null().
*
@@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
map->is_iomem = false;
}
+/**
+ * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
+ * @map: The dma-buf mapping structure
+ * @vaddr_iomem: An I/O-memory address
+ *
+ * Sets the address and the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
+ void __iomem *vaddr_iomem)
+{
+ map->vaddr_iomem = vaddr_iomem;
+ map->is_iomem = true;
+}
+
/**
* dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
* @lhs: The dma-buf mapping structure
--
2.28.0
^ permalink raw reply related [flat|nested] 195+ messages in thread
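[Editor's sketch] The ttm_bo_vunmap() added above picks iounmap() or vunmap() based on map->is_iomem, and dma_buf_map_is_null()/dma_buf_map_clear() guard against double unmaps. A minimal user-space model of the structure and its helpers (union layout as in the patch, with the __iomem qualifier dropped since it has no user-space equivalent) shows the intended semantics:

```c
#include <stdbool.h>
#include <stddef.h>

/* User-space model of struct dma_buf_map from the patch above; the
 * kernel version marks vaddr_iomem as void __iomem *. */
struct dma_buf_map {
	union {
		void *vaddr_iomem;	/* kernel: void __iomem * */
		void *vaddr;
	};
	bool is_iomem;
};

/* Set an address in system memory and clear the I/O-memory flag. */
static void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* Set an address in I/O memory and raise the I/O-memory flag. */
static void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
					void *vaddr_iomem)
{
	map->vaddr_iomem = vaddr_iomem;
	map->is_iomem = true;
}

/* A cleared mapping has a NULL address, whichever union member is read. */
static bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
	return map->vaddr == NULL;
}

static void dma_buf_map_clear(struct dma_buf_map *map)
{
	map->vaddr = NULL;
	map->is_iomem = false;
}
```

A consumer such as ttm_bo_vunmap() first bails out on dma_buf_map_is_null(), then branches on is_iomem to choose the matching unmap routine, and finally calls dma_buf_map_clear() so a second unmap is a no-op.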
* [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 12:38 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
linux-media
The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
address space. The mapping's address is returned as struct dma_buf_map.
Each function is a simplified version of TTM's existing kmap code. Both
functions respect the memory's location ani/or writecombine flags.
On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
two helpers that convert a GEM object into the TTM BO and forward the call
to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object
callbacks.
v4:
* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
Christian)
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
include/drm/drm_gem_ttm_helper.h | 6 +++
include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
include/linux/dma-buf-map.h | 20 ++++++++
5 files changed, 164 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
index 0e4fb9ba43ad..db4c14d78a30 100644
--- a/drivers/gpu/drm/drm_gem_ttm_helper.c
+++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
@@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
}
EXPORT_SYMBOL(drm_gem_ttm_print_info);
+/**
+ * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: [out] returns the dma-buf mapping.
+ *
+ * Maps a GEM object with ttm_bo_vmap(). This function can be used as
+ * &drm_gem_object_funcs.vmap callback.
+ *
+ * Returns:
+ * 0 on success, or a negative errno code otherwise.
+ */
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+ struct dma_buf_map *map)
+{
+ struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+ return ttm_bo_vmap(bo, map);
+
+}
+EXPORT_SYMBOL(drm_gem_ttm_vmap);
+
+/**
+ * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: dma-buf mapping.
+ *
+ * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
+ * &drm_gem_object_funcs.vmap callback.
+ */
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+ struct dma_buf_map *map)
+{
+ struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+ ttm_bo_vunmap(bo, map);
+}
+EXPORT_SYMBOL(drm_gem_ttm_vunmap);
+
/**
* drm_gem_ttm_mmap() - mmap &ttm_buffer_object
* @gem: GEM object.
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index bdee4df1f3f2..80c42c774c7d 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -32,6 +32,7 @@
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/drm_vma_manager.h>
+#include <linux/dma-buf-map.h>
#include <linux/io.h>
#include <linux/highmem.h>
#include <linux/wait.h>
@@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
}
EXPORT_SYMBOL(ttm_bo_kunmap);
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+ struct ttm_resource *mem = &bo->mem;
+ int ret;
+
+ ret = ttm_mem_io_reserve(bo->bdev, mem);
+ if (ret)
+ return ret;
+
+ if (mem->bus.is_iomem) {
+ void __iomem *vaddr_iomem;
+ unsigned long size = bo->num_pages << PAGE_SHIFT;
+
+ if (mem->bus.addr)
+ vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
+ else if (mem->placement & TTM_PL_FLAG_WC)
+ vaddr_iomem = ioremap_wc(mem->bus.offset, size);
+ else
+ vaddr_iomem = ioremap(mem->bus.offset, size);
+
+ if (!vaddr_iomem)
+ return -ENOMEM;
+
+ dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
+
+ } else {
+ struct ttm_operation_ctx ctx = {
+ .interruptible = false,
+ .no_wait_gpu = false
+ };
+ struct ttm_tt *ttm = bo->ttm;
+ pgprot_t prot;
+ void *vaddr;
+
+ BUG_ON(!ttm);
+
+ ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
+ if (ret)
+ return ret;
+
+ /*
+ * We need to use vmap to get the desired page protection
+ * or to make the buffer object look contiguous.
+ */
+ prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
+ vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
+ if (!vaddr)
+ return -ENOMEM;
+
+ dma_buf_map_set_vaddr(map, vaddr);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(ttm_bo_vmap);
+
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+ if (dma_buf_map_is_null(map))
+ return;
+
+ if (map->is_iomem)
+ iounmap(map->vaddr_iomem);
+ else
+ vunmap(map->vaddr);
+ dma_buf_map_clear(map);
+
+ ttm_mem_io_free(bo->bdev, &bo->mem);
+}
+EXPORT_SYMBOL(ttm_bo_vunmap);
+
static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
bool dst_use_tt)
{
diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
index 118cef76f84f..7c6d874910b8 100644
--- a/include/drm/drm_gem_ttm_helper.h
+++ b/include/drm/drm_gem_ttm_helper.h
@@ -10,11 +10,17 @@
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
+struct dma_buf_map;
+
#define drm_gem_ttm_of_gem(gem_obj) \
container_of(gem_obj, struct ttm_buffer_object, base)
void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
const struct drm_gem_object *gem);
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+ struct dma_buf_map *map);
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+ struct dma_buf_map *map);
int drm_gem_ttm_mmap(struct drm_gem_object *gem,
struct vm_area_struct *vma);
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 37102e45e496..2c59a785374c 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -48,6 +48,8 @@ struct ttm_bo_global;
struct ttm_bo_device;
+struct dma_buf_map;
+
struct drm_mm_node;
struct ttm_placement;
@@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
*/
void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
+/**
+ * ttm_bo_vmap
+ *
+ * @bo: The buffer object.
+ * @map: pointer to a struct dma_buf_map representing the map.
+ *
+ * Sets up a kernel virtual mapping of the data in the buffer object,
+ * using ioremap or vmap as appropriate. The parameter @map returns the
+ * virtual address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
+ *
+ * Returns
+ * -ENOMEM: Out of memory.
+ * -EINVAL: Invalid range.
+ */
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
+/**
+ * ttm_bo_vunmap
+ *
+ * @bo: The buffer object.
+ * @map: Object describing the map to unmap.
+ *
+ * Unmaps a kernel map set up by ttm_bo_vmap().
+ */
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
/**
* ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
*
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index fd1aba545fdf..2e8bbecb5091 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -45,6 +45,12 @@
*
* dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
*
+ * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
+ *
+ * .. code-block:: c
+ *
+ * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
+ *
* Test if a mapping is valid with either dma_buf_map_is_set() or
* dma_buf_map_is_null().
*
@@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
map->is_iomem = false;
}
+/**
+ * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
+ * @map: The dma-buf mapping structure
+ * @vaddr_iomem: An I/O-memory address
+ *
+ * Sets the address and the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
+ void __iomem *vaddr_iomem)
+{
+ map->vaddr_iomem = vaddr_iomem;
+ map->is_iomem = true;
+}
+
/**
* dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
* @lhs: The dma-buf mapping structure
--
2.28.0
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
* [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
2020-10-15 12:37 ` Thomas Zimmermann
` (2 preceding siblings ...)
(?)
@ 2020-10-15 12:38 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann
This patch replaces vmap/vunmap's use of raw pointers in the GEM object
functions with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned type.
TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.
v4:
* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
* fix a trailing { in drm_gem_vmap()
* remove several empty functions instead of converting them (Daniel)
* comment uses of raw pointers with a TODO (Daniel)
* TODO list: convert more helpers to use struct dma_buf_map
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
Documentation/gpu/todo.rst | 18 ++++
drivers/gpu/drm/Kconfig | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
drivers/gpu/drm/ast/ast_drv.h | 7 +-
drivers/gpu/drm/drm_gem.c | 23 +++--
drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
drivers/gpu/drm/lima/lima_gem.c | 6 +-
drivers/gpu/drm/lima/lima_sched.c | 11 +-
drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
drivers/gpu/drm/nouveau/Kconfig | 1 +
drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
drivers/gpu/drm/qxl/qxl_display.c | 11 +-
drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
drivers/gpu/drm/qxl/qxl_object.h | 2 +-
drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
drivers/gpu/drm/radeon/radeon.h | 1 -
drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
drivers/gpu/drm/tiny/cirrus.c | 10 +-
drivers/gpu/drm/tiny/gm12u320.c | 10 +-
drivers/gpu/drm/udl/udl_modeset.c | 8 +-
drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
include/drm/drm_gem.h | 5 +-
include/drm/drm_gem_cma_helper.h | 2 +-
include/drm/drm_gem_shmem_helper.h | 4 +-
include/drm/drm_gem_vram_helper.h | 14 +--
47 files changed, 321 insertions(+), 295 deletions(-)
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 700637e25ecd..7e6fc3c04add 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
Level: Intermediate
+Use struct dma_buf_map throughout codebase
+------------------------------------------
+
+Pointers to shared device memory are stored in struct dma_buf_map. Each
+instance knows whether it refers to system or I/O memory. Most of the DRM-wide
+interfaces have been converted to use struct dma_buf_map, but implementations
+often still use raw pointers.
+
+The task is to use struct dma_buf_map where it makes sense.
+
+* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
+* TTM might benefit from using struct dma_buf_map internally.
+* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
+
+Level: Intermediate
+
Core refactorings
=================
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 147d61b9674e..319839b87d37 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -239,6 +239,7 @@ config DRM_RADEON
select FW_LOADER
select DRM_KMS_HELPER
select DRM_TTM
+ select DRM_TTM_HELPER
select POWER_SUPPLY
select HWMON
select BACKLIGHT_CLASS_DEVICE
@@ -259,6 +260,7 @@ config DRM_AMDGPU
select DRM_KMS_HELPER
select DRM_SCHED
select DRM_TTM
+ select DRM_TTM_HELPER
select POWER_SUPPLY
select HWMON
select BACKLIGHT_CLASS_DEVICE
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 5b465ab774d1..e5919efca870 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -41,42 +41,6 @@
#include <linux/dma-fence-array.h>
#include <linux/pci-p2pdma.h>
-/**
- * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
- * @obj: GEM BO
- *
- * Sets up an in-kernel virtual mapping of the BO's memory.
- *
- * Returns:
- * The virtual address of the mapping or an error pointer.
- */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
- int ret;
-
- ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
- &bo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return bo->dma_buf_vmap.virtual;
-}
-
-/**
- * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
- * @obj: GEM BO
- * @vaddr: Virtual address (unused)
- *
- * Tears down the in-kernel virtual mapping of the BO's memory.
- */
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-
- ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
/**
* amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
* @obj: GEM BO
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 2c5c84a06bb9..39b5b9616fd8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf);
bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
struct amdgpu_bo *bo);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index be08a63ef58c..576659827e74 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -33,6 +33,7 @@
#include <drm/amdgpu_drm.h>
#include <drm/drm_debugfs.h>
+#include <drm/drm_gem_ttm_helper.h>
#include "amdgpu.h"
#include "amdgpu_display.h"
@@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
.open = amdgpu_gem_object_open,
.close = amdgpu_gem_object_close,
.export = amdgpu_gem_prime_export,
- .vmap = amdgpu_gem_prime_vmap,
- .vunmap = amdgpu_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
/*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 132e5f955180..01296ef0d673 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -100,7 +100,6 @@ struct amdgpu_bo {
struct amdgpu_bo *parent;
struct amdgpu_bo *shadow;
- struct ttm_bo_kmap_obj dma_buf_vmap;
struct amdgpu_mn *mn;
diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index e0f4613918ad..742d43a7edf4 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
gbo = ast->cursor.gbo[i];
- drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+ drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
struct drm_device *dev = &ast->base;
size_t size, i;
struct drm_gem_vram_object *gbo;
- void __iomem *vaddr;
+ struct dma_buf_map map;
int ret;
size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
drm_gem_vram_put(gbo);
goto err_drm_gem_vram_put;
}
- vaddr = drm_gem_vram_vmap(gbo);
- if (IS_ERR(vaddr)) {
- ret = PTR_ERR(vaddr);
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret) {
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
goto err_drm_gem_vram_put;
}
ast->cursor.gbo[i] = gbo;
- ast->cursor.vaddr[i] = vaddr;
+ ast->cursor.map[i] = map;
}
return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
while (i) {
--i;
gbo = ast->cursor.gbo[i];
- drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+ drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
}
@@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
{
struct drm_device *dev = &ast->base;
struct drm_gem_vram_object *gbo;
+ struct dma_buf_map map;
int ret;
void *src;
void __iomem *dst;
@@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
ret = drm_gem_vram_pin(gbo, 0);
if (ret)
return ret;
- src = drm_gem_vram_vmap(gbo);
- if (IS_ERR(src)) {
- ret = PTR_ERR(src);
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret)
goto err_drm_gem_vram_unpin;
- }
+ src = map.vaddr; /* TODO: Use mapping abstraction properly */
- dst = ast->cursor.vaddr[ast->cursor.next_index];
+ dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
/* do data transfer to cursor BO */
update_cursor_image(dst, src, fb->width, fb->height);
- drm_gem_vram_vunmap(gbo, src);
+ drm_gem_vram_vunmap(gbo, &map);
drm_gem_vram_unpin(gbo);
return 0;
@@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
u8 __iomem *sig;
u8 jreg;
- dst = ast->cursor.vaddr[ast->cursor.next_index];
+ dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
sig = dst + AST_HWC_SIZE;
writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 467049ca8430..f963141dd851 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,10 +28,11 @@
#ifndef __AST_DRV_H__
#define __AST_DRV_H__
-#include <linux/types.h>
-#include <linux/io.h>
+#include <linux/dma-buf-map.h>
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>
+#include <linux/io.h>
+#include <linux/types.h>
#include <drm/drm_connector.h>
#include <drm/drm_crtc.h>
@@ -131,7 +132,7 @@ struct ast_private {
struct {
struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
- void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+ struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
unsigned int next_index;
} cursor;
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1da67d34e55d..a89ad4570e3c 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -36,6 +36,7 @@
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>
#include <linux/dma-buf.h>
+#include <linux/dma-buf-map.h>
#include <linux/mem_encrypt.h>
#include <linux/pagevec.h>
@@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
void *drm_gem_vmap(struct drm_gem_object *obj)
{
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
- if (obj->funcs->vmap)
- vaddr = obj->funcs->vmap(obj);
- else
- vaddr = ERR_PTR(-EOPNOTSUPP);
+ if (!obj->funcs->vmap)
+ return ERR_PTR(-EOPNOTSUPP);
- if (!vaddr)
- vaddr = ERR_PTR(-ENOMEM);
+ ret = obj->funcs->vmap(obj, &map);
+ if (ret)
+ return ERR_PTR(ret);
+ else if (dma_buf_map_is_null(&map))
+ return ERR_PTR(-ENOMEM);
- return vaddr;
+ return map.vaddr;
}
void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
{
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
+
if (!vaddr)
return;
if (obj->funcs->vunmap)
- obj->funcs->vunmap(obj, vaddr);
+ obj->funcs->vunmap(obj, &map);
}
/**
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index d527485ea0b7..b57e3e9222f0 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
* address space
* @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ * store.
*
* This function maps a buffer exported via DRM PRIME into the kernel's
* virtual address space. Since the CMA buffers are already mapped into the
@@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* driver's &drm_gem_object_funcs.vmap callback.
*
* Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
*/
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
- return cma_obj->vaddr;
+ dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+ return 0;
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index fb11df7aced5..5553f58f68f3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
}
EXPORT_SYMBOL(drm_gem_shmem_unpin);
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
{
struct drm_gem_object *obj = &shmem->base;
- struct dma_buf_map map;
int ret = 0;
- if (shmem->vmap_use_count++ > 0)
- return shmem->vaddr;
+ if (shmem->vmap_use_count++ > 0) {
+ dma_buf_map_set_vaddr(map, shmem->vaddr);
+ return 0;
+ }
if (obj->import_attach) {
- ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
- if (!ret)
- shmem->vaddr = map.vaddr;
+ ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+ if (!ret) {
+ if (WARN_ON(map->is_iomem)) {
+ ret = -EIO;
+ goto err_put_pages;
+ }
+ shmem->vaddr = map->vaddr;
+ }
} else {
pgprot_t prot = PAGE_KERNEL;
@@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
VM_MAP, prot);
if (!shmem->vaddr)
ret = -ENOMEM;
+ else
+ dma_buf_map_set_vaddr(map, shmem->vaddr);
}
if (ret) {
@@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
goto err_put_pages;
}
- return shmem->vaddr;
+ return 0;
err_put_pages:
if (!obj->import_attach)
@@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
err_zero_use:
shmem->vmap_use_count = 0;
- return ERR_PTR(ret);
+ return ret;
}
/*
* drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ * store.
*
* This function makes sure that a contiguous kernel virtual address mapping
* exists for the buffer backing the shmem GEM object.
@@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
* Returns:
* 0 on success or a negative error code on failure.
*/
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- void *vaddr;
int ret;
ret = mutex_lock_interruptible(&shmem->vmap_lock);
if (ret)
- return ERR_PTR(ret);
- vaddr = drm_gem_shmem_vmap_locked(shmem);
+ return ret;
+ ret = drm_gem_shmem_vmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
- return vaddr;
+ return ret;
}
EXPORT_SYMBOL(drm_gem_shmem_vmap);
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+ struct dma_buf_map *map)
{
struct drm_gem_object *obj = &shmem->base;
- struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
if (WARN_ON_ONCE(!shmem->vmap_use_count))
return;
@@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
return;
if (obj->import_attach)
- dma_buf_vunmap(obj->import_attach->dmabuf, &map);
+ dma_buf_vunmap(obj->import_attach->dmabuf, map);
else
vunmap(shmem->vaddr);
@@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
/*
* drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
* @shmem: shmem GEM object
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
*
* This function cleans up a kernel virtual address mapping acquired by
* drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
* also be called by drivers directly, in which case it will hide the
* differences between dma-buf imported and natively allocated objects.
*/
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
mutex_lock(&shmem->vmap_lock);
- drm_gem_shmem_vunmap_locked(shmem);
+ drm_gem_shmem_vunmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
}
EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 2d5ed30518f1..4d8553b28558 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/dma-buf-map.h>
#include <linux/module.h>
#include <drm/drm_debugfs.h>
@@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
* up; only release the GEM object.
*/
- WARN_ON(gbo->kmap_use_count);
- WARN_ON(gbo->kmap.virtual);
+ WARN_ON(gbo->vmap_use_count);
+ WARN_ON(dma_buf_map_is_set(&gbo->map));
drm_gem_object_release(&gbo->bo.base);
}
@@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
}
EXPORT_SYMBOL(drm_gem_vram_unpin);
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
+ struct dma_buf_map *map)
{
int ret;
- struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
- bool is_iomem;
- if (gbo->kmap_use_count > 0)
+ if (gbo->vmap_use_count > 0)
goto out;
- ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
+ ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
if (ret)
- return ERR_PTR(ret);
+ return ret;
out:
- ++gbo->kmap_use_count;
- return ttm_kmap_obj_virtual(kmap, &is_iomem);
+ ++gbo->vmap_use_count;
+ *map = gbo->map;
+
+ return 0;
}
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+ struct dma_buf_map *map)
{
- if (WARN_ON_ONCE(!gbo->kmap_use_count))
+ struct drm_device *dev = gbo->bo.base.dev;
+
+ if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
return;
- if (--gbo->kmap_use_count > 0)
+
+ if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
+ return; /* BUG: map not mapped from this BO */
+
+ if (--gbo->vmap_use_count > 0)
return;
/*
@@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
/**
* drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
* space
- * @gbo: The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ * store.
*
* The vmap function pins a GEM VRAM object to its current location, either
* system or video memory, and maps its buffer into kernel address space.
@@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
* unmap and unpin the GEM VRAM object.
*
* Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
*/
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
{
int ret;
- void *base;
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
if (ret)
- return ERR_PTR(ret);
+ return ret;
ret = drm_gem_vram_pin_locked(gbo, 0);
if (ret)
goto err_ttm_bo_unreserve;
- base = drm_gem_vram_kmap_locked(gbo);
- if (IS_ERR(base)) {
- ret = PTR_ERR(base);
+ ret = drm_gem_vram_kmap_locked(gbo, map);
+ if (ret)
goto err_drm_gem_vram_unpin_locked;
- }
ttm_bo_unreserve(&gbo->bo);
- return base;
+ return 0;
err_drm_gem_vram_unpin_locked:
drm_gem_vram_unpin_locked(gbo);
err_ttm_bo_unreserve:
ttm_bo_unreserve(&gbo->bo);
- return ERR_PTR(ret);
+ return ret;
}
EXPORT_SYMBOL(drm_gem_vram_vmap);
/**
* drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo: The GEM VRAM object to unmap
- * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
*
* A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
* the documentation for drm_gem_vram_vmap() for more information.
*/
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
{
int ret;
@@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
return;
- drm_gem_vram_kunmap_locked(gbo);
+ drm_gem_vram_kunmap_locked(gbo, map);
drm_gem_vram_unpin_locked(gbo);
ttm_bo_unreserve(&gbo->bo);
@@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
bool evict,
struct ttm_resource *new_mem)
{
- struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+ struct ttm_buffer_object *bo = &gbo->bo;
+ struct drm_device *dev = bo->base.dev;
- if (WARN_ON_ONCE(gbo->kmap_use_count))
+ if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
return;
- if (!kmap->virtual)
- return;
- ttm_bo_kunmap(kmap);
- kmap->virtual = NULL;
+ ttm_bo_vunmap(bo, &gbo->map);
}
static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
@@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
}
/**
- * drm_gem_vram_object_vmap() - \
- Implements &struct drm_gem_object_funcs.vmap
- * @gem: The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ * Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ * store.
*
* Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
*/
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
- void *base;
- base = drm_gem_vram_vmap(gbo);
- if (IS_ERR(base))
- return NULL;
- return base;
+ return drm_gem_vram_vmap(gbo, map);
}
/**
- * drm_gem_vram_object_vunmap() - \
- Implements &struct drm_gem_object_funcs.vunmap
- * @gem: The GEM object to unmap
- * @vaddr: The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ * Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
*/
-static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
- void *vaddr)
+static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
- drm_gem_vram_vunmap(gbo, vaddr);
+ drm_gem_vram_vunmap(gbo, map);
}
/*
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 9682c26d89bb..f5be627e1de0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a6d9932a32ae..bc2543dd987d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
}
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
- return etnaviv_gem_vmap(obj);
+ void *vaddr = etnaviv_gem_vmap(obj);
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+
+ return 0;
}
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..832e5280a6ed 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
return drm_gem_shmem_pin(obj);
}
-static void *lima_gem_vmap(struct drm_gem_object *obj)
+static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct lima_bo *bo = to_lima_bo(obj);
if (bo->heap_size)
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
- return drm_gem_shmem_vmap(obj);
+ return drm_gem_shmem_vmap(obj, map);
}
static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e9a40d..a070a85f8f36 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0 OR MIT
/* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
+#include <linux/dma-buf-map.h>
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
@@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
struct lima_dump_chunk_buffer *buffer_chunk;
u32 size, task_size, mem_size;
int i;
+ struct dma_buf_map map;
+ int ret;
mutex_lock(&dev->error_task_list_lock);
@@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
} else {
buffer_chunk->size = lima_bo_size(bo);
- data = drm_gem_shmem_vmap(&bo->base.base);
- if (IS_ERR_OR_NULL(data)) {
+ ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+ if (ret) {
kvfree(et);
goto out;
}
- memcpy(buffer_chunk + 1, data, buffer_chunk->size);
+ memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
- drm_gem_shmem_vunmap(&bo->base.base, data);
+ drm_gem_shmem_vunmap(&bo->base.base, &map);
}
buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..8ef76769b97f 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -9,6 +9,7 @@
*/
#include <linux/delay.h>
+#include <linux/dma-buf-map.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_atomic_state_helper.h>
@@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
struct drm_rect *clip)
{
struct drm_device *dev = &mdev->base;
+ struct dma_buf_map map;
void *vmap;
+ int ret;
- vmap = drm_gem_shmem_vmap(fb->obj[0]);
- if (drm_WARN_ON(dev, !vmap))
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (drm_WARN_ON(dev, ret))
return; /* BUG: SHMEM BO should always be vmapped */
+ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
- drm_gem_shmem_vunmap(fb->obj[0], vmap);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
/* Always scanout image at VRAM offset 0 */
mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 5dec1e5694b7..9436310d0854 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -6,6 +6,7 @@ config DRM_NOUVEAU
select FW_LOADER
select DRM_KMS_HELPER
select DRM_TTM
+ select DRM_TTM_HELPER
select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
select X86_PLATFORM_DEVICES if ACPI && X86
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 641ef6298a0e..6045b85a762a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -39,8 +39,6 @@ struct nouveau_bo {
unsigned mode;
struct nouveau_drm_tile *tile;
-
- struct ttm_bo_kmap_obj dma_buf_vmap;
};
static inline struct nouveau_bo *
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 9a421c3949de..f942b526b0a5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -24,6 +24,8 @@
*
*/
+#include <drm/drm_gem_ttm_helper.h>
+
#include "nouveau_drv.h"
#include "nouveau_dma.h"
#include "nouveau_fence.h"
@@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
.pin = nouveau_gem_prime_pin,
.unpin = nouveau_gem_prime_unpin,
.get_sg_table = nouveau_gem_prime_get_sg_table,
- .vmap = nouveau_gem_prime_vmap,
- .vunmap = nouveau_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
int
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index b35c180322e2..3b919c7c931c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
-extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
#endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index a8264aebf3d4..2f16b5249283 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
}
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
- int ret;
-
- ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
- &nvbo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return nvbo->dma_buf_vmap.virtual;
-}
-
-void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-
- ttm_bo_kunmap(&nvbo->dma_buf_vmap);
-}
-
struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg)
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index fdbc8d949135..5ab03d605f57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -5,6 +5,7 @@
#include <drm/drm_gem_shmem_helper.h>
#include <drm/panfrost_drm.h>
#include <linux/completion.h>
+#include <linux/dma-buf-map.h>
#include <linux/iopoll.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
@@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
{
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+ struct dma_buf_map map;
struct drm_gem_shmem_object *bo;
u32 cfg, as;
int ret;
@@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
goto err_close_bo;
}
- perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
- if (IS_ERR(perfcnt->buf)) {
- ret = PTR_ERR(perfcnt->buf);
+ ret = drm_gem_shmem_vmap(&bo->base, &map);
+ if (ret)
goto err_put_mapping;
- }
+ perfcnt->buf = map.vaddr;
/*
* Invalidate the cache and clear the counters to start from a fresh
@@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
return 0;
err_vunmap:
- drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
+ drm_gem_shmem_vunmap(&bo->base, &map);
err_put_mapping:
panfrost_gem_mapping_put(perfcnt->mapping);
err_close_bo:
@@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
{
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
if (user != perfcnt->user)
return -EINVAL;
@@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
perfcnt->user = NULL;
- drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
+ drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
perfcnt->buf = NULL;
panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 45fd76e04bdc..e165fa9b2089 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -25,6 +25,7 @@
#include <linux/crc32.h>
#include <linux/delay.h>
+#include <linux/dma-buf-map.h>
#include <drm/drm_drv.h>
#include <drm/drm_atomic.h>
@@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
struct drm_gem_object *obj;
struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
int ret;
+ struct dma_buf_map user_map;
+ struct dma_buf_map cursor_map;
void *user_ptr;
int size = 64*64*4;
@@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
user_bo = gem_to_qxl_bo(obj);
/* pinning is done in the prepare/cleanup framevbuffer */
- ret = qxl_bo_kmap(user_bo, &user_ptr);
+ ret = qxl_bo_kmap(user_bo, &user_map);
if (ret)
goto out_free_release;
+ user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
ret = qxl_alloc_bo_reserved(qdev, release,
sizeof(struct qxl_cursor) + size,
@@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
if (ret)
goto out_unpin;
- ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+ ret = qxl_bo_kmap(cursor_bo, &cursor_map);
if (ret)
goto out_backoff;
@@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
{
int ret;
struct drm_gem_object *gobj;
+ struct dma_buf_map map;
int monitors_config_size = sizeof(struct qxl_monitors_config) +
qxl_num_crtc * sizeof(struct qxl_head);
@@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
if (ret)
return ret;
- qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+ qxl_bo_kmap(qdev->monitors_config_bo, &map);
qdev->monitors_config = qdev->monitors_config_bo->kptr;
qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..7b7acb910780 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -20,6 +20,8 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <linux/dma-buf-map.h>
+
#include <drm/drm_fourcc.h>
#include "qxl_drv.h"
@@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
unsigned int num_clips,
struct qxl_bo *clips_bo)
{
+ struct dma_buf_map map;
struct qxl_clip_rects *dev_clips;
int ret;
- ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
- if (ret) {
+ ret = qxl_bo_kmap(clips_bo, &map);
+ if (ret)
return NULL;
- }
+ dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
+
dev_clips->num_rects = num_clips;
dev_clips->chunk.next_chunk = 0;
dev_clips->chunk.prev_chunk = 0;
@@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
int stride = fb->pitches[0];
/* depth is not actually interesting, we don't mask with it */
int depth = fb->format->cpp[0] * 8;
+ struct dma_buf_map surface_map;
uint8_t *surface_base;
struct qxl_release *release;
struct qxl_bo *clips_bo;
@@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
if (ret)
goto out_release_backoff;
- ret = qxl_bo_kmap(bo, (void **)&surface_base);
+ ret = qxl_bo_kmap(bo, &surface_map);
if (ret)
goto out_release_backoff;
+ surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
ret = qxl_image_init(qdev, release, dimage, surface_base,
left - dumb_shadow_offset,
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..eb437fea5d9e 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -30,6 +30,7 @@
* Definitions taken from spice-protocol, plus kernel driver specific bits.
*/
+#include <linux/dma-buf-map.h>
#include <linux/dma-fence.h>
#include <linux/firmware.h>
#include <linux/platform_device.h>
@@ -50,6 +51,8 @@
#include "qxl_dev.h"
+struct dma_buf_map;
+
#define DRIVER_AUTHOR "Dave Airlie"
#define DRIVER_NAME "qxl"
@@ -79,7 +82,7 @@ struct qxl_bo {
/* Protected by tbo.reserved */
struct ttm_place placements[3];
struct ttm_placement placement;
- struct ttm_bo_kmap_obj kmap;
+ struct dma_buf_map map;
void *kptr;
unsigned int map_count;
int type;
@@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
void qxl_gem_object_close(struct drm_gem_object *obj,
struct drm_file *file_priv);
void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
/* qxl_dumb.c */
int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
struct drm_gem_object *qxl_gem_prime_import_sg_table(
struct drm_device *dev, struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+ struct dma_buf_map *map);
int qxl_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 940e99354f49..755df4d8f95f 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -23,10 +23,12 @@
* Alon Levy
*/
+#include <linux/dma-buf-map.h>
+#include <linux/io-mapping.h>
+
#include "qxl_drv.h"
#include "qxl_object.h"
-#include <linux/io-mapping.h>
static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
{
struct qxl_bo *bo;
@@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
return 0;
}
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
{
- bool is_iomem;
int r;
if (bo->kptr) {
- if (ptr)
- *ptr = bo->kptr;
bo->map_count++;
- return 0;
+ goto out;
}
- r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
+ r = ttm_bo_vmap(&bo->tbo, &bo->map);
if (r)
return r;
- bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
- if (ptr)
- *ptr = bo->kptr;
bo->map_count = 1;
+
+ /* TODO: Remove kptr in favor of map everywhere. */
+ if (bo->map.is_iomem)
+ bo->kptr = (void *)bo->map.vaddr_iomem;
+ else
+ bo->kptr = bo->map.vaddr;
+
+out:
+ *map = bo->map;
return 0;
}
@@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
void *rptr;
int ret;
struct io_mapping *map;
+ struct dma_buf_map bo_map;
if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
map = qdev->vram_mapping;
@@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
return rptr;
}
- ret = qxl_bo_kmap(bo, &rptr);
+ ret = qxl_bo_kmap(bo, &bo_map);
if (ret)
return NULL;
+ rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
rptr += page_offset * PAGE_SIZE;
return rptr;
@@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
if (bo->map_count > 0)
return;
bo->kptr = NULL;
- ttm_bo_kunmap(&bo->kmap);
+ ttm_bo_vunmap(&bo->tbo, &bo->map);
}
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 09a5c818324d..ebf24c9d2bf2 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
bool kernel, bool pinned, u32 domain,
struct qxl_surface *surf,
struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
extern void qxl_bo_kunmap(struct qxl_bo *bo);
void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
return ERR_PTR(-ENOSYS);
}
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
- void *ptr;
int ret;
- ret = qxl_bo_kmap(bo, &ptr);
+ ret = qxl_bo_kmap(bo, map);
if (ret < 0)
- return ERR_PTR(ret);
+ return ret;
- return ptr;
+ return 0;
}
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+ struct dma_buf_map *map)
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5d54bccebd4d..44cb5ee6fc20 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -509,7 +509,6 @@ struct radeon_bo {
/* Constant after initialization */
struct radeon_device *rdev;
- struct ttm_bo_kmap_obj dma_buf_vmap;
pid_t pid;
#ifdef CONFIG_MMU_NOTIFIER
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 0ccd7213e41f..d2876ce3bc9e 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -31,6 +31,7 @@
#include <drm/drm_debugfs.h>
#include <drm/drm_device.h>
#include <drm/drm_file.h>
+#include <drm/drm_gem_ttm_helper.h>
#include <drm/radeon_drm.h>
#include "radeon.h"
@@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
int radeon_gem_prime_pin(struct drm_gem_object *obj);
void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
static const struct drm_gem_object_funcs radeon_gem_object_funcs;
@@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
.pin = radeon_gem_prime_pin,
.unpin = radeon_gem_prime_unpin,
.get_sg_table = radeon_gem_prime_get_sg_table,
- .vmap = radeon_gem_prime_vmap,
- .vunmap = radeon_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
/*
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b9de0e51c0be..088d39a51c0d 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
}
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct radeon_bo *bo = gem_to_radeon_bo(obj);
- int ret;
-
- ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
- &bo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return bo->dma_buf_vmap.virtual;
-}
-
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct radeon_bo *bo = gem_to_radeon_bo(obj);
-
- ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7d5ebb10323b..7971f57436dd 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
return ERR_PTR(ret);
}
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
- if (rk_obj->pages)
- return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
- pgprot_writecombine(PAGE_KERNEL));
+ if (rk_obj->pages) {
+ void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+ pgprot_writecombine(PAGE_KERNEL));
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+ return 0;
+ }
if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
- return NULL;
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
- return rk_obj->kvaddr;
+ return 0;
}
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
if (rk_obj->pages) {
- vunmap(vaddr);
+ vunmap(map->vaddr);
return;
}
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 7ffc541bea07..5a70a56cd406 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -31,8 +31,8 @@ struct drm_gem_object *
rockchip_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg);
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
/* drm driver mmap file operations */
int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..c02e35ed6e76 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -17,6 +17,7 @@
*/
#include <linux/console.h>
+#include <linux/dma-buf-map.h>
#include <linux/module.h>
#include <linux/pci.h>
@@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
struct drm_rect *rect)
{
struct cirrus_device *cirrus = to_cirrus(fb->dev);
+ struct dma_buf_map map;
void *vmap;
int idx, ret;
@@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
if (!drm_dev_enter(&cirrus->dev, &idx))
goto out;
- ret = -ENOMEM;
- vmap = drm_gem_shmem_vmap(fb->obj[0]);
- if (!vmap)
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret)
goto out_dev_exit;
+ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
if (cirrus->cpp == fb->format->cpp[0])
drm_fb_memcpy_dstclip(cirrus->vram,
@@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
else
WARN_ON_ONCE("cpp mismatch");
- drm_gem_shmem_vunmap(fb->obj[0], vmap);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
ret = 0;
out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..12a890cea6e9 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
{
int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
struct drm_framebuffer *fb;
+ struct dma_buf_map map;
void *vaddr;
u8 *src;
@@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
y1 = gm12u320->fb_update.rect.y1;
y2 = gm12u320->fb_update.rect.y2;
- vaddr = drm_gem_shmem_vmap(fb->obj[0]);
- if (IS_ERR(vaddr)) {
- GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret) {
+ GM12U320_ERR("failed to vmap fb: %d\n", ret);
goto put_fb;
}
+ vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
if (fb->obj[0]->import_attach) {
ret = dma_buf_begin_cpu_access(
@@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
}
vunmap:
- drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
put_fb:
drm_framebuffer_put(fb);
gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..42eeba1dfdbf 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
struct urb *urb;
struct drm_rect clip;
int log_bpp;
+ struct dma_buf_map map;
void *vaddr;
ret = udl_log_cpp(fb->format->cpp[0]);
@@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
return ret;
}
- vaddr = drm_gem_shmem_vmap(fb->obj[0]);
- if (IS_ERR(vaddr)) {
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret) {
DRM_ERROR("failed to vmap fb\n");
goto out_dma_buf_end_cpu_access;
}
+ vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
urb = udl_get_urb(dev);
if (!urb)
@@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
ret = 0;
out_drm_gem_shmem_vunmap:
- drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
out_dma_buf_end_cpu_access:
if (import_attach) {
tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 931c55126148..f268fb258c83 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -9,6 +9,8 @@
* Michael Thayer <michael.thayer@oracle.com,
* Hans de Goede <hdegoede@redhat.com>
*/
+
+#include <linux/dma-buf-map.h>
#include <linux/export.h>
#include <drm/drm_atomic.h>
@@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
u32 height = plane->state->crtc_h;
size_t data_size, mask_size;
u32 flags;
+ struct dma_buf_map map;
+ int ret;
u8 *src;
/*
@@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
vbox_crtc->cursor_enabled = true;
- src = drm_gem_vram_vmap(gbo);
- if (IS_ERR(src)) {
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret) {
/*
* BUG: we should have pinned the BO in prepare_fb().
*/
@@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
DRM_WARN("Could not map cursor bo, skipping update\n");
return;
}
+ src = map.vaddr; /* TODO: Use mapping abstraction properly */
/*
* The mask must be calculated based on the alpha
@@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
data_size = width * height * 4 + mask_size;
copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
- drm_gem_vram_vunmap(gbo, src);
+ drm_gem_vram_vunmap(gbo, &map);
flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
VBOX_MOUSE_POINTER_ALPHA;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 557f0d1e6437..f290a9a942dc 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
return drm_gem_cma_prime_mmap(obj, vma);
}
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct vc4_bo *bo = to_vc4_bo(obj);
if (bo->validated_shader) {
DRM_DEBUG("mmaping of shader BOs not allowed.\n");
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
}
- return drm_gem_cma_prime_vmap(obj);
+ return drm_gem_cma_prime_vmap(obj, map);
}
struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index cc79b1aaa878..904f2c36c963 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int vc4_bo_cache_init(struct drm_device *dev);
void vc4_bo_cache_destroy(struct drm_device *dev);
int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..b2aa26e1e4a2 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
return &obj->base;
}
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
long n_pages = obj->size >> PAGE_SHIFT;
struct page **pages;
+ void *vaddr;
pages = vgem_pin_pages(bo);
if (IS_ERR(pages))
- return NULL;
+ return PTR_ERR(pages);
+
+ vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
- return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ return 0;
}
-static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
- vunmap(vaddr);
+ vunmap(map->vaddr);
vgem_unpin_pages(bo);
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..74db5a840bed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return gem_mmap_obj(xen_obj, vma);
}
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
{
struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+ void *vaddr;
if (!xen_obj->pages)
- return NULL;
+ return -ENOMEM;
/* Please see comment in gem_mmap_obj on mapping and attributes. */
- return vmap(xen_obj->pages, xen_obj->num_pages,
- VM_MAP, PAGE_KERNEL);
+ vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
+ VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+
+ return 0;
}
void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
- void *vaddr)
+ struct dma_buf_map *map)
{
- vunmap(vaddr);
+ vunmap(map->vaddr);
}
int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..a4e67d0a149c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -12,6 +12,7 @@
#define __XEN_DRM_FRONT_GEM_H
struct dma_buf_attachment;
+struct dma_buf_map;
struct drm_device;
struct drm_gem_object;
struct file;
@@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+ struct dma_buf_map *map);
void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
- void *vaddr);
+ struct dma_buf_map *map);
int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
struct vm_area_struct *vma);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c38dd35da00b..5e6daa1c982f 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
#include <drm/drm_vma_manager.h>
+struct dma_buf_map;
struct drm_gem_object;
/**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
*
* This callback is optional.
*/
- void *(*vmap)(struct drm_gem_object *obj);
+ int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
/**
* @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
*
* This callback is optional.
*/
- void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+ void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
/**
* @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index a064b0d1c480..caf98b9cf4b4 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
struct sg_table *sgt);
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
struct drm_gem_object *
drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
int drm_gem_shmem_pin(struct drm_gem_object *obj);
void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..c0d28ba0f5c9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -10,6 +10,7 @@
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
+#include <linux/dma-buf-map.h>
#include <linux/kernel.h> /* for container_of() */
struct drm_mode_create_dumb;
@@ -29,9 +30,8 @@ struct vm_area_struct;
/**
* struct drm_gem_vram_object - GEM object backed by VRAM
- * @gem: GEM object
* @bo: TTM buffer object
- * @kmap: Mapping information for @bo
+ * @map: Mapping information for @bo
* @placement: TTM placement information. Supported placements are \
%TTM_PL_VRAM and %TTM_PL_SYSTEM
* @placements: TTM placement information.
@@ -50,15 +50,15 @@ struct vm_area_struct;
*/
struct drm_gem_vram_object {
struct ttm_buffer_object bo;
- struct ttm_bo_kmap_obj kmap;
+ struct dma_buf_map map;
/**
- * @kmap_use_count:
+ * @vmap_use_count:
*
* Reference count on the virtual address.
 * The address is unmapped when the count reaches zero.
*/
- unsigned int kmap_use_count;
+ unsigned int vmap_use_count;
/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
struct ttm_placement placement;
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
int drm_gem_vram_fill_create_dumb(struct drm_file *file,
struct drm_device *dev,
--
2.28.0
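The prototype changes in these headers all follow one pattern: a vmap call that used to return a raw pointer (or an ERR_PTR-encoded error) now returns an errno code and fills in a caller-provided struct dma_buf_map. A minimal userspace sketch of the new calling convention follows; the struct layout mirrors the kernel's, but everything here is a standalone mock, not kernel code:

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace mock of the kernel's struct dma_buf_map. */
struct dma_buf_map {
	union {
		void *vaddr_iomem;	/* kernel type: void __iomem * */
		void *vaddr;		/* system memory */
	};
	bool is_iomem;
};

static void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* New convention: errno return, mapping reported through *map. */
static int mock_vmap(void *backing, struct dma_buf_map *map)
{
	if (!backing)
		return -ENOMEM;	/* old code returned ERR_PTR(-ENOMEM) */
	dma_buf_map_set_vaddr(map, backing);
	return 0;
}
```

Callers test the integer return instead of calling IS_ERR() on the returned pointer, which is exactly the mechanical change visible throughout the driver hunks.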
* [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
@ 2020-10-15 12:38 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
linux-media
This patch replaces the use of raw pointers in the GEM object functions'
vmap/vunmap interfaces with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned type.
TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.
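Because each struct dma_buf_map knows whether it refers to system or I/O memory, callers can dispatch to the matching accessors. A hedged userspace sketch of that dispatch: in the kernel the I/O branch would use memcpy_toio(), but here both branches are plain memcpy(), and all names except the struct itself are invented for illustration:

```c
#include <stdbool.h>
#include <string.h>

/* Userspace mock of the kernel's struct dma_buf_map. */
struct dma_buf_map {
	union {
		void *vaddr_iomem;	/* kernel type: void __iomem * */
		void *vaddr;		/* system memory */
	};
	bool is_iomem;
};

/* Copy into a mapping, choosing the accessor by memory type, the way a
 * framebuffer-update helper would. */
static void map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr_iomem, src, len);	/* kernel: memcpy_toio() */
	else
		memcpy(dst->vaddr, src, len);
}
```

This is the pattern that lets generic fbdev emulation work on bochs/sparc64-style framebuffers that require I/O-specific stores.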
v4:
* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
* fix a trailing { in drm_gem_vmap()
* remove several empty functions instead of converting them (Daniel)
* comment uses of raw pointers with a TODO (Daniel)
* TODO list: convert more helpers to use struct dma_buf_map
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
Documentation/gpu/todo.rst | 18 ++++
drivers/gpu/drm/Kconfig | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
drivers/gpu/drm/ast/ast_drv.h | 7 +-
drivers/gpu/drm/drm_gem.c | 23 +++--
drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
drivers/gpu/drm/lima/lima_gem.c | 6 +-
drivers/gpu/drm/lima/lima_sched.c | 11 +-
drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
drivers/gpu/drm/nouveau/Kconfig | 1 +
drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
drivers/gpu/drm/qxl/qxl_display.c | 11 +-
drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
drivers/gpu/drm/qxl/qxl_object.h | 2 +-
drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
drivers/gpu/drm/radeon/radeon.h | 1 -
drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
drivers/gpu/drm/tiny/cirrus.c | 10 +-
drivers/gpu/drm/tiny/gm12u320.c | 10 +-
drivers/gpu/drm/udl/udl_modeset.c | 8 +-
drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
include/drm/drm_gem.h | 5 +-
include/drm/drm_gem_cma_helper.h | 2 +-
include/drm/drm_gem_shmem_helper.h | 4 +-
include/drm/drm_gem_vram_helper.h | 14 +--
47 files changed, 321 insertions(+), 295 deletions(-)
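Several of the converted helpers (GEM VRAM, shmem) keep a vmap_use_count so that repeated vmap calls share one cached mapping and only the last vunmap tears it down. A standalone sketch of that reference-counting pattern, with mock names standing in for ttm_bo_vmap()/ttm_bo_vunmap():

```c
#include <stdbool.h>
#include <stddef.h>

/* Userspace mock of the kernel's struct dma_buf_map. */
struct dma_buf_map {
	union {
		void *vaddr_iomem;
		void *vaddr;
	};
	bool is_iomem;
};

struct mock_gem_vram_object {
	struct dma_buf_map map;		/* cached mapping of the BO */
	unsigned int vmap_use_count;	/* users of the cached mapping */
	char backing[64];		/* stand-in for the BO's memory */
};

static int mock_vram_vmap(struct mock_gem_vram_object *gbo,
			  struct dma_buf_map *map)
{
	if (gbo->vmap_use_count == 0) {
		/* First user establishes the mapping (kernel: ttm_bo_vmap()). */
		gbo->map.vaddr = gbo->backing;
		gbo->map.is_iomem = false;
	}
	++gbo->vmap_use_count;
	*map = gbo->map;	/* every caller shares the same mapping */
	return 0;
}

static void mock_vram_vunmap(struct mock_gem_vram_object *gbo)
{
	if (gbo->vmap_use_count == 0)
		return;		/* the kernel code WARNs here */
	if (--gbo->vmap_use_count == 0)
		gbo->map.vaddr = NULL;	/* kernel: ttm_bo_vunmap() */
}
```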
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 700637e25ecd..7e6fc3c04add 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
Level: Intermediate
+Use struct dma_buf_map throughout codebase
+------------------------------------------
+
+Pointers to shared device memory are stored in struct dma_buf_map. Each
+instance knows whether it refers to system or I/O memory. Most of the DRM-wide
+interfaces have been converted to use struct dma_buf_map, but implementations
+often still use raw pointers.
+
+The task is to use struct dma_buf_map where it makes sense.
+
+* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
+* TTM might benefit from using struct dma_buf_map internally.
+* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
+
+Level: Intermediate
+
Core refactorings
=================
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 147d61b9674e..319839b87d37 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -239,6 +239,7 @@ config DRM_RADEON
select FW_LOADER
select DRM_KMS_HELPER
select DRM_TTM
+ select DRM_TTM_HELPER
select POWER_SUPPLY
select HWMON
select BACKLIGHT_CLASS_DEVICE
@@ -259,6 +260,7 @@ config DRM_AMDGPU
select DRM_KMS_HELPER
select DRM_SCHED
select DRM_TTM
+ select DRM_TTM_HELPER
select POWER_SUPPLY
select HWMON
select BACKLIGHT_CLASS_DEVICE
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 5b465ab774d1..e5919efca870 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -41,42 +41,6 @@
#include <linux/dma-fence-array.h>
#include <linux/pci-p2pdma.h>
-/**
- * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
- * @obj: GEM BO
- *
- * Sets up an in-kernel virtual mapping of the BO's memory.
- *
- * Returns:
- * The virtual address of the mapping or an error pointer.
- */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
- int ret;
-
- ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
- &bo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return bo->dma_buf_vmap.virtual;
-}
-
-/**
- * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
- * @obj: GEM BO
- * @vaddr: Virtual address (unused)
- *
- * Tears down the in-kernel virtual mapping of the BO's memory.
- */
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-
- ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
/**
* amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
* @obj: GEM BO
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 2c5c84a06bb9..39b5b9616fd8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf);
bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
struct amdgpu_bo *bo);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index be08a63ef58c..576659827e74 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -33,6 +33,7 @@
#include <drm/amdgpu_drm.h>
#include <drm/drm_debugfs.h>
+#include <drm/drm_gem_ttm_helper.h>
#include "amdgpu.h"
#include "amdgpu_display.h"
@@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
.open = amdgpu_gem_object_open,
.close = amdgpu_gem_object_close,
.export = amdgpu_gem_prime_export,
- .vmap = amdgpu_gem_prime_vmap,
- .vunmap = amdgpu_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
/*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 132e5f955180..01296ef0d673 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -100,7 +100,6 @@ struct amdgpu_bo {
struct amdgpu_bo *parent;
struct amdgpu_bo *shadow;
- struct ttm_bo_kmap_obj dma_buf_vmap;
struct amdgpu_mn *mn;
diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index e0f4613918ad..742d43a7edf4 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
gbo = ast->cursor.gbo[i];
- drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+ drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
struct drm_device *dev = &ast->base;
size_t size, i;
struct drm_gem_vram_object *gbo;
- void __iomem *vaddr;
+ struct dma_buf_map map;
int ret;
size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
drm_gem_vram_put(gbo);
goto err_drm_gem_vram_put;
}
- vaddr = drm_gem_vram_vmap(gbo);
- if (IS_ERR(vaddr)) {
- ret = PTR_ERR(vaddr);
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret) {
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
goto err_drm_gem_vram_put;
}
ast->cursor.gbo[i] = gbo;
- ast->cursor.vaddr[i] = vaddr;
+ ast->cursor.map[i] = map;
}
return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
while (i) {
--i;
gbo = ast->cursor.gbo[i];
- drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+ drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
}
@@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
{
struct drm_device *dev = &ast->base;
struct drm_gem_vram_object *gbo;
+ struct dma_buf_map map;
int ret;
void *src;
void __iomem *dst;
@@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
ret = drm_gem_vram_pin(gbo, 0);
if (ret)
return ret;
- src = drm_gem_vram_vmap(gbo);
- if (IS_ERR(src)) {
- ret = PTR_ERR(src);
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret)
goto err_drm_gem_vram_unpin;
- }
+ src = map.vaddr; /* TODO: Use mapping abstraction properly */
- dst = ast->cursor.vaddr[ast->cursor.next_index];
+ dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
/* do data transfer to cursor BO */
update_cursor_image(dst, src, fb->width, fb->height);
- drm_gem_vram_vunmap(gbo, src);
+ drm_gem_vram_vunmap(gbo, &map);
drm_gem_vram_unpin(gbo);
return 0;
@@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
u8 __iomem *sig;
u8 jreg;
- dst = ast->cursor.vaddr[ast->cursor.next_index];
+ dst = ast->cursor.map[ast->cursor.next_index].vaddr;
sig = dst + AST_HWC_SIZE;
writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 467049ca8430..f963141dd851 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,10 +28,11 @@
#ifndef __AST_DRV_H__
#define __AST_DRV_H__
-#include <linux/types.h>
-#include <linux/io.h>
+#include <linux/dma-buf-map.h>
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>
+#include <linux/io.h>
+#include <linux/types.h>
#include <drm/drm_connector.h>
#include <drm/drm_crtc.h>
@@ -131,7 +132,7 @@ struct ast_private {
struct {
struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
- void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+ struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
unsigned int next_index;
} cursor;
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1da67d34e55d..a89ad4570e3c 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -36,6 +36,7 @@
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>
#include <linux/dma-buf.h>
+#include <linux/dma-buf-map.h>
#include <linux/mem_encrypt.h>
#include <linux/pagevec.h>
@@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
void *drm_gem_vmap(struct drm_gem_object *obj)
{
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
- if (obj->funcs->vmap)
- vaddr = obj->funcs->vmap(obj);
- else
- vaddr = ERR_PTR(-EOPNOTSUPP);
+ if (!obj->funcs->vmap)
+ return ERR_PTR(-EOPNOTSUPP);
- if (!vaddr)
- vaddr = ERR_PTR(-ENOMEM);
+ ret = obj->funcs->vmap(obj, &map);
+ if (ret)
+ return ERR_PTR(ret);
+ else if (dma_buf_map_is_null(&map))
+ return ERR_PTR(-ENOMEM);
- return vaddr;
+ return map.vaddr;
}
void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
{
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
+
if (!vaddr)
return;
if (obj->funcs->vunmap)
- obj->funcs->vunmap(obj, vaddr);
+ obj->funcs->vunmap(obj, &map);
}
/**
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index d527485ea0b7..b57e3e9222f0 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
* address space
* @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ * store.
*
* This function maps a buffer exported via DRM PRIME into the kernel's
* virtual address space. Since the CMA buffers are already mapped into the
@@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* driver's &drm_gem_object_funcs.vmap callback.
*
* Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
*/
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
- return cma_obj->vaddr;
+ dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+ return 0;
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index fb11df7aced5..5553f58f68f3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
}
EXPORT_SYMBOL(drm_gem_shmem_unpin);
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
{
struct drm_gem_object *obj = &shmem->base;
- struct dma_buf_map map;
int ret = 0;
- if (shmem->vmap_use_count++ > 0)
- return shmem->vaddr;
+ if (shmem->vmap_use_count++ > 0) {
+ dma_buf_map_set_vaddr(map, shmem->vaddr);
+ return 0;
+ }
if (obj->import_attach) {
- ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
- if (!ret)
- shmem->vaddr = map.vaddr;
+ ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+ if (!ret) {
+ if (WARN_ON(map->is_iomem)) {
+ ret = -EIO;
+ goto err_put_pages;
+ }
+ shmem->vaddr = map->vaddr;
+ }
} else {
pgprot_t prot = PAGE_KERNEL;
@@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
VM_MAP, prot);
if (!shmem->vaddr)
ret = -ENOMEM;
+ else
+ dma_buf_map_set_vaddr(map, shmem->vaddr);
}
if (ret) {
@@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
goto err_put_pages;
}
- return shmem->vaddr;
+ return 0;
err_put_pages:
if (!obj->import_attach)
@@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
err_zero_use:
shmem->vmap_use_count = 0;
- return ERR_PTR(ret);
+ return ret;
}
/*
* drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ * store.
*
* This function makes sure that a contiguous kernel virtual address mapping
* exists for the buffer backing the shmem GEM object.
@@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
* Returns:
* 0 on success or a negative error code on failure.
*/
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- void *vaddr;
int ret;
ret = mutex_lock_interruptible(&shmem->vmap_lock);
if (ret)
- return ERR_PTR(ret);
- vaddr = drm_gem_shmem_vmap_locked(shmem);
+ return ret;
+ ret = drm_gem_shmem_vmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
- return vaddr;
+ return ret;
}
EXPORT_SYMBOL(drm_gem_shmem_vmap);
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+ struct dma_buf_map *map)
{
struct drm_gem_object *obj = &shmem->base;
- struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
if (WARN_ON_ONCE(!shmem->vmap_use_count))
return;
@@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
return;
if (obj->import_attach)
- dma_buf_vunmap(obj->import_attach->dmabuf, &map);
+ dma_buf_vunmap(obj->import_attach->dmabuf, map);
else
vunmap(shmem->vaddr);
@@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
/*
 * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
* @shmem: shmem GEM object
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
*
* This function cleans up a kernel virtual address mapping acquired by
* drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
* also be called by drivers directly, in which case it will hide the
* differences between dma-buf imported and natively allocated objects.
*/
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
mutex_lock(&shmem->vmap_lock);
- drm_gem_shmem_vunmap_locked(shmem);
+ drm_gem_shmem_vunmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
}
EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 2d5ed30518f1..4d8553b28558 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/dma-buf-map.h>
#include <linux/module.h>
#include <drm/drm_debugfs.h>
@@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
* up; only release the GEM object.
*/
- WARN_ON(gbo->kmap_use_count);
- WARN_ON(gbo->kmap.virtual);
+ WARN_ON(gbo->vmap_use_count);
+ WARN_ON(dma_buf_map_is_set(&gbo->map));
drm_gem_object_release(&gbo->bo.base);
}
@@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
}
EXPORT_SYMBOL(drm_gem_vram_unpin);
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
+ struct dma_buf_map *map)
{
int ret;
- struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
- bool is_iomem;
- if (gbo->kmap_use_count > 0)
+ if (gbo->vmap_use_count > 0)
goto out;
- ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
+ ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
if (ret)
- return ERR_PTR(ret);
+ return ret;
out:
- ++gbo->kmap_use_count;
- return ttm_kmap_obj_virtual(kmap, &is_iomem);
+ ++gbo->vmap_use_count;
+ *map = gbo->map;
+
+ return 0;
}
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+ struct dma_buf_map *map)
{
- if (WARN_ON_ONCE(!gbo->kmap_use_count))
+ struct drm_device *dev = gbo->bo.base.dev;
+
+ if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
return;
- if (--gbo->kmap_use_count > 0)
+
+ if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
+ return; /* BUG: map not mapped from this BO */
+
+ if (--gbo->vmap_use_count > 0)
return;
/*
@@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
/**
* drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
* space
- * @gbo: The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ * store.
*
* The vmap function pins a GEM VRAM object to its current location, either
* system or video memory, and maps its buffer into kernel address space.
@@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
* unmap and unpin the GEM VRAM object.
*
* Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
*/
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
{
int ret;
- void *base;
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
if (ret)
- return ERR_PTR(ret);
+ return ret;
ret = drm_gem_vram_pin_locked(gbo, 0);
if (ret)
goto err_ttm_bo_unreserve;
- base = drm_gem_vram_kmap_locked(gbo);
- if (IS_ERR(base)) {
- ret = PTR_ERR(base);
+ ret = drm_gem_vram_kmap_locked(gbo, map);
+ if (ret)
goto err_drm_gem_vram_unpin_locked;
- }
ttm_bo_unreserve(&gbo->bo);
- return base;
+ return 0;
err_drm_gem_vram_unpin_locked:
drm_gem_vram_unpin_locked(gbo);
err_ttm_bo_unreserve:
ttm_bo_unreserve(&gbo->bo);
- return ERR_PTR(ret);
+ return ret;
}
EXPORT_SYMBOL(drm_gem_vram_vmap);
/**
* drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo: The GEM VRAM object to unmap
- * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
*
* A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
* the documentation for drm_gem_vram_vmap() for more information.
*/
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
{
int ret;
@@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
return;
- drm_gem_vram_kunmap_locked(gbo);
+ drm_gem_vram_kunmap_locked(gbo, map);
drm_gem_vram_unpin_locked(gbo);
ttm_bo_unreserve(&gbo->bo);
@@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
bool evict,
struct ttm_resource *new_mem)
{
- struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+ struct ttm_buffer_object *bo = &gbo->bo;
+ struct drm_device *dev = bo->base.dev;
- if (WARN_ON_ONCE(gbo->kmap_use_count))
+ if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
return;
- if (!kmap->virtual)
- return;
- ttm_bo_kunmap(kmap);
- kmap->virtual = NULL;
+ ttm_bo_vunmap(bo, &gbo->map);
}
static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
@@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
}
/**
- * drm_gem_vram_object_vmap() - \
- Implements &struct drm_gem_object_funcs.vmap
- * @gem: The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ * Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ * store.
*
* Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
*/
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
- void *base;
- base = drm_gem_vram_vmap(gbo);
- if (IS_ERR(base))
- return NULL;
- return base;
+ return drm_gem_vram_vmap(gbo, map);
}
/**
- * drm_gem_vram_object_vunmap() - \
- Implements &struct drm_gem_object_funcs.vunmap
- * @gem: The GEM object to unmap
- * @vaddr: The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ * Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
*/
-static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
- void *vaddr)
+static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
- drm_gem_vram_vunmap(gbo, vaddr);
+ drm_gem_vram_vunmap(gbo, map);
}
/*
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 9682c26d89bb..f5be627e1de0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a6d9932a32ae..bc2543dd987d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
}
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
- return etnaviv_gem_vmap(obj);
+ void *vaddr = etnaviv_gem_vmap(obj);
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+
+ return 0;
}
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..832e5280a6ed 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
return drm_gem_shmem_pin(obj);
}
-static void *lima_gem_vmap(struct drm_gem_object *obj)
+static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct lima_bo *bo = to_lima_bo(obj);
if (bo->heap_size)
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
- return drm_gem_shmem_vmap(obj);
+ return drm_gem_shmem_vmap(obj, map);
}
static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e9a40d..a070a85f8f36 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0 OR MIT
/* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
+#include <linux/dma-buf-map.h>
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
@@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
struct lima_dump_chunk_buffer *buffer_chunk;
u32 size, task_size, mem_size;
int i;
+ struct dma_buf_map map;
+ int ret;
mutex_lock(&dev->error_task_list_lock);
@@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
} else {
buffer_chunk->size = lima_bo_size(bo);
- data = drm_gem_shmem_vmap(&bo->base.base);
- if (IS_ERR_OR_NULL(data)) {
+ ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+ if (ret) {
kvfree(et);
goto out;
}
- memcpy(buffer_chunk + 1, data, buffer_chunk->size);
+ memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
- drm_gem_shmem_vunmap(&bo->base.base, data);
+ drm_gem_shmem_vunmap(&bo->base.base, &map);
}
buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..8ef76769b97f 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -9,6 +9,7 @@
*/
#include <linux/delay.h>
+#include <linux/dma-buf-map.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_atomic_state_helper.h>
@@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
struct drm_rect *clip)
{
struct drm_device *dev = &mdev->base;
+ struct dma_buf_map map;
void *vmap;
+ int ret;
- vmap = drm_gem_shmem_vmap(fb->obj[0]);
- if (drm_WARN_ON(dev, !vmap))
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (drm_WARN_ON(dev, ret))
return; /* BUG: SHMEM BO should always be vmapped */
+ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
- drm_gem_shmem_vunmap(fb->obj[0], vmap);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
/* Always scanout image at VRAM offset 0 */
mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 5dec1e5694b7..9436310d0854 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -6,6 +6,7 @@ config DRM_NOUVEAU
select FW_LOADER
select DRM_KMS_HELPER
select DRM_TTM
+ select DRM_TTM_HELPER
select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
select X86_PLATFORM_DEVICES if ACPI && X86
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 641ef6298a0e..6045b85a762a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -39,8 +39,6 @@ struct nouveau_bo {
unsigned mode;
struct nouveau_drm_tile *tile;
-
- struct ttm_bo_kmap_obj dma_buf_vmap;
};
static inline struct nouveau_bo *
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 9a421c3949de..f942b526b0a5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -24,6 +24,8 @@
*
*/
+#include <drm/drm_gem_ttm_helper.h>
+
#include "nouveau_drv.h"
#include "nouveau_dma.h"
#include "nouveau_fence.h"
@@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
.pin = nouveau_gem_prime_pin,
.unpin = nouveau_gem_prime_unpin,
.get_sg_table = nouveau_gem_prime_get_sg_table,
- .vmap = nouveau_gem_prime_vmap,
- .vunmap = nouveau_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
int
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index b35c180322e2..3b919c7c931c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
-extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
#endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index a8264aebf3d4..2f16b5249283 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
}
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
- int ret;
-
- ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
- &nvbo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return nvbo->dma_buf_vmap.virtual;
-}
-
-void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-
- ttm_bo_kunmap(&nvbo->dma_buf_vmap);
-}
-
struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg)
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index fdbc8d949135..5ab03d605f57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -5,6 +5,7 @@
#include <drm/drm_gem_shmem_helper.h>
#include <drm/panfrost_drm.h>
#include <linux/completion.h>
+#include <linux/dma-buf-map.h>
#include <linux/iopoll.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
@@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
{
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+ struct dma_buf_map map;
struct drm_gem_shmem_object *bo;
u32 cfg, as;
int ret;
@@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
goto err_close_bo;
}
- perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
- if (IS_ERR(perfcnt->buf)) {
- ret = PTR_ERR(perfcnt->buf);
+ ret = drm_gem_shmem_vmap(&bo->base, &map);
+ if (ret)
goto err_put_mapping;
- }
+ perfcnt->buf = map.vaddr;
/*
* Invalidate the cache and clear the counters to start from a fresh
@@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
return 0;
err_vunmap:
- drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
+ drm_gem_shmem_vunmap(&bo->base, &map);
err_put_mapping:
panfrost_gem_mapping_put(perfcnt->mapping);
err_close_bo:
@@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
{
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
if (user != perfcnt->user)
return -EINVAL;
@@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
perfcnt->user = NULL;
- drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
+ drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
perfcnt->buf = NULL;
panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 45fd76e04bdc..e165fa9b2089 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -25,6 +25,7 @@
#include <linux/crc32.h>
#include <linux/delay.h>
+#include <linux/dma-buf-map.h>
#include <drm/drm_drv.h>
#include <drm/drm_atomic.h>
@@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
struct drm_gem_object *obj;
struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
int ret;
+ struct dma_buf_map user_map;
+ struct dma_buf_map cursor_map;
void *user_ptr;
int size = 64*64*4;
@@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
user_bo = gem_to_qxl_bo(obj);
/* pinning is done in the prepare/cleanup framevbuffer */
- ret = qxl_bo_kmap(user_bo, &user_ptr);
+ ret = qxl_bo_kmap(user_bo, &user_map);
if (ret)
goto out_free_release;
+ user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
ret = qxl_alloc_bo_reserved(qdev, release,
sizeof(struct qxl_cursor) + size,
@@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
if (ret)
goto out_unpin;
- ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+ ret = qxl_bo_kmap(cursor_bo, &cursor_map);
if (ret)
goto out_backoff;
@@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
{
int ret;
struct drm_gem_object *gobj;
+ struct dma_buf_map map;
int monitors_config_size = sizeof(struct qxl_monitors_config) +
qxl_num_crtc * sizeof(struct qxl_head);
@@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
if (ret)
return ret;
- qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+ qxl_bo_kmap(qdev->monitors_config_bo, &map);
qdev->monitors_config = qdev->monitors_config_bo->kptr;
qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..7b7acb910780 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -20,6 +20,8 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <linux/dma-buf-map.h>
+
#include <drm/drm_fourcc.h>
#include "qxl_drv.h"
@@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
unsigned int num_clips,
struct qxl_bo *clips_bo)
{
+ struct dma_buf_map map;
struct qxl_clip_rects *dev_clips;
int ret;
- ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
- if (ret) {
+ ret = qxl_bo_kmap(clips_bo, &map);
+ if (ret)
return NULL;
- }
+ dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
+
dev_clips->num_rects = num_clips;
dev_clips->chunk.next_chunk = 0;
dev_clips->chunk.prev_chunk = 0;
@@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
int stride = fb->pitches[0];
/* depth is not actually interesting, we don't mask with it */
int depth = fb->format->cpp[0] * 8;
+ struct dma_buf_map surface_map;
uint8_t *surface_base;
struct qxl_release *release;
struct qxl_bo *clips_bo;
@@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
if (ret)
goto out_release_backoff;
- ret = qxl_bo_kmap(bo, (void **)&surface_base);
+ ret = qxl_bo_kmap(bo, &surface_map);
if (ret)
goto out_release_backoff;
+ surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
ret = qxl_image_init(qdev, release, dimage, surface_base,
left - dumb_shadow_offset,
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..eb437fea5d9e 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -30,6 +30,7 @@
* Definitions taken from spice-protocol, plus kernel driver specific bits.
*/
+#include <linux/dma-buf-map.h>
#include <linux/dma-fence.h>
#include <linux/firmware.h>
#include <linux/platform_device.h>
@@ -50,6 +51,8 @@
#include "qxl_dev.h"
+struct dma_buf_map;
+
#define DRIVER_AUTHOR "Dave Airlie"
#define DRIVER_NAME "qxl"
@@ -79,7 +82,7 @@ struct qxl_bo {
/* Protected by tbo.reserved */
struct ttm_place placements[3];
struct ttm_placement placement;
- struct ttm_bo_kmap_obj kmap;
+ struct dma_buf_map map;
void *kptr;
unsigned int map_count;
int type;
@@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
void qxl_gem_object_close(struct drm_gem_object *obj,
struct drm_file *file_priv);
void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
/* qxl_dumb.c */
int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
struct drm_gem_object *qxl_gem_prime_import_sg_table(
struct drm_device *dev, struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+ struct dma_buf_map *map);
int qxl_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 940e99354f49..755df4d8f95f 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -23,10 +23,12 @@
* Alon Levy
*/
+#include <linux/dma-buf-map.h>
+#include <linux/io-mapping.h>
+
#include "qxl_drv.h"
#include "qxl_object.h"
-#include <linux/io-mapping.h>
static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
{
struct qxl_bo *bo;
@@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
return 0;
}
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
{
- bool is_iomem;
int r;
if (bo->kptr) {
- if (ptr)
- *ptr = bo->kptr;
bo->map_count++;
- return 0;
+ goto out;
}
- r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
+ r = ttm_bo_vmap(&bo->tbo, &bo->map);
if (r)
return r;
- bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
- if (ptr)
- *ptr = bo->kptr;
bo->map_count = 1;
+
+ /* TODO: Remove kptr in favor of map everywhere. */
+ if (bo->map.is_iomem)
+ bo->kptr = (void *)bo->map.vaddr_iomem;
+ else
+ bo->kptr = bo->map.vaddr;
+
+out:
+ *map = bo->map;
return 0;
}
@@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
void *rptr;
int ret;
struct io_mapping *map;
+ struct dma_buf_map bo_map;
if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
map = qdev->vram_mapping;
@@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
return rptr;
}
- ret = qxl_bo_kmap(bo, &rptr);
+ ret = qxl_bo_kmap(bo, &bo_map);
if (ret)
return NULL;
+ rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
rptr += page_offset * PAGE_SIZE;
return rptr;
@@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
if (bo->map_count > 0)
return;
bo->kptr = NULL;
- ttm_bo_kunmap(&bo->kmap);
+ ttm_bo_vunmap(&bo->tbo, &bo->map);
}
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 09a5c818324d..ebf24c9d2bf2 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
bool kernel, bool pinned, u32 domain,
struct qxl_surface *surf,
struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
extern void qxl_bo_kunmap(struct qxl_bo *bo);
void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
return ERR_PTR(-ENOSYS);
}
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
- void *ptr;
int ret;
- ret = qxl_bo_kmap(bo, &ptr);
+ ret = qxl_bo_kmap(bo, map);
if (ret < 0)
- return ERR_PTR(ret);
+ return ret;
- return ptr;
+ return 0;
}
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+ struct dma_buf_map *map)
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5d54bccebd4d..44cb5ee6fc20 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -509,7 +509,6 @@ struct radeon_bo {
/* Constant after initialization */
struct radeon_device *rdev;
- struct ttm_bo_kmap_obj dma_buf_vmap;
pid_t pid;
#ifdef CONFIG_MMU_NOTIFIER
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 0ccd7213e41f..d2876ce3bc9e 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -31,6 +31,7 @@
#include <drm/drm_debugfs.h>
#include <drm/drm_device.h>
#include <drm/drm_file.h>
+#include <drm/drm_gem_ttm_helper.h>
#include <drm/radeon_drm.h>
#include "radeon.h"
@@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
int radeon_gem_prime_pin(struct drm_gem_object *obj);
void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
static const struct drm_gem_object_funcs radeon_gem_object_funcs;
@@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
.pin = radeon_gem_prime_pin,
.unpin = radeon_gem_prime_unpin,
.get_sg_table = radeon_gem_prime_get_sg_table,
- .vmap = radeon_gem_prime_vmap,
- .vunmap = radeon_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
/*
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b9de0e51c0be..088d39a51c0d 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
}
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct radeon_bo *bo = gem_to_radeon_bo(obj);
- int ret;
-
- ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
- &bo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return bo->dma_buf_vmap.virtual;
-}
-
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct radeon_bo *bo = gem_to_radeon_bo(obj);
-
- ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7d5ebb10323b..7971f57436dd 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
return ERR_PTR(ret);
}
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
- if (rk_obj->pages)
- return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
- pgprot_writecombine(PAGE_KERNEL));
+ if (rk_obj->pages) {
+ void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+ pgprot_writecombine(PAGE_KERNEL));
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+ return 0;
+ }
if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
- return NULL;
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
- return rk_obj->kvaddr;
+ return 0;
}
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
if (rk_obj->pages) {
- vunmap(vaddr);
+ vunmap(map->vaddr);
return;
}
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 7ffc541bea07..5a70a56cd406 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -31,8 +31,8 @@ struct drm_gem_object *
rockchip_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg);
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
/* drm driver mmap file operations */
int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..c02e35ed6e76 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -17,6 +17,7 @@
*/
#include <linux/console.h>
+#include <linux/dma-buf-map.h>
#include <linux/module.h>
#include <linux/pci.h>
@@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
struct drm_rect *rect)
{
struct cirrus_device *cirrus = to_cirrus(fb->dev);
+ struct dma_buf_map map;
void *vmap;
int idx, ret;
@@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
if (!drm_dev_enter(&cirrus->dev, &idx))
goto out;
- ret = -ENOMEM;
- vmap = drm_gem_shmem_vmap(fb->obj[0]);
- if (!vmap)
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret)
goto out_dev_exit;
+ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
if (cirrus->cpp == fb->format->cpp[0])
drm_fb_memcpy_dstclip(cirrus->vram,
@@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
else
WARN_ON_ONCE("cpp mismatch");
- drm_gem_shmem_vunmap(fb->obj[0], vmap);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
ret = 0;
out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..12a890cea6e9 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
{
int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
struct drm_framebuffer *fb;
+ struct dma_buf_map map;
void *vaddr;
u8 *src;
@@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
y1 = gm12u320->fb_update.rect.y1;
y2 = gm12u320->fb_update.rect.y2;
- vaddr = drm_gem_shmem_vmap(fb->obj[0]);
- if (IS_ERR(vaddr)) {
- GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret) {
+ GM12U320_ERR("failed to vmap fb: %d\n", ret);
goto put_fb;
}
+ vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
if (fb->obj[0]->import_attach) {
ret = dma_buf_begin_cpu_access(
@@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
}
vunmap:
- drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
put_fb:
drm_framebuffer_put(fb);
gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..42eeba1dfdbf 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
struct urb *urb;
struct drm_rect clip;
int log_bpp;
+ struct dma_buf_map map;
void *vaddr;
ret = udl_log_cpp(fb->format->cpp[0]);
@@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
return ret;
}
- vaddr = drm_gem_shmem_vmap(fb->obj[0]);
- if (IS_ERR(vaddr)) {
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret) {
DRM_ERROR("failed to vmap fb\n");
goto out_dma_buf_end_cpu_access;
}
+ vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
urb = udl_get_urb(dev);
if (!urb)
@@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
ret = 0;
out_drm_gem_shmem_vunmap:
- drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
out_dma_buf_end_cpu_access:
if (import_attach) {
tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 931c55126148..f268fb258c83 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -9,6 +9,8 @@
* Michael Thayer <michael.thayer@oracle.com,
* Hans de Goede <hdegoede@redhat.com>
*/
+
+#include <linux/dma-buf-map.h>
#include <linux/export.h>
#include <drm/drm_atomic.h>
@@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
u32 height = plane->state->crtc_h;
size_t data_size, mask_size;
u32 flags;
+ struct dma_buf_map map;
+ int ret;
u8 *src;
/*
@@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
vbox_crtc->cursor_enabled = true;
- src = drm_gem_vram_vmap(gbo);
- if (IS_ERR(src)) {
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret) {
/*
* BUG: we should have pinned the BO in prepare_fb().
*/
@@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
DRM_WARN("Could not map cursor bo, skipping update\n");
return;
}
+ src = map.vaddr; /* TODO: Use mapping abstraction properly */
/*
* The mask must be calculated based on the alpha
@@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
data_size = width * height * 4 + mask_size;
copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
- drm_gem_vram_vunmap(gbo, src);
+ drm_gem_vram_vunmap(gbo, &map);
flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
VBOX_MOUSE_POINTER_ALPHA;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 557f0d1e6437..f290a9a942dc 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
return drm_gem_cma_prime_mmap(obj, vma);
}
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct vc4_bo *bo = to_vc4_bo(obj);
if (bo->validated_shader) {
DRM_DEBUG("mmaping of shader BOs not allowed.\n");
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
}
- return drm_gem_cma_prime_vmap(obj);
+ return drm_gem_cma_prime_vmap(obj, map);
}
struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index cc79b1aaa878..904f2c36c963 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int vc4_bo_cache_init(struct drm_device *dev);
void vc4_bo_cache_destroy(struct drm_device *dev);
int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..b2aa26e1e4a2 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
return &obj->base;
}
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
long n_pages = obj->size >> PAGE_SHIFT;
struct page **pages;
+ void *vaddr;
pages = vgem_pin_pages(bo);
if (IS_ERR(pages))
- return NULL;
+ return PTR_ERR(pages);
+
+ vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
- return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ return 0;
}
-static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
- vunmap(vaddr);
+ vunmap(map->vaddr);
vgem_unpin_pages(bo);
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..74db5a840bed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return gem_mmap_obj(xen_obj, vma);
}
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
{
struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+ void *vaddr;
if (!xen_obj->pages)
- return NULL;
+ return -ENOMEM;
/* Please see comment in gem_mmap_obj on mapping and attributes. */
- return vmap(xen_obj->pages, xen_obj->num_pages,
- VM_MAP, PAGE_KERNEL);
+ vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
+ VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+
+ return 0;
}
void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
- void *vaddr)
+ struct dma_buf_map *map)
{
- vunmap(vaddr);
+ vunmap(map->vaddr);
}
int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..a4e67d0a149c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -12,6 +12,7 @@
#define __XEN_DRM_FRONT_GEM_H
struct dma_buf_attachment;
+struct dma_buf_map;
struct drm_device;
struct drm_gem_object;
struct file;
@@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+ struct dma_buf_map *map);
void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
- void *vaddr);
+ struct dma_buf_map *map);
int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
struct vm_area_struct *vma);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c38dd35da00b..5e6daa1c982f 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
#include <drm/drm_vma_manager.h>
+struct dma_buf_map;
struct drm_gem_object;
/**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
*
* This callback is optional.
*/
- void *(*vmap)(struct drm_gem_object *obj);
+ int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
/**
* @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
*
* This callback is optional.
*/
- void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+ void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
/**
* @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index a064b0d1c480..caf98b9cf4b4 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
struct sg_table *sgt);
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
struct drm_gem_object *
drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
int drm_gem_shmem_pin(struct drm_gem_object *obj);
void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..c0d28ba0f5c9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -10,6 +10,7 @@
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
+#include <linux/dma-buf-map.h>
#include <linux/kernel.h> /* for container_of() */
struct drm_mode_create_dumb;
@@ -29,9 +30,8 @@ struct vm_area_struct;
/**
* struct drm_gem_vram_object - GEM object backed by VRAM
- * @gem: GEM object
* @bo: TTM buffer object
- * @kmap: Mapping information for @bo
+ * @map: Mapping information for @bo
* @placement: TTM placement information. Supported placements are \
%TTM_PL_VRAM and %TTM_PL_SYSTEM
* @placements: TTM placement information.
@@ -50,15 +50,15 @@ struct vm_area_struct;
*/
struct drm_gem_vram_object {
struct ttm_buffer_object bo;
- struct ttm_bo_kmap_obj kmap;
+ struct dma_buf_map map;
/**
- * @kmap_use_count:
+ * @vmap_use_count:
*
* Reference count on the virtual address.
* The address are un-mapped when the count reaches zero.
*/
- unsigned int kmap_use_count;
+ unsigned int vmap_use_count;
/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
struct ttm_placement placement;
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
int drm_gem_vram_fill_create_dumb(struct drm_file *file,
struct drm_device *dev,
--
2.28.0
* [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
@ 2020-10-15 12:38 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
linux-media
This patch replaces vmap/vunmap's use of raw pointers in GEM object
functions with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned type.
TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.
v4:
* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
* fix a trailing { in drm_gem_vmap()
* remove several empty functions instead of converting them (Daniel)
* comment uses of raw pointers with a TODO (Daniel)
* TODO list: convert more helpers to use struct dma_buf_map
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
Documentation/gpu/todo.rst | 18 ++++
drivers/gpu/drm/Kconfig | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
drivers/gpu/drm/ast/ast_drv.h | 7 +-
drivers/gpu/drm/drm_gem.c | 23 +++--
drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
drivers/gpu/drm/lima/lima_gem.c | 6 +-
drivers/gpu/drm/lima/lima_sched.c | 11 +-
drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
drivers/gpu/drm/nouveau/Kconfig | 1 +
drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
drivers/gpu/drm/qxl/qxl_display.c | 11 +-
drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
drivers/gpu/drm/qxl/qxl_object.h | 2 +-
drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
drivers/gpu/drm/radeon/radeon.h | 1 -
drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
drivers/gpu/drm/tiny/cirrus.c | 10 +-
drivers/gpu/drm/tiny/gm12u320.c | 10 +-
drivers/gpu/drm/udl/udl_modeset.c | 8 +-
drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
include/drm/drm_gem.h | 5 +-
include/drm/drm_gem_cma_helper.h | 2 +-
include/drm/drm_gem_shmem_helper.h | 4 +-
include/drm/drm_gem_vram_helper.h | 14 +--
47 files changed, 321 insertions(+), 295 deletions(-)
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 700637e25ecd..7e6fc3c04add 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
Level: Intermediate
+Use struct dma_buf_map throughout codebase
+------------------------------------------
+
+Pointers to shared device memory are stored in struct dma_buf_map. Each
+instance knows whether it refers to system or I/O memory. Most of the DRM-wide
+interface have been converted to use struct dma_buf_map, but implementations
+often still use raw pointers.
+
+The task is to use struct dma_buf_map where it makes sense.
+
+* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
+* TTM might benefit from using struct dma_buf_map internally.
+* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
+
+Level: Intermediate
+
Core refactorings
=================
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 147d61b9674e..319839b87d37 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -239,6 +239,7 @@ config DRM_RADEON
select FW_LOADER
select DRM_KMS_HELPER
select DRM_TTM
+ select DRM_TTM_HELPER
select POWER_SUPPLY
select HWMON
select BACKLIGHT_CLASS_DEVICE
@@ -259,6 +260,7 @@ config DRM_AMDGPU
select DRM_KMS_HELPER
select DRM_SCHED
select DRM_TTM
+ select DRM_TTM_HELPER
select POWER_SUPPLY
select HWMON
select BACKLIGHT_CLASS_DEVICE
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 5b465ab774d1..e5919efca870 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -41,42 +41,6 @@
#include <linux/dma-fence-array.h>
#include <linux/pci-p2pdma.h>
-/**
- * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
- * @obj: GEM BO
- *
- * Sets up an in-kernel virtual mapping of the BO's memory.
- *
- * Returns:
- * The virtual address of the mapping or an error pointer.
- */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
- int ret;
-
- ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
- &bo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return bo->dma_buf_vmap.virtual;
-}
-
-/**
- * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
- * @obj: GEM BO
- * @vaddr: Virtual address (unused)
- *
- * Tears down the in-kernel virtual mapping of the BO's memory.
- */
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-
- ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
/**
* amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
* @obj: GEM BO
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 2c5c84a06bb9..39b5b9616fd8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf);
bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
struct amdgpu_bo *bo);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index be08a63ef58c..576659827e74 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -33,6 +33,7 @@
#include <drm/amdgpu_drm.h>
#include <drm/drm_debugfs.h>
+#include <drm/drm_gem_ttm_helper.h>
#include "amdgpu.h"
#include "amdgpu_display.h"
@@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
.open = amdgpu_gem_object_open,
.close = amdgpu_gem_object_close,
.export = amdgpu_gem_prime_export,
- .vmap = amdgpu_gem_prime_vmap,
- .vunmap = amdgpu_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
/*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 132e5f955180..01296ef0d673 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -100,7 +100,6 @@ struct amdgpu_bo {
struct amdgpu_bo *parent;
struct amdgpu_bo *shadow;
- struct ttm_bo_kmap_obj dma_buf_vmap;
struct amdgpu_mn *mn;
diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index e0f4613918ad..742d43a7edf4 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
gbo = ast->cursor.gbo[i];
- drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+ drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
struct drm_device *dev = &ast->base;
size_t size, i;
struct drm_gem_vram_object *gbo;
- void __iomem *vaddr;
+ struct dma_buf_map map;
int ret;
size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
drm_gem_vram_put(gbo);
goto err_drm_gem_vram_put;
}
- vaddr = drm_gem_vram_vmap(gbo);
- if (IS_ERR(vaddr)) {
- ret = PTR_ERR(vaddr);
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret) {
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
goto err_drm_gem_vram_put;
}
ast->cursor.gbo[i] = gbo;
- ast->cursor.vaddr[i] = vaddr;
+ ast->cursor.map[i] = map;
}
return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
while (i) {
--i;
gbo = ast->cursor.gbo[i];
- drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+ drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
}
@@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
{
struct drm_device *dev = &ast->base;
struct drm_gem_vram_object *gbo;
+ struct dma_buf_map map;
int ret;
void *src;
void __iomem *dst;
@@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
ret = drm_gem_vram_pin(gbo, 0);
if (ret)
return ret;
- src = drm_gem_vram_vmap(gbo);
- if (IS_ERR(src)) {
- ret = PTR_ERR(src);
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret)
goto err_drm_gem_vram_unpin;
- }
+ src = map.vaddr; /* TODO: Use mapping abstraction properly */
- dst = ast->cursor.vaddr[ast->cursor.next_index];
+ dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
/* do data transfer to cursor BO */
update_cursor_image(dst, src, fb->width, fb->height);
- drm_gem_vram_vunmap(gbo, src);
+ drm_gem_vram_vunmap(gbo, &map);
drm_gem_vram_unpin(gbo);
return 0;
@@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
u8 __iomem *sig;
u8 jreg;
- dst = ast->cursor.vaddr[ast->cursor.next_index];
+ dst = ast->cursor.map[ast->cursor.next_index].vaddr;
sig = dst + AST_HWC_SIZE;
writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 467049ca8430..f963141dd851 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,10 +28,11 @@
#ifndef __AST_DRV_H__
#define __AST_DRV_H__
-#include <linux/types.h>
-#include <linux/io.h>
+#include <linux/dma-buf-map.h>
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>
+#include <linux/io.h>
+#include <linux/types.h>
#include <drm/drm_connector.h>
#include <drm/drm_crtc.h>
@@ -131,7 +132,7 @@ struct ast_private {
struct {
struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
- void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+ struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
unsigned int next_index;
} cursor;
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1da67d34e55d..a89ad4570e3c 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -36,6 +36,7 @@
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>
#include <linux/dma-buf.h>
+#include <linux/dma-buf-map.h>
#include <linux/mem_encrypt.h>
#include <linux/pagevec.h>
@@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
void *drm_gem_vmap(struct drm_gem_object *obj)
{
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
- if (obj->funcs->vmap)
- vaddr = obj->funcs->vmap(obj);
- else
- vaddr = ERR_PTR(-EOPNOTSUPP);
+ if (!obj->funcs->vmap)
+ return ERR_PTR(-EOPNOTSUPP);
- if (!vaddr)
- vaddr = ERR_PTR(-ENOMEM);
+ ret = obj->funcs->vmap(obj, &map);
+ if (ret)
+ return ERR_PTR(ret);
+ else if (dma_buf_map_is_null(&map))
+ return ERR_PTR(-ENOMEM);
- return vaddr;
+ return map.vaddr;
}
void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
{
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
+
if (!vaddr)
return;
if (obj->funcs->vunmap)
- obj->funcs->vunmap(obj, vaddr);
+ obj->funcs->vunmap(obj, &map);
}
/**
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index d527485ea0b7..b57e3e9222f0 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
* address space
* @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ * store.
*
* This function maps a buffer exported via DRM PRIME into the kernel's
* virtual address space. Since the CMA buffers are already mapped into the
@@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* driver's &drm_gem_object_funcs.vmap callback.
*
* Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
*/
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
- return cma_obj->vaddr;
+ dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+ return 0;
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index fb11df7aced5..5553f58f68f3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
}
EXPORT_SYMBOL(drm_gem_shmem_unpin);
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
{
struct drm_gem_object *obj = &shmem->base;
- struct dma_buf_map map;
int ret = 0;
- if (shmem->vmap_use_count++ > 0)
- return shmem->vaddr;
+ if (shmem->vmap_use_count++ > 0) {
+ dma_buf_map_set_vaddr(map, shmem->vaddr);
+ return 0;
+ }
if (obj->import_attach) {
- ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
- if (!ret)
- shmem->vaddr = map.vaddr;
+ ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+ if (!ret) {
+ if (WARN_ON(map->is_iomem)) {
+ ret = -EIO;
+ goto err_put_pages;
+ }
+ shmem->vaddr = map->vaddr;
+ }
} else {
pgprot_t prot = PAGE_KERNEL;
@@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
VM_MAP, prot);
if (!shmem->vaddr)
ret = -ENOMEM;
+ else
+ dma_buf_map_set_vaddr(map, shmem->vaddr);
}
if (ret) {
@@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
goto err_put_pages;
}
- return shmem->vaddr;
+ return 0;
err_put_pages:
if (!obj->import_attach)
@@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
err_zero_use:
shmem->vmap_use_count = 0;
- return ERR_PTR(ret);
+ return ret;
}
/*
* drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ * store.
*
* This function makes sure that a contiguous kernel virtual address mapping
* exists for the buffer backing the shmem GEM object.
@@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
* Returns:
* 0 on success or a negative error code on failure.
*/
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- void *vaddr;
int ret;
ret = mutex_lock_interruptible(&shmem->vmap_lock);
if (ret)
- return ERR_PTR(ret);
- vaddr = drm_gem_shmem_vmap_locked(shmem);
+ return ret;
+ ret = drm_gem_shmem_vmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
- return vaddr;
+ return ret;
}
EXPORT_SYMBOL(drm_gem_shmem_vmap);
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+ struct dma_buf_map *map)
{
struct drm_gem_object *obj = &shmem->base;
- struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
if (WARN_ON_ONCE(!shmem->vmap_use_count))
return;
@@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
return;
if (obj->import_attach)
- dma_buf_vunmap(obj->import_attach->dmabuf, &map);
+ dma_buf_vunmap(obj->import_attach->dmabuf, map);
else
vunmap(shmem->vaddr);
@@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
/*
* drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
* @shmem: shmem GEM object
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
*
* This function cleans up a kernel virtual address mapping acquired by
* drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
* also be called by drivers directly, in which case it will hide the
* differences between dma-buf imported and natively allocated objects.
*/
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
mutex_lock(&shmem->vmap_lock);
- drm_gem_shmem_vunmap_locked(shmem);
+ drm_gem_shmem_vunmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
}
EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 2d5ed30518f1..4d8553b28558 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/dma-buf-map.h>
#include <linux/module.h>
#include <drm/drm_debugfs.h>
@@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
* up; only release the GEM object.
*/
- WARN_ON(gbo->kmap_use_count);
- WARN_ON(gbo->kmap.virtual);
+ WARN_ON(gbo->vmap_use_count);
+ WARN_ON(dma_buf_map_is_set(&gbo->map));
drm_gem_object_release(&gbo->bo.base);
}
@@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
}
EXPORT_SYMBOL(drm_gem_vram_unpin);
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
+ struct dma_buf_map *map)
{
int ret;
- struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
- bool is_iomem;
- if (gbo->kmap_use_count > 0)
+ if (gbo->vmap_use_count > 0)
goto out;
- ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
+ ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
if (ret)
- return ERR_PTR(ret);
+ return ret;
out:
- ++gbo->kmap_use_count;
- return ttm_kmap_obj_virtual(kmap, &is_iomem);
+ ++gbo->vmap_use_count;
+ *map = gbo->map;
+
+ return 0;
}
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+ struct dma_buf_map *map)
{
- if (WARN_ON_ONCE(!gbo->kmap_use_count))
+ struct drm_device *dev = gbo->bo.base.dev;
+
+ if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
return;
- if (--gbo->kmap_use_count > 0)
+
+ if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
+ return; /* BUG: map not mapped from this BO */
+
+ if (--gbo->vmap_use_count > 0)
return;
/*
@@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
/**
* drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
* space
- * @gbo: The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ * store.
*
* The vmap function pins a GEM VRAM object to its current location, either
* system or video memory, and maps its buffer into kernel address space.
@@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
* unmap and unpin the GEM VRAM object.
*
* Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
*/
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
{
int ret;
- void *base;
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
if (ret)
- return ERR_PTR(ret);
+ return ret;
ret = drm_gem_vram_pin_locked(gbo, 0);
if (ret)
goto err_ttm_bo_unreserve;
- base = drm_gem_vram_kmap_locked(gbo);
- if (IS_ERR(base)) {
- ret = PTR_ERR(base);
+ ret = drm_gem_vram_kmap_locked(gbo, map);
+ if (ret)
goto err_drm_gem_vram_unpin_locked;
- }
ttm_bo_unreserve(&gbo->bo);
- return base;
+ return 0;
err_drm_gem_vram_unpin_locked:
drm_gem_vram_unpin_locked(gbo);
err_ttm_bo_unreserve:
ttm_bo_unreserve(&gbo->bo);
- return ERR_PTR(ret);
+ return ret;
}
EXPORT_SYMBOL(drm_gem_vram_vmap);
/**
* drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo: The GEM VRAM object to unmap
- * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
*
* A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
* the documentation for drm_gem_vram_vmap() for more information.
*/
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
{
int ret;
@@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
return;
- drm_gem_vram_kunmap_locked(gbo);
+ drm_gem_vram_kunmap_locked(gbo, map);
drm_gem_vram_unpin_locked(gbo);
ttm_bo_unreserve(&gbo->bo);
@@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
bool evict,
struct ttm_resource *new_mem)
{
- struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+ struct ttm_buffer_object *bo = &gbo->bo;
+ struct drm_device *dev = bo->base.dev;
- if (WARN_ON_ONCE(gbo->kmap_use_count))
+ if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
return;
- if (!kmap->virtual)
- return;
- ttm_bo_kunmap(kmap);
- kmap->virtual = NULL;
+ ttm_bo_vunmap(bo, &gbo->map);
}
static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
@@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
}
/**
- * drm_gem_vram_object_vmap() - \
- Implements &struct drm_gem_object_funcs.vmap
- * @gem: The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ * Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ * store.
*
* Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
*/
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
- void *base;
- base = drm_gem_vram_vmap(gbo);
- if (IS_ERR(base))
- return NULL;
- return base;
+ return drm_gem_vram_vmap(gbo, map);
}
/**
- * drm_gem_vram_object_vunmap() - \
- Implements &struct drm_gem_object_funcs.vunmap
- * @gem: The GEM object to unmap
- * @vaddr: The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ * Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
*/
-static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
- void *vaddr)
+static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
- drm_gem_vram_vunmap(gbo, vaddr);
+ drm_gem_vram_vunmap(gbo, map);
}
/*
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 9682c26d89bb..f5be627e1de0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a6d9932a32ae..bc2543dd987d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
}
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
- return etnaviv_gem_vmap(obj);
+ void *vaddr = etnaviv_gem_vmap(obj);
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+
+ return 0;
}
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..832e5280a6ed 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
return drm_gem_shmem_pin(obj);
}
-static void *lima_gem_vmap(struct drm_gem_object *obj)
+static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct lima_bo *bo = to_lima_bo(obj);
if (bo->heap_size)
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
- return drm_gem_shmem_vmap(obj);
+ return drm_gem_shmem_vmap(obj, map);
}
static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e9a40d..a070a85f8f36 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0 OR MIT
/* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
+#include <linux/dma-buf-map.h>
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
@@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
struct lima_dump_chunk_buffer *buffer_chunk;
u32 size, task_size, mem_size;
int i;
+ struct dma_buf_map map;
+ int ret;
mutex_lock(&dev->error_task_list_lock);
@@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
} else {
buffer_chunk->size = lima_bo_size(bo);
- data = drm_gem_shmem_vmap(&bo->base.base);
- if (IS_ERR_OR_NULL(data)) {
+ ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+ if (ret) {
kvfree(et);
goto out;
}
- memcpy(buffer_chunk + 1, data, buffer_chunk->size);
+ memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
- drm_gem_shmem_vunmap(&bo->base.base, data);
+ drm_gem_shmem_vunmap(&bo->base.base, &map);
}
buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..8ef76769b97f 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -9,6 +9,7 @@
*/
#include <linux/delay.h>
+#include <linux/dma-buf-map.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_atomic_state_helper.h>
@@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
struct drm_rect *clip)
{
struct drm_device *dev = &mdev->base;
+ struct dma_buf_map map;
void *vmap;
+ int ret;
- vmap = drm_gem_shmem_vmap(fb->obj[0]);
- if (drm_WARN_ON(dev, !vmap))
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (drm_WARN_ON(dev, ret))
return; /* BUG: SHMEM BO should always be vmapped */
+ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
- drm_gem_shmem_vunmap(fb->obj[0], vmap);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
/* Always scanout image at VRAM offset 0 */
mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 5dec1e5694b7..9436310d0854 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -6,6 +6,7 @@ config DRM_NOUVEAU
select FW_LOADER
select DRM_KMS_HELPER
select DRM_TTM
+ select DRM_TTM_HELPER
select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
select X86_PLATFORM_DEVICES if ACPI && X86
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 641ef6298a0e..6045b85a762a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -39,8 +39,6 @@ struct nouveau_bo {
unsigned mode;
struct nouveau_drm_tile *tile;
-
- struct ttm_bo_kmap_obj dma_buf_vmap;
};
static inline struct nouveau_bo *
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 9a421c3949de..f942b526b0a5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -24,6 +24,8 @@
*
*/
+#include <drm/drm_gem_ttm_helper.h>
+
#include "nouveau_drv.h"
#include "nouveau_dma.h"
#include "nouveau_fence.h"
@@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
.pin = nouveau_gem_prime_pin,
.unpin = nouveau_gem_prime_unpin,
.get_sg_table = nouveau_gem_prime_get_sg_table,
- .vmap = nouveau_gem_prime_vmap,
- .vunmap = nouveau_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
int
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index b35c180322e2..3b919c7c931c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
-extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
#endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index a8264aebf3d4..2f16b5249283 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
}
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
- int ret;
-
- ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
- &nvbo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return nvbo->dma_buf_vmap.virtual;
-}
-
-void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-
- ttm_bo_kunmap(&nvbo->dma_buf_vmap);
-}
-
struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg)
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index fdbc8d949135..5ab03d605f57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -5,6 +5,7 @@
#include <drm/drm_gem_shmem_helper.h>
#include <drm/panfrost_drm.h>
#include <linux/completion.h>
+#include <linux/dma-buf-map.h>
#include <linux/iopoll.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
@@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
{
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+ struct dma_buf_map map;
struct drm_gem_shmem_object *bo;
u32 cfg, as;
int ret;
@@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
goto err_close_bo;
}
- perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
- if (IS_ERR(perfcnt->buf)) {
- ret = PTR_ERR(perfcnt->buf);
+ ret = drm_gem_shmem_vmap(&bo->base, &map);
+ if (ret)
goto err_put_mapping;
- }
+ perfcnt->buf = map.vaddr;
/*
* Invalidate the cache and clear the counters to start from a fresh
@@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
return 0;
err_vunmap:
- drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
+ drm_gem_shmem_vunmap(&bo->base, &map);
err_put_mapping:
panfrost_gem_mapping_put(perfcnt->mapping);
err_close_bo:
@@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
{
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
if (user != perfcnt->user)
return -EINVAL;
@@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
perfcnt->user = NULL;
- drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
+ drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
perfcnt->buf = NULL;
panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 45fd76e04bdc..e165fa9b2089 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -25,6 +25,7 @@
#include <linux/crc32.h>
#include <linux/delay.h>
+#include <linux/dma-buf-map.h>
#include <drm/drm_drv.h>
#include <drm/drm_atomic.h>
@@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
struct drm_gem_object *obj;
struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
int ret;
+ struct dma_buf_map user_map;
+ struct dma_buf_map cursor_map;
void *user_ptr;
int size = 64*64*4;
@@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
user_bo = gem_to_qxl_bo(obj);
/* pinning is done in the prepare/cleanup framevbuffer */
- ret = qxl_bo_kmap(user_bo, &user_ptr);
+ ret = qxl_bo_kmap(user_bo, &user_map);
if (ret)
goto out_free_release;
+ user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
ret = qxl_alloc_bo_reserved(qdev, release,
sizeof(struct qxl_cursor) + size,
@@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
if (ret)
goto out_unpin;
- ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+ ret = qxl_bo_kmap(cursor_bo, &cursor_map);
if (ret)
goto out_backoff;
@@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
{
int ret;
struct drm_gem_object *gobj;
+ struct dma_buf_map map;
int monitors_config_size = sizeof(struct qxl_monitors_config) +
qxl_num_crtc * sizeof(struct qxl_head);
@@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
if (ret)
return ret;
- qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+ qxl_bo_kmap(qdev->monitors_config_bo, &map);
qdev->monitors_config = qdev->monitors_config_bo->kptr;
qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..7b7acb910780 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -20,6 +20,8 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <linux/dma-buf-map.h>
+
#include <drm/drm_fourcc.h>
#include "qxl_drv.h"
@@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
unsigned int num_clips,
struct qxl_bo *clips_bo)
{
+ struct dma_buf_map map;
struct qxl_clip_rects *dev_clips;
int ret;
- ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
- if (ret) {
+ ret = qxl_bo_kmap(clips_bo, &map);
+ if (ret)
return NULL;
- }
+ dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
+
dev_clips->num_rects = num_clips;
dev_clips->chunk.next_chunk = 0;
dev_clips->chunk.prev_chunk = 0;
@@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
int stride = fb->pitches[0];
/* depth is not actually interesting, we don't mask with it */
int depth = fb->format->cpp[0] * 8;
+ struct dma_buf_map surface_map;
uint8_t *surface_base;
struct qxl_release *release;
struct qxl_bo *clips_bo;
@@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
if (ret)
goto out_release_backoff;
- ret = qxl_bo_kmap(bo, (void **)&surface_base);
+ ret = qxl_bo_kmap(bo, &surface_map);
if (ret)
goto out_release_backoff;
+ surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
ret = qxl_image_init(qdev, release, dimage, surface_base,
left - dumb_shadow_offset,
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..eb437fea5d9e 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -30,6 +30,7 @@
* Definitions taken from spice-protocol, plus kernel driver specific bits.
*/
+#include <linux/dma-buf-map.h>
#include <linux/dma-fence.h>
#include <linux/firmware.h>
#include <linux/platform_device.h>
@@ -50,6 +51,8 @@
#include "qxl_dev.h"
+struct dma_buf_map;
+
#define DRIVER_AUTHOR "Dave Airlie"
#define DRIVER_NAME "qxl"
@@ -79,7 +82,7 @@ struct qxl_bo {
/* Protected by tbo.reserved */
struct ttm_place placements[3];
struct ttm_placement placement;
- struct ttm_bo_kmap_obj kmap;
+ struct dma_buf_map map;
void *kptr;
unsigned int map_count;
int type;
@@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
void qxl_gem_object_close(struct drm_gem_object *obj,
struct drm_file *file_priv);
void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
/* qxl_dumb.c */
int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
struct drm_gem_object *qxl_gem_prime_import_sg_table(
struct drm_device *dev, struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+ struct dma_buf_map *map);
int qxl_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 940e99354f49..755df4d8f95f 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -23,10 +23,12 @@
* Alon Levy
*/
+#include <linux/dma-buf-map.h>
+#include <linux/io-mapping.h>
+
#include "qxl_drv.h"
#include "qxl_object.h"
-#include <linux/io-mapping.h>
static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
{
struct qxl_bo *bo;
@@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
return 0;
}
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
{
- bool is_iomem;
int r;
if (bo->kptr) {
- if (ptr)
- *ptr = bo->kptr;
bo->map_count++;
- return 0;
+ goto out;
}
- r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
+ r = ttm_bo_vmap(&bo->tbo, &bo->map);
if (r)
return r;
- bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
- if (ptr)
- *ptr = bo->kptr;
bo->map_count = 1;
+
+ /* TODO: Remove kptr in favor of map everywhere. */
+ if (bo->map.is_iomem)
+ bo->kptr = (void *)bo->map.vaddr_iomem;
+ else
+ bo->kptr = bo->map.vaddr;
+
+out:
+ *map = bo->map;
return 0;
}
@@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
void *rptr;
int ret;
struct io_mapping *map;
+ struct dma_buf_map bo_map;
if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
map = qdev->vram_mapping;
@@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
return rptr;
}
- ret = qxl_bo_kmap(bo, &rptr);
+ ret = qxl_bo_kmap(bo, &bo_map);
if (ret)
return NULL;
+ rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
rptr += page_offset * PAGE_SIZE;
return rptr;
@@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
if (bo->map_count > 0)
return;
bo->kptr = NULL;
- ttm_bo_kunmap(&bo->kmap);
+ ttm_bo_vunmap(&bo->tbo, &bo->map);
}
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 09a5c818324d..ebf24c9d2bf2 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
bool kernel, bool pinned, u32 domain,
struct qxl_surface *surf,
struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
extern void qxl_bo_kunmap(struct qxl_bo *bo);
void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
return ERR_PTR(-ENOSYS);
}
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
- void *ptr;
int ret;
- ret = qxl_bo_kmap(bo, &ptr);
+ ret = qxl_bo_kmap(bo, map);
if (ret < 0)
- return ERR_PTR(ret);
+ return ret;
- return ptr;
+ return 0;
}
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+ struct dma_buf_map *map)
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5d54bccebd4d..44cb5ee6fc20 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -509,7 +509,6 @@ struct radeon_bo {
/* Constant after initialization */
struct radeon_device *rdev;
- struct ttm_bo_kmap_obj dma_buf_vmap;
pid_t pid;
#ifdef CONFIG_MMU_NOTIFIER
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 0ccd7213e41f..d2876ce3bc9e 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -31,6 +31,7 @@
#include <drm/drm_debugfs.h>
#include <drm/drm_device.h>
#include <drm/drm_file.h>
+#include <drm/drm_gem_ttm_helper.h>
#include <drm/radeon_drm.h>
#include "radeon.h"
@@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
int radeon_gem_prime_pin(struct drm_gem_object *obj);
void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
static const struct drm_gem_object_funcs radeon_gem_object_funcs;
@@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
.pin = radeon_gem_prime_pin,
.unpin = radeon_gem_prime_unpin,
.get_sg_table = radeon_gem_prime_get_sg_table,
- .vmap = radeon_gem_prime_vmap,
- .vunmap = radeon_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
/*
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b9de0e51c0be..088d39a51c0d 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
}
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct radeon_bo *bo = gem_to_radeon_bo(obj);
- int ret;
-
- ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
- &bo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return bo->dma_buf_vmap.virtual;
-}
-
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct radeon_bo *bo = gem_to_radeon_bo(obj);
-
- ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7d5ebb10323b..7971f57436dd 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
return ERR_PTR(ret);
}
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
- if (rk_obj->pages)
- return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
- pgprot_writecombine(PAGE_KERNEL));
+ if (rk_obj->pages) {
+ void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+ pgprot_writecombine(PAGE_KERNEL));
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+ return 0;
+ }
if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
- return NULL;
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
- return rk_obj->kvaddr;
+ return 0;
}
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
if (rk_obj->pages) {
- vunmap(vaddr);
+ vunmap(map->vaddr);
return;
}
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 7ffc541bea07..5a70a56cd406 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -31,8 +31,8 @@ struct drm_gem_object *
rockchip_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg);
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
/* drm driver mmap file operations */
int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..c02e35ed6e76 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -17,6 +17,7 @@
*/
#include <linux/console.h>
+#include <linux/dma-buf-map.h>
#include <linux/module.h>
#include <linux/pci.h>
@@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
struct drm_rect *rect)
{
struct cirrus_device *cirrus = to_cirrus(fb->dev);
+ struct dma_buf_map map;
void *vmap;
int idx, ret;
@@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
if (!drm_dev_enter(&cirrus->dev, &idx))
goto out;
- ret = -ENOMEM;
- vmap = drm_gem_shmem_vmap(fb->obj[0]);
- if (!vmap)
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret)
goto out_dev_exit;
+ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
if (cirrus->cpp == fb->format->cpp[0])
drm_fb_memcpy_dstclip(cirrus->vram,
@@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
else
WARN_ON_ONCE("cpp mismatch");
- drm_gem_shmem_vunmap(fb->obj[0], vmap);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
ret = 0;
out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..12a890cea6e9 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
{
int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
struct drm_framebuffer *fb;
+ struct dma_buf_map map;
void *vaddr;
u8 *src;
@@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
y1 = gm12u320->fb_update.rect.y1;
y2 = gm12u320->fb_update.rect.y2;
- vaddr = drm_gem_shmem_vmap(fb->obj[0]);
- if (IS_ERR(vaddr)) {
- GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret) {
+ GM12U320_ERR("failed to vmap fb: %d\n", ret);
goto put_fb;
}
+ vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
if (fb->obj[0]->import_attach) {
ret = dma_buf_begin_cpu_access(
@@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
}
vunmap:
- drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
put_fb:
drm_framebuffer_put(fb);
gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..42eeba1dfdbf 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
struct urb *urb;
struct drm_rect clip;
int log_bpp;
+ struct dma_buf_map map;
void *vaddr;
ret = udl_log_cpp(fb->format->cpp[0]);
@@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
return ret;
}
- vaddr = drm_gem_shmem_vmap(fb->obj[0]);
- if (IS_ERR(vaddr)) {
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret) {
DRM_ERROR("failed to vmap fb\n");
goto out_dma_buf_end_cpu_access;
}
+ vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
urb = udl_get_urb(dev);
if (!urb)
@@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
ret = 0;
out_drm_gem_shmem_vunmap:
- drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
out_dma_buf_end_cpu_access:
if (import_attach) {
tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 931c55126148..f268fb258c83 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -9,6 +9,8 @@
* Michael Thayer <michael.thayer@oracle.com,
* Hans de Goede <hdegoede@redhat.com>
*/
+
+#include <linux/dma-buf-map.h>
#include <linux/export.h>
#include <drm/drm_atomic.h>
@@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
u32 height = plane->state->crtc_h;
size_t data_size, mask_size;
u32 flags;
+ struct dma_buf_map map;
+ int ret;
u8 *src;
/*
@@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
vbox_crtc->cursor_enabled = true;
- src = drm_gem_vram_vmap(gbo);
- if (IS_ERR(src)) {
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret) {
/*
* BUG: we should have pinned the BO in prepare_fb().
*/
@@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
DRM_WARN("Could not map cursor bo, skipping update\n");
return;
}
+ src = map.vaddr; /* TODO: Use mapping abstraction properly */
/*
* The mask must be calculated based on the alpha
@@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
data_size = width * height * 4 + mask_size;
copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
- drm_gem_vram_vunmap(gbo, src);
+ drm_gem_vram_vunmap(gbo, &map);
flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
VBOX_MOUSE_POINTER_ALPHA;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 557f0d1e6437..f290a9a942dc 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
return drm_gem_cma_prime_mmap(obj, vma);
}
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct vc4_bo *bo = to_vc4_bo(obj);
if (bo->validated_shader) {
DRM_DEBUG("mmaping of shader BOs not allowed.\n");
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
}
- return drm_gem_cma_prime_vmap(obj);
+ return drm_gem_cma_prime_vmap(obj, map);
}
struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index cc79b1aaa878..904f2c36c963 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int vc4_bo_cache_init(struct drm_device *dev);
void vc4_bo_cache_destroy(struct drm_device *dev);
int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..b2aa26e1e4a2 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
return &obj->base;
}
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
long n_pages = obj->size >> PAGE_SHIFT;
struct page **pages;
+ void *vaddr;
pages = vgem_pin_pages(bo);
if (IS_ERR(pages))
- return NULL;
+ return PTR_ERR(pages);
+
+ vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
- return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ return 0;
}
-static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
- vunmap(vaddr);
+ vunmap(map->vaddr);
vgem_unpin_pages(bo);
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..74db5a840bed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return gem_mmap_obj(xen_obj, vma);
}
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
{
struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+ void *vaddr;
if (!xen_obj->pages)
- return NULL;
+ return -ENOMEM;
/* Please see comment in gem_mmap_obj on mapping and attributes. */
- return vmap(xen_obj->pages, xen_obj->num_pages,
- VM_MAP, PAGE_KERNEL);
+ vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
+ VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+
+ return 0;
}
void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
- void *vaddr)
+ struct dma_buf_map *map)
{
- vunmap(vaddr);
+ vunmap(map->vaddr);
}
int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..a4e67d0a149c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -12,6 +12,7 @@
#define __XEN_DRM_FRONT_GEM_H
struct dma_buf_attachment;
+struct dma_buf_map;
struct drm_device;
struct drm_gem_object;
struct file;
@@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+ struct dma_buf_map *map);
void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
- void *vaddr);
+ struct dma_buf_map *map);
int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
struct vm_area_struct *vma);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c38dd35da00b..5e6daa1c982f 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
#include <drm/drm_vma_manager.h>
+struct dma_buf_map;
struct drm_gem_object;
/**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
*
* This callback is optional.
*/
- void *(*vmap)(struct drm_gem_object *obj);
+ int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
/**
* @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
*
* This callback is optional.
*/
- void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+ void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
/**
* @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index a064b0d1c480..caf98b9cf4b4 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
struct sg_table *sgt);
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
struct drm_gem_object *
drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
int drm_gem_shmem_pin(struct drm_gem_object *obj);
void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..c0d28ba0f5c9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -10,6 +10,7 @@
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
+#include <linux/dma-buf-map.h>
#include <linux/kernel.h> /* for container_of() */
struct drm_mode_create_dumb;
@@ -29,9 +30,8 @@ struct vm_area_struct;
/**
* struct drm_gem_vram_object - GEM object backed by VRAM
- * @gem: GEM object
* @bo: TTM buffer object
- * @kmap: Mapping information for @bo
+ * @map: Mapping information for @bo
* @placement: TTM placement information. Supported placements are \
%TTM_PL_VRAM and %TTM_PL_SYSTEM
* @placements: TTM placement information.
@@ -50,15 +50,15 @@ struct vm_area_struct;
*/
struct drm_gem_vram_object {
struct ttm_buffer_object bo;
- struct ttm_bo_kmap_obj kmap;
+ struct dma_buf_map map;
/**
- * @kmap_use_count:
+ * @vmap_use_count:
*
* Reference count on the virtual address.
* The address are un-mapped when the count reaches zero.
*/
- unsigned int kmap_use_count;
+ unsigned int vmap_use_count;
/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
struct ttm_placement placement;
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
int drm_gem_vram_fill_create_dumb(struct drm_file *file,
struct drm_device *dev,
--
2.28.0
* [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
@ 2020-10-15 12:38 ` Thomas Zimmermann
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
linux-media
This patch replaces the use of raw pointers in GEM objects' vmap/vunmap
functions with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned type.
TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.
v4:
* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
* fix a trailing { in drm_gem_vmap()
* remove several empty functions instead of converting them (Daniel)
* comment uses of raw pointers with a TODO (Daniel)
* TODO list: convert more helpers to use struct dma_buf_map
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
Documentation/gpu/todo.rst | 18 ++++
drivers/gpu/drm/Kconfig | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
drivers/gpu/drm/ast/ast_drv.h | 7 +-
drivers/gpu/drm/drm_gem.c | 23 +++--
drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
drivers/gpu/drm/lima/lima_gem.c | 6 +-
drivers/gpu/drm/lima/lima_sched.c | 11 +-
drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
drivers/gpu/drm/nouveau/Kconfig | 1 +
drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
drivers/gpu/drm/qxl/qxl_display.c | 11 +-
drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
drivers/gpu/drm/qxl/qxl_object.h | 2 +-
drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
drivers/gpu/drm/radeon/radeon.h | 1 -
drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
drivers/gpu/drm/tiny/cirrus.c | 10 +-
drivers/gpu/drm/tiny/gm12u320.c | 10 +-
drivers/gpu/drm/udl/udl_modeset.c | 8 +-
drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
include/drm/drm_gem.h | 5 +-
include/drm/drm_gem_cma_helper.h | 2 +-
include/drm/drm_gem_shmem_helper.h | 4 +-
include/drm/drm_gem_vram_helper.h | 14 +--
47 files changed, 321 insertions(+), 295 deletions(-)
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 700637e25ecd..7e6fc3c04add 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
Level: Intermediate
+Use struct dma_buf_map throughout codebase
+------------------------------------------
+
+Pointers to shared device memory are stored in struct dma_buf_map. Each
+instance knows whether it refers to system or I/O memory. Most of the DRM-wide
+interface have been converted to use struct dma_buf_map, but implementations
+often still use raw pointers.
+
+The task is to use struct dma_buf_map where it makes sense.
+
+* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
+* TTM might benefit from using struct dma_buf_map internally.
+* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
+
+Level: Intermediate
+
Core refactorings
=================
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 147d61b9674e..319839b87d37 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -239,6 +239,7 @@ config DRM_RADEON
select FW_LOADER
select DRM_KMS_HELPER
select DRM_TTM
+ select DRM_TTM_HELPER
select POWER_SUPPLY
select HWMON
select BACKLIGHT_CLASS_DEVICE
@@ -259,6 +260,7 @@ config DRM_AMDGPU
select DRM_KMS_HELPER
select DRM_SCHED
select DRM_TTM
+ select DRM_TTM_HELPER
select POWER_SUPPLY
select HWMON
select BACKLIGHT_CLASS_DEVICE
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 5b465ab774d1..e5919efca870 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -41,42 +41,6 @@
#include <linux/dma-fence-array.h>
#include <linux/pci-p2pdma.h>
-/**
- * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
- * @obj: GEM BO
- *
- * Sets up an in-kernel virtual mapping of the BO's memory.
- *
- * Returns:
- * The virtual address of the mapping or an error pointer.
- */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
- int ret;
-
- ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
- &bo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return bo->dma_buf_vmap.virtual;
-}
-
-/**
- * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
- * @obj: GEM BO
- * @vaddr: Virtual address (unused)
- *
- * Tears down the in-kernel virtual mapping of the BO's memory.
- */
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-
- ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
/**
* amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
* @obj: GEM BO
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 2c5c84a06bb9..39b5b9616fd8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf);
bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
struct amdgpu_bo *bo);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index be08a63ef58c..576659827e74 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -33,6 +33,7 @@
#include <drm/amdgpu_drm.h>
#include <drm/drm_debugfs.h>
+#include <drm/drm_gem_ttm_helper.h>
#include "amdgpu.h"
#include "amdgpu_display.h"
@@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
.open = amdgpu_gem_object_open,
.close = amdgpu_gem_object_close,
.export = amdgpu_gem_prime_export,
- .vmap = amdgpu_gem_prime_vmap,
- .vunmap = amdgpu_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
/*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 132e5f955180..01296ef0d673 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -100,7 +100,6 @@ struct amdgpu_bo {
struct amdgpu_bo *parent;
struct amdgpu_bo *shadow;
- struct ttm_bo_kmap_obj dma_buf_vmap;
struct amdgpu_mn *mn;
diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index e0f4613918ad..742d43a7edf4 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
gbo = ast->cursor.gbo[i];
- drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+ drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
struct drm_device *dev = &ast->base;
size_t size, i;
struct drm_gem_vram_object *gbo;
- void __iomem *vaddr;
+ struct dma_buf_map map;
int ret;
size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
drm_gem_vram_put(gbo);
goto err_drm_gem_vram_put;
}
- vaddr = drm_gem_vram_vmap(gbo);
- if (IS_ERR(vaddr)) {
- ret = PTR_ERR(vaddr);
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret) {
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
goto err_drm_gem_vram_put;
}
ast->cursor.gbo[i] = gbo;
- ast->cursor.vaddr[i] = vaddr;
+ ast->cursor.map[i] = map;
}
return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
while (i) {
--i;
gbo = ast->cursor.gbo[i];
- drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+ drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
}
@@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
{
struct drm_device *dev = &ast->base;
struct drm_gem_vram_object *gbo;
+ struct dma_buf_map map;
int ret;
void *src;
void __iomem *dst;
@@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
ret = drm_gem_vram_pin(gbo, 0);
if (ret)
return ret;
- src = drm_gem_vram_vmap(gbo);
- if (IS_ERR(src)) {
- ret = PTR_ERR(src);
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret)
goto err_drm_gem_vram_unpin;
- }
+ src = map.vaddr; /* TODO: Use mapping abstraction properly */
- dst = ast->cursor.vaddr[ast->cursor.next_index];
+ dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
/* do data transfer to cursor BO */
update_cursor_image(dst, src, fb->width, fb->height);
- drm_gem_vram_vunmap(gbo, src);
+ drm_gem_vram_vunmap(gbo, &map);
drm_gem_vram_unpin(gbo);
return 0;
@@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
u8 __iomem *sig;
u8 jreg;
- dst = ast->cursor.vaddr[ast->cursor.next_index];
+ dst = ast->cursor.map[ast->cursor.next_index].vaddr;
sig = dst + AST_HWC_SIZE;
writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 467049ca8430..f963141dd851 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,10 +28,11 @@
#ifndef __AST_DRV_H__
#define __AST_DRV_H__
-#include <linux/types.h>
-#include <linux/io.h>
+#include <linux/dma-buf-map.h>
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>
+#include <linux/io.h>
+#include <linux/types.h>
#include <drm/drm_connector.h>
#include <drm/drm_crtc.h>
@@ -131,7 +132,7 @@ struct ast_private {
struct {
struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
- void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+ struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
unsigned int next_index;
} cursor;
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1da67d34e55d..a89ad4570e3c 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -36,6 +36,7 @@
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>
#include <linux/dma-buf.h>
+#include <linux/dma-buf-map.h>
#include <linux/mem_encrypt.h>
#include <linux/pagevec.h>
@@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
void *drm_gem_vmap(struct drm_gem_object *obj)
{
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
- if (obj->funcs->vmap)
- vaddr = obj->funcs->vmap(obj);
- else
- vaddr = ERR_PTR(-EOPNOTSUPP);
+ if (!obj->funcs->vmap)
+ return ERR_PTR(-EOPNOTSUPP);
- if (!vaddr)
- vaddr = ERR_PTR(-ENOMEM);
+ ret = obj->funcs->vmap(obj, &map);
+ if (ret)
+ return ERR_PTR(ret);
+ else if (dma_buf_map_is_null(&map))
+ return ERR_PTR(-ENOMEM);
- return vaddr;
+ return map.vaddr;
}
void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
{
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
+
if (!vaddr)
return;
if (obj->funcs->vunmap)
- obj->funcs->vunmap(obj, vaddr);
+ obj->funcs->vunmap(obj, &map);
}
/**
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index d527485ea0b7..b57e3e9222f0 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
* address space
* @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ * store.
*
* This function maps a buffer exported via DRM PRIME into the kernel's
* virtual address space. Since the CMA buffers are already mapped into the
@@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* driver's &drm_gem_object_funcs.vmap callback.
*
* Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
*/
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
- return cma_obj->vaddr;
+ dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+ return 0;
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index fb11df7aced5..5553f58f68f3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
}
EXPORT_SYMBOL(drm_gem_shmem_unpin);
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
{
struct drm_gem_object *obj = &shmem->base;
- struct dma_buf_map map;
int ret = 0;
- if (shmem->vmap_use_count++ > 0)
- return shmem->vaddr;
+ if (shmem->vmap_use_count++ > 0) {
+ dma_buf_map_set_vaddr(map, shmem->vaddr);
+ return 0;
+ }
if (obj->import_attach) {
- ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
- if (!ret)
- shmem->vaddr = map.vaddr;
+ ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+ if (!ret) {
+ if (WARN_ON(map->is_iomem)) {
+ ret = -EIO;
+ goto err_put_pages;
+ }
+ shmem->vaddr = map->vaddr;
+ }
} else {
pgprot_t prot = PAGE_KERNEL;
@@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
VM_MAP, prot);
if (!shmem->vaddr)
ret = -ENOMEM;
+ else
+ dma_buf_map_set_vaddr(map, shmem->vaddr);
}
if (ret) {
@@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
goto err_put_pages;
}
- return shmem->vaddr;
+ return 0;
err_put_pages:
if (!obj->import_attach)
@@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
err_zero_use:
shmem->vmap_use_count = 0;
- return ERR_PTR(ret);
+ return ret;
}
/*
* drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ * store.
*
* This function makes sure that a contiguous kernel virtual address mapping
* exists for the buffer backing the shmem GEM object.
@@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
* Returns:
* 0 on success or a negative error code on failure.
*/
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- void *vaddr;
int ret;
ret = mutex_lock_interruptible(&shmem->vmap_lock);
if (ret)
- return ERR_PTR(ret);
- vaddr = drm_gem_shmem_vmap_locked(shmem);
+ return ret;
+ ret = drm_gem_shmem_vmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
- return vaddr;
+ return ret;
}
EXPORT_SYMBOL(drm_gem_shmem_vmap);
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+ struct dma_buf_map *map)
{
struct drm_gem_object *obj = &shmem->base;
- struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
if (WARN_ON_ONCE(!shmem->vmap_use_count))
return;
@@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
return;
if (obj->import_attach)
- dma_buf_vunmap(obj->import_attach->dmabuf, &map);
+ dma_buf_vunmap(obj->import_attach->dmabuf, map);
else
vunmap(shmem->vaddr);
@@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
/*
* drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object
* @shmem: shmem GEM object
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
*
* This function cleans up a kernel virtual address mapping acquired by
* drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
* also be called by drivers directly, in which case it will hide the
* differences between dma-buf imported and natively allocated objects.
*/
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
mutex_lock(&shmem->vmap_lock);
- drm_gem_shmem_vunmap_locked(shmem);
+ drm_gem_shmem_vunmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
}
EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 2d5ed30518f1..4d8553b28558 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/dma-buf-map.h>
#include <linux/module.h>
#include <drm/drm_debugfs.h>
@@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
* up; only release the GEM object.
*/
- WARN_ON(gbo->kmap_use_count);
- WARN_ON(gbo->kmap.virtual);
+ WARN_ON(gbo->vmap_use_count);
+ WARN_ON(dma_buf_map_is_set(&gbo->map));
drm_gem_object_release(&gbo->bo.base);
}
@@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
}
EXPORT_SYMBOL(drm_gem_vram_unpin);
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
+ struct dma_buf_map *map)
{
int ret;
- struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
- bool is_iomem;
- if (gbo->kmap_use_count > 0)
+ if (gbo->vmap_use_count > 0)
goto out;
- ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
+ ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
if (ret)
- return ERR_PTR(ret);
+ return ret;
out:
- ++gbo->kmap_use_count;
- return ttm_kmap_obj_virtual(kmap, &is_iomem);
+ ++gbo->vmap_use_count;
+ *map = gbo->map;
+
+ return 0;
}
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+ struct dma_buf_map *map)
{
- if (WARN_ON_ONCE(!gbo->kmap_use_count))
+ struct drm_device *dev = gbo->bo.base.dev;
+
+ if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
return;
- if (--gbo->kmap_use_count > 0)
+
+ if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
+ return; /* BUG: map not mapped from this BO */
+
+ if (--gbo->vmap_use_count > 0)
return;
/*
@@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
/**
* drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
* space
- * @gbo: The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ * store.
*
* The vmap function pins a GEM VRAM object to its current location, either
* system or video memory, and maps its buffer into kernel address space.
@@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
* unmap and unpin the GEM VRAM object.
*
* Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
*/
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
{
int ret;
- void *base;
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
if (ret)
- return ERR_PTR(ret);
+ return ret;
ret = drm_gem_vram_pin_locked(gbo, 0);
if (ret)
goto err_ttm_bo_unreserve;
- base = drm_gem_vram_kmap_locked(gbo);
- if (IS_ERR(base)) {
- ret = PTR_ERR(base);
+ ret = drm_gem_vram_kmap_locked(gbo, map);
+ if (ret)
goto err_drm_gem_vram_unpin_locked;
- }
ttm_bo_unreserve(&gbo->bo);
- return base;
+ return 0;
err_drm_gem_vram_unpin_locked:
drm_gem_vram_unpin_locked(gbo);
err_ttm_bo_unreserve:
ttm_bo_unreserve(&gbo->bo);
- return ERR_PTR(ret);
+ return ret;
}
EXPORT_SYMBOL(drm_gem_vram_vmap);
/**
* drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo: The GEM VRAM object to unmap
- * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
*
* A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
* the documentation for drm_gem_vram_vmap() for more information.
*/
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
{
int ret;
@@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
return;
- drm_gem_vram_kunmap_locked(gbo);
+ drm_gem_vram_kunmap_locked(gbo, map);
drm_gem_vram_unpin_locked(gbo);
ttm_bo_unreserve(&gbo->bo);
@@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
bool evict,
struct ttm_resource *new_mem)
{
- struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+ struct ttm_buffer_object *bo = &gbo->bo;
+ struct drm_device *dev = bo->base.dev;
- if (WARN_ON_ONCE(gbo->kmap_use_count))
+ if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
return;
- if (!kmap->virtual)
- return;
- ttm_bo_kunmap(kmap);
- kmap->virtual = NULL;
+ ttm_bo_vunmap(bo, &gbo->map);
}
static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
@@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
}
/**
- * drm_gem_vram_object_vmap() - \
- Implements &struct drm_gem_object_funcs.vmap
- * @gem: The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ * Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ * store.
*
* Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
*/
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
- void *base;
- base = drm_gem_vram_vmap(gbo);
- if (IS_ERR(base))
- return NULL;
- return base;
+ return drm_gem_vram_vmap(gbo, map);
}
/**
- * drm_gem_vram_object_vunmap() - \
- Implements &struct drm_gem_object_funcs.vunmap
- * @gem: The GEM object to unmap
- * @vaddr: The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ * Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
*/
-static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
- void *vaddr)
+static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
- drm_gem_vram_vunmap(gbo, vaddr);
+ drm_gem_vram_vunmap(gbo, map);
}
/*
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 9682c26d89bb..f5be627e1de0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a6d9932a32ae..bc2543dd987d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
}
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
- return etnaviv_gem_vmap(obj);
+ void *vaddr = etnaviv_gem_vmap(obj);
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+
+ return 0;
}
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..832e5280a6ed 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
return drm_gem_shmem_pin(obj);
}
-static void *lima_gem_vmap(struct drm_gem_object *obj)
+static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct lima_bo *bo = to_lima_bo(obj);
if (bo->heap_size)
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
- return drm_gem_shmem_vmap(obj);
+ return drm_gem_shmem_vmap(obj, map);
}
static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e9a40d..a070a85f8f36 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0 OR MIT
/* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
+#include <linux/dma-buf-map.h>
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
@@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
struct lima_dump_chunk_buffer *buffer_chunk;
u32 size, task_size, mem_size;
int i;
+ struct dma_buf_map map;
+ int ret;
mutex_lock(&dev->error_task_list_lock);
@@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
} else {
buffer_chunk->size = lima_bo_size(bo);
- data = drm_gem_shmem_vmap(&bo->base.base);
- if (IS_ERR_OR_NULL(data)) {
+ ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+ if (ret) {
kvfree(et);
goto out;
}
- memcpy(buffer_chunk + 1, data, buffer_chunk->size);
+ memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
- drm_gem_shmem_vunmap(&bo->base.base, data);
+ drm_gem_shmem_vunmap(&bo->base.base, &map);
}
buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..8ef76769b97f 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -9,6 +9,7 @@
*/
#include <linux/delay.h>
+#include <linux/dma-buf-map.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_atomic_state_helper.h>
@@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
struct drm_rect *clip)
{
struct drm_device *dev = &mdev->base;
+ struct dma_buf_map map;
void *vmap;
+ int ret;
- vmap = drm_gem_shmem_vmap(fb->obj[0]);
- if (drm_WARN_ON(dev, !vmap))
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (drm_WARN_ON(dev, ret))
return; /* BUG: SHMEM BO should always be vmapped */
+ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
- drm_gem_shmem_vunmap(fb->obj[0], vmap);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
/* Always scanout image at VRAM offset 0 */
mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 5dec1e5694b7..9436310d0854 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -6,6 +6,7 @@ config DRM_NOUVEAU
select FW_LOADER
select DRM_KMS_HELPER
select DRM_TTM
+ select DRM_TTM_HELPER
select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
select X86_PLATFORM_DEVICES if ACPI && X86
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 641ef6298a0e..6045b85a762a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -39,8 +39,6 @@ struct nouveau_bo {
unsigned mode;
struct nouveau_drm_tile *tile;
-
- struct ttm_bo_kmap_obj dma_buf_vmap;
};
static inline struct nouveau_bo *
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 9a421c3949de..f942b526b0a5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -24,6 +24,8 @@
*
*/
+#include <drm/drm_gem_ttm_helper.h>
+
#include "nouveau_drv.h"
#include "nouveau_dma.h"
#include "nouveau_fence.h"
@@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
.pin = nouveau_gem_prime_pin,
.unpin = nouveau_gem_prime_unpin,
.get_sg_table = nouveau_gem_prime_get_sg_table,
- .vmap = nouveau_gem_prime_vmap,
- .vunmap = nouveau_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
int
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index b35c180322e2..3b919c7c931c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
-extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
#endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index a8264aebf3d4..2f16b5249283 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
}
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
- int ret;
-
- ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
- &nvbo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return nvbo->dma_buf_vmap.virtual;
-}
-
-void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-
- ttm_bo_kunmap(&nvbo->dma_buf_vmap);
-}
-
struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg)
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index fdbc8d949135..5ab03d605f57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -5,6 +5,7 @@
#include <drm/drm_gem_shmem_helper.h>
#include <drm/panfrost_drm.h>
#include <linux/completion.h>
+#include <linux/dma-buf-map.h>
#include <linux/iopoll.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
@@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
{
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+ struct dma_buf_map map;
struct drm_gem_shmem_object *bo;
u32 cfg, as;
int ret;
@@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
goto err_close_bo;
}
- perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
- if (IS_ERR(perfcnt->buf)) {
- ret = PTR_ERR(perfcnt->buf);
+ ret = drm_gem_shmem_vmap(&bo->base, &map);
+ if (ret)
goto err_put_mapping;
- }
+ perfcnt->buf = map.vaddr;
/*
* Invalidate the cache and clear the counters to start from a fresh
@@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
return 0;
err_vunmap:
- drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
+ drm_gem_shmem_vunmap(&bo->base, &map);
err_put_mapping:
panfrost_gem_mapping_put(perfcnt->mapping);
err_close_bo:
@@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
{
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
if (user != perfcnt->user)
return -EINVAL;
@@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
perfcnt->user = NULL;
- drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
+ drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
perfcnt->buf = NULL;
panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 45fd76e04bdc..e165fa9b2089 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -25,6 +25,7 @@
#include <linux/crc32.h>
#include <linux/delay.h>
+#include <linux/dma-buf-map.h>
#include <drm/drm_drv.h>
#include <drm/drm_atomic.h>
@@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
struct drm_gem_object *obj;
struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
int ret;
+ struct dma_buf_map user_map;
+ struct dma_buf_map cursor_map;
void *user_ptr;
int size = 64*64*4;
@@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
user_bo = gem_to_qxl_bo(obj);
/* pinning is done in the prepare/cleanup framevbuffer */
- ret = qxl_bo_kmap(user_bo, &user_ptr);
+ ret = qxl_bo_kmap(user_bo, &user_map);
if (ret)
goto out_free_release;
+ user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
ret = qxl_alloc_bo_reserved(qdev, release,
sizeof(struct qxl_cursor) + size,
@@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
if (ret)
goto out_unpin;
- ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+ ret = qxl_bo_kmap(cursor_bo, &cursor_map);
if (ret)
goto out_backoff;
@@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
{
int ret;
struct drm_gem_object *gobj;
+ struct dma_buf_map map;
int monitors_config_size = sizeof(struct qxl_monitors_config) +
qxl_num_crtc * sizeof(struct qxl_head);
@@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
if (ret)
return ret;
- qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+ qxl_bo_kmap(qdev->monitors_config_bo, &map);
qdev->monitors_config = qdev->monitors_config_bo->kptr;
qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..7b7acb910780 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -20,6 +20,8 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <linux/dma-buf-map.h>
+
#include <drm/drm_fourcc.h>
#include "qxl_drv.h"
@@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
unsigned int num_clips,
struct qxl_bo *clips_bo)
{
+ struct dma_buf_map map;
struct qxl_clip_rects *dev_clips;
int ret;
- ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
- if (ret) {
+ ret = qxl_bo_kmap(clips_bo, &map);
+ if (ret)
return NULL;
- }
+ dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
+
dev_clips->num_rects = num_clips;
dev_clips->chunk.next_chunk = 0;
dev_clips->chunk.prev_chunk = 0;
@@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
int stride = fb->pitches[0];
/* depth is not actually interesting, we don't mask with it */
int depth = fb->format->cpp[0] * 8;
+ struct dma_buf_map surface_map;
uint8_t *surface_base;
struct qxl_release *release;
struct qxl_bo *clips_bo;
@@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
if (ret)
goto out_release_backoff;
- ret = qxl_bo_kmap(bo, (void **)&surface_base);
+ ret = qxl_bo_kmap(bo, &surface_map);
if (ret)
goto out_release_backoff;
+ surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
ret = qxl_image_init(qdev, release, dimage, surface_base,
left - dumb_shadow_offset,
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..eb437fea5d9e 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -30,6 +30,7 @@
* Definitions taken from spice-protocol, plus kernel driver specific bits.
*/
+#include <linux/dma-buf-map.h>
#include <linux/dma-fence.h>
#include <linux/firmware.h>
#include <linux/platform_device.h>
@@ -50,6 +51,8 @@
#include "qxl_dev.h"
+struct dma_buf_map;
+
#define DRIVER_AUTHOR "Dave Airlie"
#define DRIVER_NAME "qxl"
@@ -79,7 +82,7 @@ struct qxl_bo {
/* Protected by tbo.reserved */
struct ttm_place placements[3];
struct ttm_placement placement;
- struct ttm_bo_kmap_obj kmap;
+ struct dma_buf_map map;
void *kptr;
unsigned int map_count;
int type;
@@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
void qxl_gem_object_close(struct drm_gem_object *obj,
struct drm_file *file_priv);
void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
/* qxl_dumb.c */
int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
struct drm_gem_object *qxl_gem_prime_import_sg_table(
struct drm_device *dev, struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+ struct dma_buf_map *map);
int qxl_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 940e99354f49..755df4d8f95f 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -23,10 +23,12 @@
* Alon Levy
*/
+#include <linux/dma-buf-map.h>
+#include <linux/io-mapping.h>
+
#include "qxl_drv.h"
#include "qxl_object.h"
-#include <linux/io-mapping.h>
static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
{
struct qxl_bo *bo;
@@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
return 0;
}
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
{
- bool is_iomem;
int r;
if (bo->kptr) {
- if (ptr)
- *ptr = bo->kptr;
bo->map_count++;
- return 0;
+ goto out;
}
- r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
+ r = ttm_bo_vmap(&bo->tbo, &bo->map);
if (r)
return r;
- bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
- if (ptr)
- *ptr = bo->kptr;
bo->map_count = 1;
+
+ /* TODO: Remove kptr in favor of map everywhere. */
+ if (bo->map.is_iomem)
+ bo->kptr = (void *)bo->map.vaddr_iomem;
+ else
+ bo->kptr = bo->map.vaddr;
+
+out:
+ *map = bo->map;
return 0;
}
@@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
void *rptr;
int ret;
struct io_mapping *map;
+ struct dma_buf_map bo_map;
if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
map = qdev->vram_mapping;
@@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
return rptr;
}
- ret = qxl_bo_kmap(bo, &rptr);
+ ret = qxl_bo_kmap(bo, &bo_map);
if (ret)
return NULL;
+ rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
rptr += page_offset * PAGE_SIZE;
return rptr;
@@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
if (bo->map_count > 0)
return;
bo->kptr = NULL;
- ttm_bo_kunmap(&bo->kmap);
+ ttm_bo_vunmap(&bo->tbo, &bo->map);
}
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 09a5c818324d..ebf24c9d2bf2 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
bool kernel, bool pinned, u32 domain,
struct qxl_surface *surf,
struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
extern void qxl_bo_kunmap(struct qxl_bo *bo);
void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
return ERR_PTR(-ENOSYS);
}
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
- void *ptr;
int ret;
- ret = qxl_bo_kmap(bo, &ptr);
+ ret = qxl_bo_kmap(bo, map);
if (ret < 0)
- return ERR_PTR(ret);
+ return ret;
- return ptr;
+ return 0;
}
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+ struct dma_buf_map *map)
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5d54bccebd4d..44cb5ee6fc20 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -509,7 +509,6 @@ struct radeon_bo {
/* Constant after initialization */
struct radeon_device *rdev;
- struct ttm_bo_kmap_obj dma_buf_vmap;
pid_t pid;
#ifdef CONFIG_MMU_NOTIFIER
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 0ccd7213e41f..d2876ce3bc9e 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -31,6 +31,7 @@
#include <drm/drm_debugfs.h>
#include <drm/drm_device.h>
#include <drm/drm_file.h>
+#include <drm/drm_gem_ttm_helper.h>
#include <drm/radeon_drm.h>
#include "radeon.h"
@@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
int radeon_gem_prime_pin(struct drm_gem_object *obj);
void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
static const struct drm_gem_object_funcs radeon_gem_object_funcs;
@@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
.pin = radeon_gem_prime_pin,
.unpin = radeon_gem_prime_unpin,
.get_sg_table = radeon_gem_prime_get_sg_table,
- .vmap = radeon_gem_prime_vmap,
- .vunmap = radeon_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
/*
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b9de0e51c0be..088d39a51c0d 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
}
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct radeon_bo *bo = gem_to_radeon_bo(obj);
- int ret;
-
- ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
- &bo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return bo->dma_buf_vmap.virtual;
-}
-
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct radeon_bo *bo = gem_to_radeon_bo(obj);
-
- ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7d5ebb10323b..7971f57436dd 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
return ERR_PTR(ret);
}
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
- if (rk_obj->pages)
- return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
- pgprot_writecombine(PAGE_KERNEL));
+ if (rk_obj->pages) {
+ void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+ pgprot_writecombine(PAGE_KERNEL));
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+ return 0;
+ }
if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
- return NULL;
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
- return rk_obj->kvaddr;
+ return 0;
}
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
if (rk_obj->pages) {
- vunmap(vaddr);
+ vunmap(map->vaddr);
return;
}
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 7ffc541bea07..5a70a56cd406 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -31,8 +31,8 @@ struct drm_gem_object *
rockchip_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg);
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
/* drm driver mmap file operations */
int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..c02e35ed6e76 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -17,6 +17,7 @@
*/
#include <linux/console.h>
+#include <linux/dma-buf-map.h>
#include <linux/module.h>
#include <linux/pci.h>
@@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
struct drm_rect *rect)
{
struct cirrus_device *cirrus = to_cirrus(fb->dev);
+ struct dma_buf_map map;
void *vmap;
int idx, ret;
@@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
if (!drm_dev_enter(&cirrus->dev, &idx))
goto out;
- ret = -ENOMEM;
- vmap = drm_gem_shmem_vmap(fb->obj[0]);
- if (!vmap)
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret)
goto out_dev_exit;
+ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
if (cirrus->cpp == fb->format->cpp[0])
drm_fb_memcpy_dstclip(cirrus->vram,
@@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
else
WARN_ON_ONCE("cpp mismatch");
- drm_gem_shmem_vunmap(fb->obj[0], vmap);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
ret = 0;
out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..12a890cea6e9 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
{
int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
struct drm_framebuffer *fb;
+ struct dma_buf_map map;
void *vaddr;
u8 *src;
@@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
y1 = gm12u320->fb_update.rect.y1;
y2 = gm12u320->fb_update.rect.y2;
- vaddr = drm_gem_shmem_vmap(fb->obj[0]);
- if (IS_ERR(vaddr)) {
- GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret) {
+ GM12U320_ERR("failed to vmap fb: %d\n", ret);
goto put_fb;
}
+ vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
if (fb->obj[0]->import_attach) {
ret = dma_buf_begin_cpu_access(
@@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
}
vunmap:
- drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
put_fb:
drm_framebuffer_put(fb);
gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..42eeba1dfdbf 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
struct urb *urb;
struct drm_rect clip;
int log_bpp;
+ struct dma_buf_map map;
void *vaddr;
ret = udl_log_cpp(fb->format->cpp[0]);
@@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
return ret;
}
- vaddr = drm_gem_shmem_vmap(fb->obj[0]);
- if (IS_ERR(vaddr)) {
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret) {
DRM_ERROR("failed to vmap fb\n");
goto out_dma_buf_end_cpu_access;
}
+ vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
urb = udl_get_urb(dev);
if (!urb)
@@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
ret = 0;
out_drm_gem_shmem_vunmap:
- drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
out_dma_buf_end_cpu_access:
if (import_attach) {
tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 931c55126148..f268fb258c83 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -9,6 +9,8 @@
* Michael Thayer <michael.thayer@oracle.com,
* Hans de Goede <hdegoede@redhat.com>
*/
+
+#include <linux/dma-buf-map.h>
#include <linux/export.h>
#include <drm/drm_atomic.h>
@@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
u32 height = plane->state->crtc_h;
size_t data_size, mask_size;
u32 flags;
+ struct dma_buf_map map;
+ int ret;
u8 *src;
/*
@@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
vbox_crtc->cursor_enabled = true;
- src = drm_gem_vram_vmap(gbo);
- if (IS_ERR(src)) {
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret) {
/*
* BUG: we should have pinned the BO in prepare_fb().
*/
@@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
DRM_WARN("Could not map cursor bo, skipping update\n");
return;
}
+ src = map.vaddr; /* TODO: Use mapping abstraction properly */
/*
* The mask must be calculated based on the alpha
@@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
data_size = width * height * 4 + mask_size;
copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
- drm_gem_vram_vunmap(gbo, src);
+ drm_gem_vram_vunmap(gbo, &map);
flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
VBOX_MOUSE_POINTER_ALPHA;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 557f0d1e6437..f290a9a942dc 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
return drm_gem_cma_prime_mmap(obj, vma);
}
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct vc4_bo *bo = to_vc4_bo(obj);
if (bo->validated_shader) {
DRM_DEBUG("mmaping of shader BOs not allowed.\n");
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
}
- return drm_gem_cma_prime_vmap(obj);
+ return drm_gem_cma_prime_vmap(obj, map);
}
struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index cc79b1aaa878..904f2c36c963 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int vc4_bo_cache_init(struct drm_device *dev);
void vc4_bo_cache_destroy(struct drm_device *dev);
int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..b2aa26e1e4a2 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
return &obj->base;
}
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
long n_pages = obj->size >> PAGE_SHIFT;
struct page **pages;
+ void *vaddr;
pages = vgem_pin_pages(bo);
if (IS_ERR(pages))
- return NULL;
+ return PTR_ERR(pages);
+
+ vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
- return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ return 0;
}
-static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
- vunmap(vaddr);
+ vunmap(map->vaddr);
vgem_unpin_pages(bo);
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..74db5a840bed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return gem_mmap_obj(xen_obj, vma);
}
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
{
struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+ void *vaddr;
if (!xen_obj->pages)
- return NULL;
+ return -ENOMEM;
/* Please see comment in gem_mmap_obj on mapping and attributes. */
- return vmap(xen_obj->pages, xen_obj->num_pages,
- VM_MAP, PAGE_KERNEL);
+ vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
+ VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+
+ return 0;
}
void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
- void *vaddr)
+ struct dma_buf_map *map)
{
- vunmap(vaddr);
+ vunmap(map->vaddr);
}
int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..a4e67d0a149c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -12,6 +12,7 @@
#define __XEN_DRM_FRONT_GEM_H
struct dma_buf_attachment;
+struct dma_buf_map;
struct drm_device;
struct drm_gem_object;
struct file;
@@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+ struct dma_buf_map *map);
void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
- void *vaddr);
+ struct dma_buf_map *map);
int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
struct vm_area_struct *vma);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c38dd35da00b..5e6daa1c982f 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
#include <drm/drm_vma_manager.h>
+struct dma_buf_map;
struct drm_gem_object;
/**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
*
* This callback is optional.
*/
- void *(*vmap)(struct drm_gem_object *obj);
+ int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
/**
* @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
*
* This callback is optional.
*/
- void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+ void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
/**
* @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index a064b0d1c480..caf98b9cf4b4 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
struct sg_table *sgt);
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
struct drm_gem_object *
drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
int drm_gem_shmem_pin(struct drm_gem_object *obj);
void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..c0d28ba0f5c9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -10,6 +10,7 @@
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
+#include <linux/dma-buf-map.h>
#include <linux/kernel.h> /* for container_of() */
struct drm_mode_create_dumb;
@@ -29,9 +30,8 @@ struct vm_area_struct;
/**
* struct drm_gem_vram_object - GEM object backed by VRAM
- * @gem: GEM object
* @bo: TTM buffer object
- * @kmap: Mapping information for @bo
+ * @map: Mapping information for @bo
* @placement: TTM placement information. Supported placements are \
%TTM_PL_VRAM and %TTM_PL_SYSTEM
* @placements: TTM placement information.
@@ -50,15 +50,15 @@ struct vm_area_struct;
*/
struct drm_gem_vram_object {
struct ttm_buffer_object bo;
- struct ttm_bo_kmap_obj kmap;
+ struct dma_buf_map map;
/**
- * @kmap_use_count:
+ * @vmap_use_count:
*
* Reference count on the virtual address.
* The address are un-mapped when the count reaches zero.
*/
- unsigned int kmap_use_count;
+ unsigned int vmap_use_count;
/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
struct ttm_placement placement;
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
int drm_gem_vram_fill_create_dumb(struct drm_file *file,
struct drm_device *dev,
--
2.28.0
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
* [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
@ 2020-10-15 12:38 ` Thomas Zimmermann
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
linux-media
This patch replaces the raw pointers in the GEM object vmap/vunmap
functions with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned type.
TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.
v4:
* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
* fix a trailing { in drm_gem_vmap()
* remove several empty functions instead of converting them (Daniel)
* comment uses of raw pointers with a TODO (Daniel)
* TODO list: convert more helpers to use struct dma_buf_map
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
Documentation/gpu/todo.rst | 18 ++++
drivers/gpu/drm/Kconfig | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
drivers/gpu/drm/ast/ast_drv.h | 7 +-
drivers/gpu/drm/drm_gem.c | 23 +++--
drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
drivers/gpu/drm/lima/lima_gem.c | 6 +-
drivers/gpu/drm/lima/lima_sched.c | 11 +-
drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
drivers/gpu/drm/nouveau/Kconfig | 1 +
drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
drivers/gpu/drm/qxl/qxl_display.c | 11 +-
drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
drivers/gpu/drm/qxl/qxl_object.h | 2 +-
drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
drivers/gpu/drm/radeon/radeon.h | 1 -
drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
drivers/gpu/drm/tiny/cirrus.c | 10 +-
drivers/gpu/drm/tiny/gm12u320.c | 10 +-
drivers/gpu/drm/udl/udl_modeset.c | 8 +-
drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
include/drm/drm_gem.h | 5 +-
include/drm/drm_gem_cma_helper.h | 2 +-
include/drm/drm_gem_shmem_helper.h | 4 +-
include/drm/drm_gem_vram_helper.h | 14 +--
47 files changed, 321 insertions(+), 295 deletions(-)
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 700637e25ecd..7e6fc3c04add 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
Level: Intermediate
+Use struct dma_buf_map throughout codebase
+------------------------------------------
+
+Pointers to shared device memory are stored in struct dma_buf_map. Each
+instance knows whether it refers to system or I/O memory. Most of the DRM-wide
+interfaces have been converted to use struct dma_buf_map, but implementations
+often still use raw pointers.
+
+The task is to use struct dma_buf_map where it makes sense.
+
+* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
+* TTM might benefit from using struct dma_buf_map internally.
+* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
+
+Level: Intermediate
+
Core refactorings
=================
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 147d61b9674e..319839b87d37 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -239,6 +239,7 @@ config DRM_RADEON
select FW_LOADER
select DRM_KMS_HELPER
select DRM_TTM
+ select DRM_TTM_HELPER
select POWER_SUPPLY
select HWMON
select BACKLIGHT_CLASS_DEVICE
@@ -259,6 +260,7 @@ config DRM_AMDGPU
select DRM_KMS_HELPER
select DRM_SCHED
select DRM_TTM
+ select DRM_TTM_HELPER
select POWER_SUPPLY
select HWMON
select BACKLIGHT_CLASS_DEVICE
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 5b465ab774d1..e5919efca870 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -41,42 +41,6 @@
#include <linux/dma-fence-array.h>
#include <linux/pci-p2pdma.h>
-/**
- * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
- * @obj: GEM BO
- *
- * Sets up an in-kernel virtual mapping of the BO's memory.
- *
- * Returns:
- * The virtual address of the mapping or an error pointer.
- */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
- int ret;
-
- ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
- &bo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return bo->dma_buf_vmap.virtual;
-}
-
-/**
- * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
- * @obj: GEM BO
- * @vaddr: Virtual address (unused)
- *
- * Tears down the in-kernel virtual mapping of the BO's memory.
- */
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-
- ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
/**
* amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
* @obj: GEM BO
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 2c5c84a06bb9..39b5b9616fd8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf);
bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
struct amdgpu_bo *bo);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index be08a63ef58c..576659827e74 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -33,6 +33,7 @@
#include <drm/amdgpu_drm.h>
#include <drm/drm_debugfs.h>
+#include <drm/drm_gem_ttm_helper.h>
#include "amdgpu.h"
#include "amdgpu_display.h"
@@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
.open = amdgpu_gem_object_open,
.close = amdgpu_gem_object_close,
.export = amdgpu_gem_prime_export,
- .vmap = amdgpu_gem_prime_vmap,
- .vunmap = amdgpu_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
/*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 132e5f955180..01296ef0d673 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -100,7 +100,6 @@ struct amdgpu_bo {
struct amdgpu_bo *parent;
struct amdgpu_bo *shadow;
- struct ttm_bo_kmap_obj dma_buf_vmap;
struct amdgpu_mn *mn;
diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index e0f4613918ad..742d43a7edf4 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
gbo = ast->cursor.gbo[i];
- drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+ drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
struct drm_device *dev = &ast->base;
size_t size, i;
struct drm_gem_vram_object *gbo;
- void __iomem *vaddr;
+ struct dma_buf_map map;
int ret;
size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
drm_gem_vram_put(gbo);
goto err_drm_gem_vram_put;
}
- vaddr = drm_gem_vram_vmap(gbo);
- if (IS_ERR(vaddr)) {
- ret = PTR_ERR(vaddr);
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret) {
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
goto err_drm_gem_vram_put;
}
ast->cursor.gbo[i] = gbo;
- ast->cursor.vaddr[i] = vaddr;
+ ast->cursor.map[i] = map;
}
return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
while (i) {
--i;
gbo = ast->cursor.gbo[i];
- drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+ drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
drm_gem_vram_unpin(gbo);
drm_gem_vram_put(gbo);
}
@@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
{
struct drm_device *dev = &ast->base;
struct drm_gem_vram_object *gbo;
+ struct dma_buf_map map;
int ret;
void *src;
void __iomem *dst;
@@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
ret = drm_gem_vram_pin(gbo, 0);
if (ret)
return ret;
- src = drm_gem_vram_vmap(gbo);
- if (IS_ERR(src)) {
- ret = PTR_ERR(src);
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret)
goto err_drm_gem_vram_unpin;
- }
+ src = map.vaddr; /* TODO: Use mapping abstraction properly */
- dst = ast->cursor.vaddr[ast->cursor.next_index];
+ dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
/* do data transfer to cursor BO */
update_cursor_image(dst, src, fb->width, fb->height);
- drm_gem_vram_vunmap(gbo, src);
+ drm_gem_vram_vunmap(gbo, &map);
drm_gem_vram_unpin(gbo);
return 0;
@@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
u8 __iomem *sig;
u8 jreg;
- dst = ast->cursor.vaddr[ast->cursor.next_index];
+ dst = ast->cursor.map[ast->cursor.next_index].vaddr;
sig = dst + AST_HWC_SIZE;
writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 467049ca8430..f963141dd851 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,10 +28,11 @@
#ifndef __AST_DRV_H__
#define __AST_DRV_H__
-#include <linux/types.h>
-#include <linux/io.h>
+#include <linux/dma-buf-map.h>
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>
+#include <linux/io.h>
+#include <linux/types.h>
#include <drm/drm_connector.h>
#include <drm/drm_crtc.h>
@@ -131,7 +132,7 @@ struct ast_private {
struct {
struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
- void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+ struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
unsigned int next_index;
} cursor;
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1da67d34e55d..a89ad4570e3c 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -36,6 +36,7 @@
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>
#include <linux/dma-buf.h>
+#include <linux/dma-buf-map.h>
#include <linux/mem_encrypt.h>
#include <linux/pagevec.h>
@@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
void *drm_gem_vmap(struct drm_gem_object *obj)
{
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
- if (obj->funcs->vmap)
- vaddr = obj->funcs->vmap(obj);
- else
- vaddr = ERR_PTR(-EOPNOTSUPP);
+ if (!obj->funcs->vmap)
+ return ERR_PTR(-EOPNOTSUPP);
- if (!vaddr)
- vaddr = ERR_PTR(-ENOMEM);
+ ret = obj->funcs->vmap(obj, &map);
+ if (ret)
+ return ERR_PTR(ret);
+ else if (dma_buf_map_is_null(&map))
+ return ERR_PTR(-ENOMEM);
- return vaddr;
+ return map.vaddr;
}
void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
{
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
+
if (!vaddr)
return;
if (obj->funcs->vunmap)
- obj->funcs->vunmap(obj, vaddr);
+ obj->funcs->vunmap(obj, &map);
}
/**
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index d527485ea0b7..b57e3e9222f0 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
* address space
* @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ * store.
*
* This function maps a buffer exported via DRM PRIME into the kernel's
* virtual address space. Since the CMA buffers are already mapped into the
@@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* driver's &drm_gem_object_funcs.vmap callback.
*
* Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
*/
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
- return cma_obj->vaddr;
+ dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+ return 0;
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index fb11df7aced5..5553f58f68f3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
}
EXPORT_SYMBOL(drm_gem_shmem_unpin);
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
{
struct drm_gem_object *obj = &shmem->base;
- struct dma_buf_map map;
int ret = 0;
- if (shmem->vmap_use_count++ > 0)
- return shmem->vaddr;
+ if (shmem->vmap_use_count++ > 0) {
+ dma_buf_map_set_vaddr(map, shmem->vaddr);
+ return 0;
+ }
if (obj->import_attach) {
- ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
- if (!ret)
- shmem->vaddr = map.vaddr;
+ ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+ if (!ret) {
+ if (WARN_ON(map->is_iomem)) {
+ ret = -EIO;
+ goto err_put_pages;
+ }
+ shmem->vaddr = map->vaddr;
+ }
} else {
pgprot_t prot = PAGE_KERNEL;
@@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
VM_MAP, prot);
if (!shmem->vaddr)
ret = -ENOMEM;
+ else
+ dma_buf_map_set_vaddr(map, shmem->vaddr);
}
if (ret) {
@@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
goto err_put_pages;
}
- return shmem->vaddr;
+ return 0;
err_put_pages:
if (!obj->import_attach)
@@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
err_zero_use:
shmem->vmap_use_count = 0;
- return ERR_PTR(ret);
+ return ret;
}
/*
* drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ * store.
*
* This function makes sure that a contiguous kernel virtual address mapping
* exists for the buffer backing the shmem GEM object.
@@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
* Returns:
* 0 on success or a negative error code on failure.
*/
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- void *vaddr;
int ret;
ret = mutex_lock_interruptible(&shmem->vmap_lock);
if (ret)
- return ERR_PTR(ret);
- vaddr = drm_gem_shmem_vmap_locked(shmem);
+ return ret;
+ ret = drm_gem_shmem_vmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
- return vaddr;
+ return ret;
}
EXPORT_SYMBOL(drm_gem_shmem_vmap);
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+ struct dma_buf_map *map)
{
struct drm_gem_object *obj = &shmem->base;
- struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
if (WARN_ON_ONCE(!shmem->vmap_use_count))
return;
@@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
return;
if (obj->import_attach)
- dma_buf_vunmap(obj->import_attach->dmabuf, &map);
+ dma_buf_vunmap(obj->import_attach->dmabuf, map);
else
vunmap(shmem->vaddr);
@@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
/*
 * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
* @shmem: shmem GEM object
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
*
* This function cleans up a kernel virtual address mapping acquired by
* drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
* also be called by drivers directly, in which case it will hide the
* differences between dma-buf imported and natively allocated objects.
*/
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
mutex_lock(&shmem->vmap_lock);
- drm_gem_shmem_vunmap_locked(shmem);
+ drm_gem_shmem_vunmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
}
EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 2d5ed30518f1..4d8553b28558 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/dma-buf-map.h>
#include <linux/module.h>
#include <drm/drm_debugfs.h>
@@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
* up; only release the GEM object.
*/
- WARN_ON(gbo->kmap_use_count);
- WARN_ON(gbo->kmap.virtual);
+ WARN_ON(gbo->vmap_use_count);
+ WARN_ON(dma_buf_map_is_set(&gbo->map));
drm_gem_object_release(&gbo->bo.base);
}
@@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
}
EXPORT_SYMBOL(drm_gem_vram_unpin);
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
+ struct dma_buf_map *map)
{
int ret;
- struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
- bool is_iomem;
- if (gbo->kmap_use_count > 0)
+ if (gbo->vmap_use_count > 0)
goto out;
- ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
+ ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
if (ret)
- return ERR_PTR(ret);
+ return ret;
out:
- ++gbo->kmap_use_count;
- return ttm_kmap_obj_virtual(kmap, &is_iomem);
+ ++gbo->vmap_use_count;
+ *map = gbo->map;
+
+ return 0;
}
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+ struct dma_buf_map *map)
{
- if (WARN_ON_ONCE(!gbo->kmap_use_count))
+ struct drm_device *dev = gbo->bo.base.dev;
+
+ if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
return;
- if (--gbo->kmap_use_count > 0)
+
+ if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
+ return; /* BUG: map not mapped from this BO */
+
+ if (--gbo->vmap_use_count > 0)
return;
/*
@@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
/**
* drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
* space
- * @gbo: The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ * store.
*
* The vmap function pins a GEM VRAM object to its current location, either
* system or video memory, and maps its buffer into kernel address space.
@@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
* unmap and unpin the GEM VRAM object.
*
* Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
*/
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
{
int ret;
- void *base;
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
if (ret)
- return ERR_PTR(ret);
+ return ret;
ret = drm_gem_vram_pin_locked(gbo, 0);
if (ret)
goto err_ttm_bo_unreserve;
- base = drm_gem_vram_kmap_locked(gbo);
- if (IS_ERR(base)) {
- ret = PTR_ERR(base);
+ ret = drm_gem_vram_kmap_locked(gbo, map);
+ if (ret)
goto err_drm_gem_vram_unpin_locked;
- }
ttm_bo_unreserve(&gbo->bo);
- return base;
+ return 0;
err_drm_gem_vram_unpin_locked:
drm_gem_vram_unpin_locked(gbo);
err_ttm_bo_unreserve:
ttm_bo_unreserve(&gbo->bo);
- return ERR_PTR(ret);
+ return ret;
}
EXPORT_SYMBOL(drm_gem_vram_vmap);
/**
* drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo: The GEM VRAM object to unmap
- * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
*
* A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
* the documentation for drm_gem_vram_vmap() for more information.
*/
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
{
int ret;
@@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
return;
- drm_gem_vram_kunmap_locked(gbo);
+ drm_gem_vram_kunmap_locked(gbo, map);
drm_gem_vram_unpin_locked(gbo);
ttm_bo_unreserve(&gbo->bo);
@@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
bool evict,
struct ttm_resource *new_mem)
{
- struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+ struct ttm_buffer_object *bo = &gbo->bo;
+ struct drm_device *dev = bo->base.dev;
- if (WARN_ON_ONCE(gbo->kmap_use_count))
+ if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
return;
- if (!kmap->virtual)
- return;
- ttm_bo_kunmap(kmap);
- kmap->virtual = NULL;
+ ttm_bo_vunmap(bo, &gbo->map);
}
static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
@@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
}
/**
- * drm_gem_vram_object_vmap() - \
- Implements &struct drm_gem_object_funcs.vmap
- * @gem: The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ * Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ * store.
*
* Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
*/
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
- void *base;
- base = drm_gem_vram_vmap(gbo);
- if (IS_ERR(base))
- return NULL;
- return base;
+ return drm_gem_vram_vmap(gbo, map);
}
/**
- * drm_gem_vram_object_vunmap() - \
- Implements &struct drm_gem_object_funcs.vunmap
- * @gem: The GEM object to unmap
- * @vaddr: The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ * Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
*/
-static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
- void *vaddr)
+static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
{
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
- drm_gem_vram_vunmap(gbo, vaddr);
+ drm_gem_vram_vunmap(gbo, map);
}
/*
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 9682c26d89bb..f5be627e1de0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a6d9932a32ae..bc2543dd987d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
}
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
- return etnaviv_gem_vmap(obj);
+ void *vaddr = etnaviv_gem_vmap(obj);
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+
+ return 0;
}
int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..832e5280a6ed 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
return drm_gem_shmem_pin(obj);
}
-static void *lima_gem_vmap(struct drm_gem_object *obj)
+static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct lima_bo *bo = to_lima_bo(obj);
if (bo->heap_size)
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
- return drm_gem_shmem_vmap(obj);
+ return drm_gem_shmem_vmap(obj, map);
}
static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e9a40d..a070a85f8f36 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0 OR MIT
/* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
+#include <linux/dma-buf-map.h>
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
@@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
struct lima_dump_chunk_buffer *buffer_chunk;
u32 size, task_size, mem_size;
int i;
+ struct dma_buf_map map;
+ int ret;
mutex_lock(&dev->error_task_list_lock);
@@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
} else {
buffer_chunk->size = lima_bo_size(bo);
- data = drm_gem_shmem_vmap(&bo->base.base);
- if (IS_ERR_OR_NULL(data)) {
+ ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+ if (ret) {
kvfree(et);
goto out;
}
- memcpy(buffer_chunk + 1, data, buffer_chunk->size);
+ memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
- drm_gem_shmem_vunmap(&bo->base.base, data);
+ drm_gem_shmem_vunmap(&bo->base.base, &map);
}
buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..8ef76769b97f 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -9,6 +9,7 @@
*/
#include <linux/delay.h>
+#include <linux/dma-buf-map.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_atomic_state_helper.h>
@@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
struct drm_rect *clip)
{
struct drm_device *dev = &mdev->base;
+ struct dma_buf_map map;
void *vmap;
+ int ret;
- vmap = drm_gem_shmem_vmap(fb->obj[0]);
- if (drm_WARN_ON(dev, !vmap))
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (drm_WARN_ON(dev, ret))
return; /* BUG: SHMEM BO should always be vmapped */
+ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
- drm_gem_shmem_vunmap(fb->obj[0], vmap);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
/* Always scanout image at VRAM offset 0 */
mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 5dec1e5694b7..9436310d0854 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -6,6 +6,7 @@ config DRM_NOUVEAU
select FW_LOADER
select DRM_KMS_HELPER
select DRM_TTM
+ select DRM_TTM_HELPER
select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
select X86_PLATFORM_DEVICES if ACPI && X86
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 641ef6298a0e..6045b85a762a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -39,8 +39,6 @@ struct nouveau_bo {
unsigned mode;
struct nouveau_drm_tile *tile;
-
- struct ttm_bo_kmap_obj dma_buf_vmap;
};
static inline struct nouveau_bo *
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 9a421c3949de..f942b526b0a5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -24,6 +24,8 @@
*
*/
+#include <drm/drm_gem_ttm_helper.h>
+
#include "nouveau_drv.h"
#include "nouveau_dma.h"
#include "nouveau_fence.h"
@@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
.pin = nouveau_gem_prime_pin,
.unpin = nouveau_gem_prime_unpin,
.get_sg_table = nouveau_gem_prime_get_sg_table,
- .vmap = nouveau_gem_prime_vmap,
- .vunmap = nouveau_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
int
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index b35c180322e2..3b919c7c931c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
-extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
#endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index a8264aebf3d4..2f16b5249283 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
}
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
- int ret;
-
- ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
- &nvbo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return nvbo->dma_buf_vmap.virtual;
-}
-
-void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-
- ttm_bo_kunmap(&nvbo->dma_buf_vmap);
-}
-
struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg)
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index fdbc8d949135..5ab03d605f57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -5,6 +5,7 @@
#include <drm/drm_gem_shmem_helper.h>
#include <drm/panfrost_drm.h>
#include <linux/completion.h>
+#include <linux/dma-buf-map.h>
#include <linux/iopoll.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
@@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
{
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+ struct dma_buf_map map;
struct drm_gem_shmem_object *bo;
u32 cfg, as;
int ret;
@@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
goto err_close_bo;
}
- perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
- if (IS_ERR(perfcnt->buf)) {
- ret = PTR_ERR(perfcnt->buf);
+ ret = drm_gem_shmem_vmap(&bo->base, &map);
+ if (ret)
goto err_put_mapping;
- }
+ perfcnt->buf = map.vaddr;
/*
* Invalidate the cache and clear the counters to start from a fresh
@@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
return 0;
err_vunmap:
- drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
+ drm_gem_shmem_vunmap(&bo->base, &map);
err_put_mapping:
panfrost_gem_mapping_put(perfcnt->mapping);
err_close_bo:
@@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
{
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
if (user != perfcnt->user)
return -EINVAL;
@@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
perfcnt->user = NULL;
- drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
+ drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
perfcnt->buf = NULL;
panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 45fd76e04bdc..e165fa9b2089 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -25,6 +25,7 @@
#include <linux/crc32.h>
#include <linux/delay.h>
+#include <linux/dma-buf-map.h>
#include <drm/drm_drv.h>
#include <drm/drm_atomic.h>
@@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
struct drm_gem_object *obj;
struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
int ret;
+ struct dma_buf_map user_map;
+ struct dma_buf_map cursor_map;
void *user_ptr;
int size = 64*64*4;
@@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
user_bo = gem_to_qxl_bo(obj);
/* pinning is done in the prepare/cleanup framebuffer */
- ret = qxl_bo_kmap(user_bo, &user_ptr);
+ ret = qxl_bo_kmap(user_bo, &user_map);
if (ret)
goto out_free_release;
+ user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
ret = qxl_alloc_bo_reserved(qdev, release,
sizeof(struct qxl_cursor) + size,
@@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
if (ret)
goto out_unpin;
- ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+ ret = qxl_bo_kmap(cursor_bo, &cursor_map);
if (ret)
goto out_backoff;
@@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
{
int ret;
struct drm_gem_object *gobj;
+ struct dma_buf_map map;
int monitors_config_size = sizeof(struct qxl_monitors_config) +
qxl_num_crtc * sizeof(struct qxl_head);
@@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
if (ret)
return ret;
- qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+ qxl_bo_kmap(qdev->monitors_config_bo, &map);
qdev->monitors_config = qdev->monitors_config_bo->kptr;
qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..7b7acb910780 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -20,6 +20,8 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <linux/dma-buf-map.h>
+
#include <drm/drm_fourcc.h>
#include "qxl_drv.h"
@@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
unsigned int num_clips,
struct qxl_bo *clips_bo)
{
+ struct dma_buf_map map;
struct qxl_clip_rects *dev_clips;
int ret;
- ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
- if (ret) {
+ ret = qxl_bo_kmap(clips_bo, &map);
+ if (ret)
return NULL;
- }
+ dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
+
dev_clips->num_rects = num_clips;
dev_clips->chunk.next_chunk = 0;
dev_clips->chunk.prev_chunk = 0;
@@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
int stride = fb->pitches[0];
/* depth is not actually interesting, we don't mask with it */
int depth = fb->format->cpp[0] * 8;
+ struct dma_buf_map surface_map;
uint8_t *surface_base;
struct qxl_release *release;
struct qxl_bo *clips_bo;
@@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
if (ret)
goto out_release_backoff;
- ret = qxl_bo_kmap(bo, (void **)&surface_base);
+ ret = qxl_bo_kmap(bo, &surface_map);
if (ret)
goto out_release_backoff;
+ surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
ret = qxl_image_init(qdev, release, dimage, surface_base,
left - dumb_shadow_offset,
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..eb437fea5d9e 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -30,6 +30,7 @@
* Definitions taken from spice-protocol, plus kernel driver specific bits.
*/
+#include <linux/dma-buf-map.h>
#include <linux/dma-fence.h>
#include <linux/firmware.h>
#include <linux/platform_device.h>
@@ -50,6 +51,8 @@
#include "qxl_dev.h"
+struct dma_buf_map;
+
#define DRIVER_AUTHOR "Dave Airlie"
#define DRIVER_NAME "qxl"
@@ -79,7 +82,7 @@ struct qxl_bo {
/* Protected by tbo.reserved */
struct ttm_place placements[3];
struct ttm_placement placement;
- struct ttm_bo_kmap_obj kmap;
+ struct dma_buf_map map;
void *kptr;
unsigned int map_count;
int type;
@@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
void qxl_gem_object_close(struct drm_gem_object *obj,
struct drm_file *file_priv);
void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
/* qxl_dumb.c */
int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
struct drm_gem_object *qxl_gem_prime_import_sg_table(
struct drm_device *dev, struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+ struct dma_buf_map *map);
int qxl_gem_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 940e99354f49..755df4d8f95f 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -23,10 +23,12 @@
* Alon Levy
*/
+#include <linux/dma-buf-map.h>
+#include <linux/io-mapping.h>
+
#include "qxl_drv.h"
#include "qxl_object.h"
-#include <linux/io-mapping.h>
static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
{
struct qxl_bo *bo;
@@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
return 0;
}
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
{
- bool is_iomem;
int r;
if (bo->kptr) {
- if (ptr)
- *ptr = bo->kptr;
bo->map_count++;
- return 0;
+ goto out;
}
- r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
+ r = ttm_bo_vmap(&bo->tbo, &bo->map);
if (r)
return r;
- bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
- if (ptr)
- *ptr = bo->kptr;
bo->map_count = 1;
+
+ /* TODO: Remove kptr in favor of map everywhere. */
+ if (bo->map.is_iomem)
+ bo->kptr = (void *)bo->map.vaddr_iomem;
+ else
+ bo->kptr = bo->map.vaddr;
+
+out:
+ *map = bo->map;
return 0;
}
@@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
void *rptr;
int ret;
struct io_mapping *map;
+ struct dma_buf_map bo_map;
if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
map = qdev->vram_mapping;
@@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
return rptr;
}
- ret = qxl_bo_kmap(bo, &rptr);
+ ret = qxl_bo_kmap(bo, &bo_map);
if (ret)
return NULL;
+ rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
rptr += page_offset * PAGE_SIZE;
return rptr;
@@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
if (bo->map_count > 0)
return;
bo->kptr = NULL;
- ttm_bo_kunmap(&bo->kmap);
+ ttm_bo_vunmap(&bo->tbo, &bo->map);
}
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 09a5c818324d..ebf24c9d2bf2 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
bool kernel, bool pinned, u32 domain,
struct qxl_surface *surf,
struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
extern void qxl_bo_kunmap(struct qxl_bo *bo);
void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
return ERR_PTR(-ENOSYS);
}
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
- void *ptr;
int ret;
- ret = qxl_bo_kmap(bo, &ptr);
+ ret = qxl_bo_kmap(bo, map);
if (ret < 0)
- return ERR_PTR(ret);
+ return ret;
- return ptr;
+ return 0;
}
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+ struct dma_buf_map *map)
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5d54bccebd4d..44cb5ee6fc20 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -509,7 +509,6 @@ struct radeon_bo {
/* Constant after initialization */
struct radeon_device *rdev;
- struct ttm_bo_kmap_obj dma_buf_vmap;
pid_t pid;
#ifdef CONFIG_MMU_NOTIFIER
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 0ccd7213e41f..d2876ce3bc9e 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -31,6 +31,7 @@
#include <drm/drm_debugfs.h>
#include <drm/drm_device.h>
#include <drm/drm_file.h>
+#include <drm/drm_gem_ttm_helper.h>
#include <drm/radeon_drm.h>
#include "radeon.h"
@@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
int radeon_gem_prime_pin(struct drm_gem_object *obj);
void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
static const struct drm_gem_object_funcs radeon_gem_object_funcs;
@@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
.pin = radeon_gem_prime_pin,
.unpin = radeon_gem_prime_unpin,
.get_sg_table = radeon_gem_prime_get_sg_table,
- .vmap = radeon_gem_prime_vmap,
- .vunmap = radeon_gem_prime_vunmap,
+ .vmap = drm_gem_ttm_vmap,
+ .vunmap = drm_gem_ttm_vunmap,
};
/*
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b9de0e51c0be..088d39a51c0d 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
}
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
-{
- struct radeon_bo *bo = gem_to_radeon_bo(obj);
- int ret;
-
- ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
- &bo->dma_buf_vmap);
- if (ret)
- return ERR_PTR(ret);
-
- return bo->dma_buf_vmap.virtual;
-}
-
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- struct radeon_bo *bo = gem_to_radeon_bo(obj);
-
- ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7d5ebb10323b..7971f57436dd 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
return ERR_PTR(ret);
}
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
- if (rk_obj->pages)
- return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
- pgprot_writecombine(PAGE_KERNEL));
+ if (rk_obj->pages) {
+ void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+ pgprot_writecombine(PAGE_KERNEL));
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+ return 0;
+ }
if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
- return NULL;
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
- return rk_obj->kvaddr;
+ return 0;
}
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
if (rk_obj->pages) {
- vunmap(vaddr);
+ vunmap(map->vaddr);
return;
}
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 7ffc541bea07..5a70a56cd406 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -31,8 +31,8 @@ struct drm_gem_object *
rockchip_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sg);
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
/* drm driver mmap file operations */
int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..c02e35ed6e76 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -17,6 +17,7 @@
*/
#include <linux/console.h>
+#include <linux/dma-buf-map.h>
#include <linux/module.h>
#include <linux/pci.h>
@@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
struct drm_rect *rect)
{
struct cirrus_device *cirrus = to_cirrus(fb->dev);
+ struct dma_buf_map map;
void *vmap;
int idx, ret;
@@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
if (!drm_dev_enter(&cirrus->dev, &idx))
goto out;
- ret = -ENOMEM;
- vmap = drm_gem_shmem_vmap(fb->obj[0]);
- if (!vmap)
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret)
goto out_dev_exit;
+ vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
if (cirrus->cpp == fb->format->cpp[0])
drm_fb_memcpy_dstclip(cirrus->vram,
@@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
else
WARN_ON_ONCE("cpp mismatch");
- drm_gem_shmem_vunmap(fb->obj[0], vmap);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
ret = 0;
out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..12a890cea6e9 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
{
int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
struct drm_framebuffer *fb;
+ struct dma_buf_map map;
void *vaddr;
u8 *src;
@@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
y1 = gm12u320->fb_update.rect.y1;
y2 = gm12u320->fb_update.rect.y2;
- vaddr = drm_gem_shmem_vmap(fb->obj[0]);
- if (IS_ERR(vaddr)) {
- GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret) {
+ GM12U320_ERR("failed to vmap fb: %d\n", ret);
goto put_fb;
}
+ vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
if (fb->obj[0]->import_attach) {
ret = dma_buf_begin_cpu_access(
@@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
}
vunmap:
- drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
put_fb:
drm_framebuffer_put(fb);
gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..42eeba1dfdbf 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
struct urb *urb;
struct drm_rect clip;
int log_bpp;
+ struct dma_buf_map map;
void *vaddr;
ret = udl_log_cpp(fb->format->cpp[0]);
@@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
return ret;
}
- vaddr = drm_gem_shmem_vmap(fb->obj[0]);
- if (IS_ERR(vaddr)) {
+ ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+ if (ret) {
DRM_ERROR("failed to vmap fb\n");
goto out_dma_buf_end_cpu_access;
}
+ vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
urb = udl_get_urb(dev);
if (!urb)
@@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
ret = 0;
out_drm_gem_shmem_vunmap:
- drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+ drm_gem_shmem_vunmap(fb->obj[0], &map);
out_dma_buf_end_cpu_access:
if (import_attach) {
tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 931c55126148..f268fb258c83 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -9,6 +9,8 @@
* Michael Thayer <michael.thayer@oracle.com,
* Hans de Goede <hdegoede@redhat.com>
*/
+
+#include <linux/dma-buf-map.h>
#include <linux/export.h>
#include <drm/drm_atomic.h>
@@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
u32 height = plane->state->crtc_h;
size_t data_size, mask_size;
u32 flags;
+ struct dma_buf_map map;
+ int ret;
u8 *src;
/*
@@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
vbox_crtc->cursor_enabled = true;
- src = drm_gem_vram_vmap(gbo);
- if (IS_ERR(src)) {
+ ret = drm_gem_vram_vmap(gbo, &map);
+ if (ret) {
/*
* BUG: we should have pinned the BO in prepare_fb().
*/
@@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
DRM_WARN("Could not map cursor bo, skipping update\n");
return;
}
+ src = map.vaddr; /* TODO: Use mapping abstraction properly */
/*
* The mask must be calculated based on the alpha
@@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
data_size = width * height * 4 + mask_size;
copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
- drm_gem_vram_vunmap(gbo, src);
+ drm_gem_vram_vunmap(gbo, &map);
flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
VBOX_MOUSE_POINTER_ALPHA;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 557f0d1e6437..f290a9a942dc 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
return drm_gem_cma_prime_mmap(obj, vma);
}
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct vc4_bo *bo = to_vc4_bo(obj);
if (bo->validated_shader) {
DRM_DEBUG("mmaping of shader BOs not allowed.\n");
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
}
- return drm_gem_cma_prime_vmap(obj);
+ return drm_gem_cma_prime_vmap(obj, map);
}
struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index cc79b1aaa878..904f2c36c963 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int vc4_bo_cache_init(struct drm_device *dev);
void vc4_bo_cache_destroy(struct drm_device *dev);
int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..b2aa26e1e4a2 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
return &obj->base;
}
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
long n_pages = obj->size >> PAGE_SHIFT;
struct page **pages;
+ void *vaddr;
pages = vgem_pin_pages(bo);
if (IS_ERR(pages))
- return NULL;
+ return PTR_ERR(pages);
+
+ vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
- return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ return 0;
}
-static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
- vunmap(vaddr);
+ vunmap(map->vaddr);
vgem_unpin_pages(bo);
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..74db5a840bed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return gem_mmap_obj(xen_obj, vma);
}
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
{
struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+ void *vaddr;
if (!xen_obj->pages)
- return NULL;
+ return -ENOMEM;
/* Please see comment in gem_mmap_obj on mapping and attributes. */
- return vmap(xen_obj->pages, xen_obj->num_pages,
- VM_MAP, PAGE_KERNEL);
+ vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
+ VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return -ENOMEM;
+ dma_buf_map_set_vaddr(map, vaddr);
+
+ return 0;
}
void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
- void *vaddr)
+ struct dma_buf_map *map)
{
- vunmap(vaddr);
+ vunmap(map->vaddr);
}
int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..a4e67d0a149c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -12,6 +12,7 @@
#define __XEN_DRM_FRONT_GEM_H
struct dma_buf_attachment;
+struct dma_buf_map;
struct drm_device;
struct drm_gem_object;
struct file;
@@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+ struct dma_buf_map *map);
void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
- void *vaddr);
+ struct dma_buf_map *map);
int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
struct vm_area_struct *vma);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c38dd35da00b..5e6daa1c982f 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
#include <drm/drm_vma_manager.h>
+struct dma_buf_map;
struct drm_gem_object;
/**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
*
* This callback is optional.
*/
- void *(*vmap)(struct drm_gem_object *obj);
+ int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
/**
* @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
*
* This callback is optional.
*/
- void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+ void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
/**
* @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index a064b0d1c480..caf98b9cf4b4 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
struct sg_table *sgt);
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
struct drm_gem_object *
drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
int drm_gem_shmem_pin(struct drm_gem_object *obj);
void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..c0d28ba0f5c9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -10,6 +10,7 @@
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
+#include <linux/dma-buf-map.h>
#include <linux/kernel.h> /* for container_of() */
struct drm_mode_create_dumb;
@@ -29,9 +30,8 @@ struct vm_area_struct;
/**
* struct drm_gem_vram_object - GEM object backed by VRAM
- * @gem: GEM object
* @bo: TTM buffer object
- * @kmap: Mapping information for @bo
+ * @map: Mapping information for @bo
* @placement: TTM placement information. Supported placements are \
%TTM_PL_VRAM and %TTM_PL_SYSTEM
* @placements: TTM placement information.
@@ -50,15 +50,15 @@ struct vm_area_struct;
*/
struct drm_gem_vram_object {
struct ttm_buffer_object bo;
- struct ttm_bo_kmap_obj kmap;
+ struct dma_buf_map map;
/**
- * @kmap_use_count:
+ * @vmap_use_count:
*
* Reference count on the virtual address.
* The address are un-mapped when the count reaches zero.
*/
- unsigned int kmap_use_count;
+ unsigned int vmap_use_count;
/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
struct ttm_placement placement;
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
int drm_gem_vram_fill_create_dumb(struct drm_file *file,
struct drm_device *dev,
--
2.28.0
* [PATCH v4 07/10] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map
2020-10-15 12:37 ` Thomas Zimmermann
` (3 preceding siblings ...)
@ 2020-10-15 12:38 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann,
Daniel Vetter
GEM's vmap and vunmap interfaces now wrap memory pointers in struct
dma_buf_map.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
drivers/gpu/drm/drm_client.c | 18 +++++++++++-------
drivers/gpu/drm/drm_gem.c | 26 +++++++++++++-------------
drivers/gpu/drm/drm_internal.h | 5 +++--
drivers/gpu/drm/drm_prime.c | 14 ++++----------
4 files changed, 31 insertions(+), 32 deletions(-)
diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 495f47d23d87..ac0082bed966 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -3,6 +3,7 @@
* Copyright 2018 Noralf Trønnes
*/
+#include <linux/dma-buf-map.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/mutex.h>
@@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
*/
void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
{
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
if (buffer->vaddr)
return buffer->vaddr;
@@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
* fd_install step out of the driver backend hooks, to make that
* final step optional for internal users.
*/
- vaddr = drm_gem_vmap(buffer->gem);
- if (IS_ERR(vaddr))
- return vaddr;
+ ret = drm_gem_vmap(buffer->gem, &map);
+ if (ret)
+ return ERR_PTR(ret);
- buffer->vaddr = vaddr;
+ buffer->vaddr = map.vaddr;
- return vaddr;
+ return map.vaddr;
}
EXPORT_SYMBOL(drm_client_buffer_vmap);
@@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
*/
void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
{
- drm_gem_vunmap(buffer->gem, buffer->vaddr);
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+
+ drm_gem_vunmap(buffer->gem, &map);
buffer->vaddr = NULL;
}
EXPORT_SYMBOL(drm_client_buffer_vunmap);
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index a89ad4570e3c..4d5fff4bd821 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1206,32 +1206,32 @@ void drm_gem_unpin(struct drm_gem_object *obj)
obj->funcs->unpin(obj);
}
-void *drm_gem_vmap(struct drm_gem_object *obj)
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
- struct dma_buf_map map;
int ret;
if (!obj->funcs->vmap)
- return ERR_PTR(-EOPNOTSUPP);
+ return -EOPNOTSUPP;
- ret = obj->funcs->vmap(obj, &map);
+ ret = obj->funcs->vmap(obj, map);
if (ret)
- return ERR_PTR(ret);
- else if (dma_buf_map_is_null(&map))
- return ERR_PTR(-ENOMEM);
+ return ret;
+ else if (dma_buf_map_is_null(map))
+ return -ENOMEM;
- return map.vaddr;
+ return 0;
}
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
- struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
-
- if (!vaddr)
+ if (dma_buf_map_is_null(map))
return;
if (obj->funcs->vunmap)
- obj->funcs->vunmap(obj, &map);
+ obj->funcs->vunmap(obj, map);
+
+ /* Always set the mapping to NULL. Callers may rely on this. */
+ dma_buf_map_clear(map);
}
/**
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index b65865c630b0..58832d75a9bd 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -33,6 +33,7 @@
struct dentry;
struct dma_buf;
+struct dma_buf_map;
struct drm_connector;
struct drm_crtc;
struct drm_framebuffer;
@@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
int drm_gem_pin(struct drm_gem_object *obj);
void drm_gem_unpin(struct drm_gem_object *obj);
-void *drm_gem_vmap(struct drm_gem_object *obj);
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
/* drm_debugfs.c drm_debugfs_crc.c */
#if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 89e2a2496734..cb8fbeeb731b 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
*
* Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap
* callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
+ * The kernel virtual address is returned in map.
*
- * Returns the kernel virtual address or NULL on failure.
+ * Returns 0 on success or a negative errno code otherwise.
*/
int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
- void *vaddr;
- vaddr = drm_gem_vmap(obj);
- if (IS_ERR(vaddr))
- return PTR_ERR(vaddr);
-
- dma_buf_map_set_vaddr(map, vaddr);
-
- return 0;
+ return drm_gem_vmap(obj, map);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
@@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
- drm_gem_vunmap(obj, map->vaddr);
+ drm_gem_vunmap(obj, map);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
--
2.28.0
* [PATCH v4 07/10] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map
@ 2020-10-15 12:38 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann,
Daniel Vetter
GEM's vmap and vunmap interfaces now wrap memory pointers in struct
dma_buf_map.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
drivers/gpu/drm/drm_client.c | 18 +++++++++++-------
drivers/gpu/drm/drm_gem.c | 26 +++++++++++++-------------
drivers/gpu/drm/drm_internal.h | 5 +++--
drivers/gpu/drm/drm_prime.c | 14 ++++----------
4 files changed, 31 insertions(+), 32 deletions(-)
diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 495f47d23d87..ac0082bed966 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -3,6 +3,7 @@
* Copyright 2018 Noralf Trønnes
*/
+#include <linux/dma-buf-map.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/mutex.h>
@@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
*/
void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
{
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
if (buffer->vaddr)
return buffer->vaddr;
@@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
* fd_install step out of the driver backend hooks, to make that
* final step optional for internal users.
*/
- vaddr = drm_gem_vmap(buffer->gem);
- if (IS_ERR(vaddr))
- return vaddr;
+ ret = drm_gem_vmap(buffer->gem, &map);
+ if (ret)
+ return ERR_PTR(ret);
- buffer->vaddr = vaddr;
+ buffer->vaddr = map.vaddr;
- return vaddr;
+ return map.vaddr;
}
EXPORT_SYMBOL(drm_client_buffer_vmap);
@@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
*/
void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
{
- drm_gem_vunmap(buffer->gem, buffer->vaddr);
+ struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+
+ drm_gem_vunmap(buffer->gem, &map);
buffer->vaddr = NULL;
}
EXPORT_SYMBOL(drm_client_buffer_vunmap);
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index a89ad4570e3c..4d5fff4bd821 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1206,32 +1206,32 @@ void drm_gem_unpin(struct drm_gem_object *obj)
obj->funcs->unpin(obj);
}
-void *drm_gem_vmap(struct drm_gem_object *obj)
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
- struct dma_buf_map map;
int ret;
if (!obj->funcs->vmap)
- return ERR_PTR(-EOPNOTSUPP);
+ return -EOPNOTSUPP;
- ret = obj->funcs->vmap(obj, &map);
+ ret = obj->funcs->vmap(obj, map);
if (ret)
- return ERR_PTR(ret);
- else if (dma_buf_map_is_null(&map))
- return ERR_PTR(-ENOMEM);
+ return ret;
+ else if (dma_buf_map_is_null(map))
+ return -ENOMEM;
- return map.vaddr;
+ return 0;
}
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
- struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
-
- if (!vaddr)
+ if (dma_buf_map_is_null(map))
return;
if (obj->funcs->vunmap)
- obj->funcs->vunmap(obj, &map);
+ obj->funcs->vunmap(obj, map);
+
+ /* Always set the mapping to NULL. Callers may rely on this. */
+ dma_buf_map_clear(map);
}
/**
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index b65865c630b0..58832d75a9bd 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -33,6 +33,7 @@
struct dentry;
struct dma_buf;
+struct dma_buf_map;
struct drm_connector;
struct drm_crtc;
struct drm_framebuffer;
@@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
int drm_gem_pin(struct drm_gem_object *obj);
void drm_gem_unpin(struct drm_gem_object *obj);
-void *drm_gem_vmap(struct drm_gem_object *obj);
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
/* drm_debugfs.c drm_debugfs_crc.c */
#if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 89e2a2496734..cb8fbeeb731b 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
*
* Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap
* callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
+ * The kernel virtual address is returned in map.
*
- * Returns the kernel virtual address or NULL on failure.
+ * Returns 0 on success or a negative errno code otherwise.
*/
int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
- void *vaddr;
- vaddr = drm_gem_vmap(obj);
- if (IS_ERR(vaddr))
- return PTR_ERR(vaddr);
-
- dma_buf_map_set_vaddr(map, vaddr);
-
- return 0;
+ return drm_gem_vmap(obj, map);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
@@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
- drm_gem_vunmap(obj, map->vaddr);
+ drm_gem_vunmap(obj, map);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
--
2.28.0
* [PATCH v4 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map
@ 2020-10-15 12:38 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann,
Daniel Vetter
Kernel DRM clients now store their framebuffer address in an instance
of struct dma_buf_map. Depending on the buffer's location, the address
refers to system or I/O memory.
Callers of drm_client_buffer_vmap() receive a copy of the mapping in
the function's supplied argument. The copy can be accessed and modified
through the dma_buf_map interfaces.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
drivers/gpu/drm/drm_client.c | 34 +++++++++++++++++++--------------
drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++---------
include/drm/drm_client.h | 7 ++++---
3 files changed, 38 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index ac0082bed966..fe573acf1067 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
{
struct drm_device *dev = buffer->client->dev;
- drm_gem_vunmap(buffer->gem, buffer->vaddr);
+ drm_gem_vunmap(buffer->gem, &buffer->map);
if (buffer->gem)
drm_gem_object_put(buffer->gem);
@@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
/**
* drm_client_buffer_vmap - Map DRM client buffer into address space
* @buffer: DRM client buffer
+ * @map_copy: Returns the mapped memory's address
*
* This function maps a client buffer into kernel address space. If the
- * buffer is already mapped, it returns the mapping's address.
+ * buffer is already mapped, it returns the existing mapping's address.
*
* Client buffer mappings are not ref'counted. Each call to
* drm_client_buffer_vmap() should be followed by a call to
* drm_client_buffer_vunmap(); or the client buffer should be mapped
* throughout its lifetime.
*
+ * The returned address is a copy of the internal value. In contrast to
+ * other vmap interfaces, you don't need it for the client's vunmap
+ * function. So you can modify it at will during blit and draw operations.
+ *
* Returns:
- * The mapped memory's address
+ * 0 on success, or a negative errno code otherwise.
*/
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
+int
+drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
{
- struct dma_buf_map map;
+ struct dma_buf_map *map = &buffer->map;
int ret;
- if (buffer->vaddr)
- return buffer->vaddr;
+ if (dma_buf_map_is_set(map))
+ goto out;
/*
* FIXME: The dependency on GEM here isn't required, we could
@@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
* fd_install step out of the driver backend hooks, to make that
* final step optional for internal users.
*/
- ret = drm_gem_vmap(buffer->gem, &map);
+ ret = drm_gem_vmap(buffer->gem, map);
if (ret)
- return ERR_PTR(ret);
+ return ret;
- buffer->vaddr = map.vaddr;
+out:
+ *map_copy = *map;
- return map.vaddr;
+ return 0;
}
EXPORT_SYMBOL(drm_client_buffer_vmap);
@@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
*/
void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
{
- struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+ struct dma_buf_map *map = &buffer->map;
- drm_gem_vunmap(buffer->gem, &map);
- buffer->vaddr = NULL;
+ drm_gem_vunmap(buffer->gem, map);
}
EXPORT_SYMBOL(drm_client_buffer_vunmap);
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index c2f72bb6afb1..6212cd7cde1d 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
unsigned int cpp = fb->format->cpp[0];
size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
void *src = fb_helper->fbdev->screen_buffer + offset;
- void *dst = fb_helper->buffer->vaddr + offset;
+ void *dst = fb_helper->buffer->map.vaddr + offset;
size_t len = (clip->x2 - clip->x1) * cpp;
unsigned int y;
@@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
struct drm_clip_rect *clip = &helper->dirty_clip;
struct drm_clip_rect clip_copy;
unsigned long flags;
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
spin_lock_irqsave(&helper->dirty_lock, flags);
clip_copy = *clip;
@@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
/* Generic fbdev uses a shadow buffer */
if (helper->buffer) {
- vaddr = drm_client_buffer_vmap(helper->buffer);
- if (IS_ERR(vaddr))
+ ret = drm_client_buffer_vmap(helper->buffer, &map);
+ if (ret)
return;
drm_fb_helper_dirty_blit_real(helper, &clip_copy);
}
@@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
struct drm_framebuffer *fb;
struct fb_info *fbi;
u32 format;
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
sizes->surface_width, sizes->surface_height,
@@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
fb_deferred_io_init(fbi);
} else {
/* buffer is mapped for HW framebuffer */
- vaddr = drm_client_buffer_vmap(fb_helper->buffer);
- if (IS_ERR(vaddr))
- return PTR_ERR(vaddr);
+ ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+ if (ret)
+ return ret;
+ if (map.is_iomem)
+ fbi->screen_base = map.vaddr_iomem;
+ else
+ fbi->screen_buffer = map.vaddr;
- fbi->screen_buffer = vaddr;
/* Shamelessly leak the physical address to user-space */
#if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
index 7aaea665bfc2..f07f2fb02e75 100644
--- a/include/drm/drm_client.h
+++ b/include/drm/drm_client.h
@@ -3,6 +3,7 @@
#ifndef _DRM_CLIENT_H_
#define _DRM_CLIENT_H_
+#include <linux/dma-buf-map.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/types.h>
@@ -141,9 +142,9 @@ struct drm_client_buffer {
struct drm_gem_object *gem;
/**
- * @vaddr: Virtual address for the buffer
+ * @map: Virtual address for the buffer
*/
- void *vaddr;
+ struct dma_buf_map map;
/**
* @fb: DRM framebuffer
@@ -155,7 +156,7 @@ struct drm_client_buffer *
drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
+int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
int drm_client_modeset_create(struct drm_client_dev *client);
--
2.28.0
^ permalink raw reply related [flat|nested] 195+ messages in thread
- buffer->vaddr = NULL;
+ drm_gem_vunmap(buffer->gem, map);
}
EXPORT_SYMBOL(drm_client_buffer_vunmap);
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index c2f72bb6afb1..6212cd7cde1d 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
unsigned int cpp = fb->format->cpp[0];
size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
void *src = fb_helper->fbdev->screen_buffer + offset;
- void *dst = fb_helper->buffer->vaddr + offset;
+ void *dst = fb_helper->buffer->map.vaddr + offset;
size_t len = (clip->x2 - clip->x1) * cpp;
unsigned int y;
@@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
struct drm_clip_rect *clip = &helper->dirty_clip;
struct drm_clip_rect clip_copy;
unsigned long flags;
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
spin_lock_irqsave(&helper->dirty_lock, flags);
clip_copy = *clip;
@@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
/* Generic fbdev uses a shadow buffer */
if (helper->buffer) {
- vaddr = drm_client_buffer_vmap(helper->buffer);
- if (IS_ERR(vaddr))
+ ret = drm_client_buffer_vmap(helper->buffer, &map);
+ if (ret)
return;
drm_fb_helper_dirty_blit_real(helper, &clip_copy);
}
@@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
struct drm_framebuffer *fb;
struct fb_info *fbi;
u32 format;
- void *vaddr;
+ struct dma_buf_map map;
+ int ret;
drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
sizes->surface_width, sizes->surface_height,
@@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
fb_deferred_io_init(fbi);
} else {
/* buffer is mapped for HW framebuffer */
- vaddr = drm_client_buffer_vmap(fb_helper->buffer);
- if (IS_ERR(vaddr))
- return PTR_ERR(vaddr);
+ ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+ if (ret)
+ return ret;
+ if (map.is_iomem)
+ fbi->screen_base = map.vaddr_iomem;
+ else
+ fbi->screen_buffer = map.vaddr;
- fbi->screen_buffer = vaddr;
/* Shamelessly leak the physical address to user-space */
#if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
index 7aaea665bfc2..f07f2fb02e75 100644
--- a/include/drm/drm_client.h
+++ b/include/drm/drm_client.h
@@ -3,6 +3,7 @@
#ifndef _DRM_CLIENT_H_
#define _DRM_CLIENT_H_
+#include <linux/dma-buf-map.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/types.h>
@@ -141,9 +142,9 @@ struct drm_client_buffer {
struct drm_gem_object *gem;
/**
- * @vaddr: Virtual address for the buffer
+ * @map: Virtual address for the buffer
*/
- void *vaddr;
+ struct dma_buf_map map;
/**
* @fb: DRM framebuffer
@@ -155,7 +156,7 @@ struct drm_client_buffer *
drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
+int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
int drm_client_modeset_create(struct drm_client_dev *client);
--
2.28.0
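The change in calling convention can be illustrated with a small userspace model. All names below (fake_client_buffer, fake_client_buffer_vmap) are hypothetical stand-ins for illustration; only the layout of struct dma_buf_map mirrors the kernel type, and the kernel's void __iomem * is modeled as a plain pointer.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for the kernel's struct dma_buf_map: an address plus
 * a flag saying whether it refers to I/O memory or system memory. */
struct dma_buf_map {
	union {
		void *vaddr_iomem;	/* void __iomem * in the kernel */
		void *vaddr;
	};
	bool is_iomem;
};

/* Hypothetical client buffer; like drm_client_buffer, it stores the
 * mapping internally instead of a bare void *vaddr. */
struct fake_client_buffer {
	struct dma_buf_map map;		/* set on first vmap */
	char pixels[64];		/* pretend framebuffer storage */
};

/* Models the new drm_client_buffer_vmap() convention: return 0 or a
 * negative errno, and hand the caller a copy of the internal mapping.
 * Calling it on an already-mapped buffer just returns the same copy. */
static int fake_client_buffer_vmap(struct fake_client_buffer *buffer,
				   struct dma_buf_map *map_copy)
{
	if (!buffer->map.vaddr) {	/* not mapped yet */
		buffer->map.vaddr = buffer->pixels;
		buffer->map.is_iomem = false;
	}
	*map_copy = buffer->map;	/* a copy, not a reference */
	return 0;
}
```

The caller then branches on is_iomem, as drm_fb_helper_generic_probe() does above when choosing between fbi->screen_base (I/O memory) and fbi->screen_buffer (system memory).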
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
* [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
2020-10-15 12:37 ` Thomas Zimmermann
@ 2020-10-15 12:38 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann
To do framebuffer updates, one needs memcpy from system memory and a
pointer-increment function. Add both interfaces with documentation.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
1 file changed, 62 insertions(+), 10 deletions(-)
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index 2e8bbecb5091..6ca0f304dda2 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -32,6 +32,14 @@
* accessing the buffer. Use the returned instance and the helper functions
* to access the buffer's memory in the correct way.
*
+ * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
+ * actually independent from the dma-buf infrastructure. When sharing buffers
+ * among devices, drivers have to know the location of the memory to access
+ * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
+ * solves this problem for dma-buf and its users. If other drivers or
+ * sub-systems require similar functionality, the type could be generalized
+ * and moved to a more prominent header file.
+ *
* Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
* considered bad style. Rather then accessing its fields directly, use one
* of the provided helper functions, or implement your own. For example,
@@ -51,6 +59,14 @@
*
* dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
*
+ * Instances of struct dma_buf_map do not have to be cleaned up, but
+ * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
+ * always refer to system memory.
+ *
+ * .. code-block:: c
+ *
+ * dma_buf_map_clear(&map);
+ *
* Test if a mapping is valid with either dma_buf_map_is_set() or
* dma_buf_map_is_null().
*
@@ -73,17 +89,19 @@
* if (dma_buf_map_is_equal(&sys_map, &io_map))
* // always false
*
- * Instances of struct dma_buf_map do not have to be cleaned up, but
- * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
- * always refer to system memory.
+ * A set up instance of struct dma_buf_map can be used to access or manipulate
+ * the buffer memory. Depending on the location of the memory, the provided
+ * helpers will pick the correct operations. Data can be copied into the memory
+ * with dma_buf_map_memcpy_to(). The address can be manipulated with
+ * dma_buf_map_incr().
*
- * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
- * actually independent from the dma-buf infrastructure. When sharing buffers
- * among devices, drivers have to know the location of the memory to access
- * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
- * solves this problem for dma-buf and its users. If other drivers or
- * sub-systems require similar functionality, the type could be generalized
- * and moved to a more prominent header file.
+ * .. code-block:: c
+ *
+ * const void *src = ...; // source buffer
+ * size_t len = ...; // length of src
+ *
+ * dma_buf_map_memcpy_to(&map, src, len);
+ * dma_buf_map_incr(&map, len); // go to first byte after the memcpy
*/
/**
@@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
}
}
+/**
+ * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
+ * @dst: The dma-buf mapping structure
+ * @src: The source buffer
+ * @len: The number of bytes in src
+ *
+ * Copies data into a dma-buf mapping. The source buffer is in system
+ * memory. Depending on the buffer's location, the helper picks the correct
+ * method of accessing the memory.
+ */
+static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
+{
+ if (dst->is_iomem)
+ memcpy_toio(dst->vaddr_iomem, src, len);
+ else
+ memcpy(dst->vaddr, src, len);
+}
+
+/**
+ * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
+ * @map: The dma-buf mapping structure
+ * @incr: The number of bytes to increment
+ *
+ * Increments the address stored in a dma-buf mapping. Depending on the
+ * buffer's location, the correct value will be updated.
+ */
+static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
+{
+ if (map->is_iomem)
+ map->vaddr_iomem += incr;
+ else
+ map->vaddr += incr;
+}
+
#endif /* __DMA_BUF_MAP_H__ */
--
2.28.0
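The two helpers simply dispatch on the mapping's location. A userspace sketch of the same logic (substituting plain memcpy() for memcpy_toio(), which exists only in the kernel, and casting through char * where the kernel relies on GCC's void-pointer arithmetic) behaves as follows:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Userspace model of struct dma_buf_map; vaddr_iomem would be
 * void __iomem * in the kernel. */
struct dma_buf_map {
	union {
		void *vaddr_iomem;
		void *vaddr;
	};
	bool is_iomem;
};

/* Copy from system memory into the mapping, picking the access
 * method from the mapping's location. */
static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst,
					 const void *src, size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr_iomem, src, len); /* memcpy_toio() in the kernel */
	else
		memcpy(dst->vaddr, src, len);
}

/* Advance the stored address by incr bytes, whichever union member
 * is active. */
static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
{
	if (map->is_iomem)
		map->vaddr_iomem = (char *)map->vaddr_iomem + incr;
	else
		map->vaddr = (char *)map->vaddr + incr;
}
```

This is exactly the per-scanline pattern the fbdev blit code needs: copy one line, then increment the destination to the next line's offset.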
* [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
@ 2020-10-15 12:38 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
Thomas Zimmermann, xen-devel, spice-devel, linux-arm-kernel,
linux-media
To do framebuffer updates, one needs memcpy from system memory and a
pointer-increment function. Add both interfaces with documentation.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
1 file changed, 62 insertions(+), 10 deletions(-)
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index 2e8bbecb5091..6ca0f304dda2 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -32,6 +32,14 @@
* accessing the buffer. Use the returned instance and the helper functions
* to access the buffer's memory in the correct way.
*
+ * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
+ * actually independent from the dma-buf infrastructure. When sharing buffers
+ * among devices, drivers have to know the location of the memory to access
+ * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
+ * solves this problem for dma-buf and its users. If other drivers or
+ * sub-systems require similar functionality, the type could be generalized
+ * and moved to a more prominent header file.
+ *
* Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
* considered bad style. Rather then accessing its fields directly, use one
* of the provided helper functions, or implement your own. For example,
@@ -51,6 +59,14 @@
*
* dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
*
+ * Instances of struct dma_buf_map do not have to be cleaned up, but
+ * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
+ * always refer to system memory.
+ *
+ * .. code-block:: c
+ *
+ * dma_buf_map_clear(&map);
+ *
* Test if a mapping is valid with either dma_buf_map_is_set() or
* dma_buf_map_is_null().
*
@@ -73,17 +89,19 @@
* if (dma_buf_map_is_equal(&sys_map, &io_map))
* // always false
*
- * Instances of struct dma_buf_map do not have to be cleaned up, but
- * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
- * always refer to system memory.
+ * A set up instance of struct dma_buf_map can be used to access or manipulate
+ * the buffer memory. Depending on the location of the memory, the provided
+ * helpers will pick the correct operations. Data can be copied into the memory
+ * with dma_buf_map_memcpy_to(). The address can be manipulated with
+ * dma_buf_map_incr().
*
- * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
- * actually independent from the dma-buf infrastructure. When sharing buffers
- * among devices, drivers have to know the location of the memory to access
- * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
- * solves this problem for dma-buf and its users. If other drivers or
- * sub-systems require similar functionality, the type could be generalized
- * and moved to a more prominent header file.
+ * .. code-block:: c
+ *
+ * const void *src = ...; // source buffer
+ * size_t len = ...; // length of src
+ *
+ * dma_buf_map_memcpy_to(&map, src, len);
+ * dma_buf_map_incr(&map, len); // go to first byte after the memcpy
*/
/**
@@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
}
}
+/**
+ * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
+ * @dst: The dma-buf mapping structure
+ * @src: The source buffer
+ * @len: The number of bytes in src
+ *
+ * Copies data into a dma-buf mapping. The source buffer is in system
+ * memory. Depending on the buffer's location, the helper picks the correct
+ * method of accessing the memory.
+ */
+static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
+{
+ if (dst->is_iomem)
+ memcpy_toio(dst->vaddr_iomem, src, len);
+ else
+ memcpy(dst->vaddr, src, len);
+}
+
+/**
+ * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
+ * @map: The dma-buf mapping structure
+ * @incr: The number of bytes to increment
+ *
+ * Increments the address stored in a dma-buf mapping. Depending on the
+ * buffer's location, the correct value will be updated.
+ */
+static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
+{
+ if (map->is_iomem)
+ map->vaddr_iomem += incr;
+ else
+ map->vaddr += incr;
+}
+
#endif /* __DMA_BUF_MAP_H__ */
--
2.28.0
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-15 12:38 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 12:38 UTC (permalink / raw)
To: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig, Thomas Zimmermann
At least sparc64 requires I/O-specific access to framebuffers. This
patch updates the fbdev console accordingly.
For drivers with direct access to the framebuffer memory, the callback
functions in struct fb_ops test for the type of memory and call the
respective fb_sys_ or fb_cfb_ functions.
For drivers that employ a shadow buffer, fbdev's blit function retrieves
the framebuffer address as struct dma_buf_map, and uses dma_buf_map
interfaces to access the buffer.
The bochs driver on sparc64 uses a workaround to flag the framebuffer as
I/O memory and avoid a HW exception. With the introduction of struct
dma_buf_map, this is no longer required. The patch removes the
respective code from both bochs and fbdev.
v4:
* move dma_buf_map changes into separate patch (Daniel)
* TODO list: comment on fbdev updates (Daniel)
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
Documentation/gpu/todo.rst | 19 ++-
drivers/gpu/drm/bochs/bochs_kms.c | 1 -
drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
include/drm/drm_mode_config.h | 12 --
4 files changed, 220 insertions(+), 29 deletions(-)
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 7e6fc3c04add..638b7f704339 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
------------------------------------------------
Most drivers can use drm_fbdev_generic_setup(). Drivers have to implement
-atomic modesetting and GEM vmap support. Current generic fbdev emulation
-expects the framebuffer in system memory (or system-like memory).
+atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
+expected the framebuffer in system memory or system-like memory. By employing
+struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
+as well.
Contact: Maintainer of the driver you plan to convert
Level: Intermediate
+Reimplement functions in drm_fbdev_fb_ops without fbdev
+-------------------------------------------------------
+
+A number of callback functions in drm_fbdev_fb_ops could benefit from
+being rewritten without dependencies on the fbdev module. Some of the
+helpers could further benefit from using struct dma_buf_map instead of
+raw pointers.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
+
+Level: Advanced
+
+
drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
-----------------------------------------------------------------
diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
index 13d0d04c4457..853081d186d5 100644
--- a/drivers/gpu/drm/bochs/bochs_kms.c
+++ b/drivers/gpu/drm/bochs/bochs_kms.c
@@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
bochs->dev->mode_config.preferred_depth = 24;
bochs->dev->mode_config.prefer_shadow = 0;
bochs->dev->mode_config.prefer_shadow_fbdev = 1;
- bochs->dev->mode_config.fbdev_use_iomem = true;
bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
bochs->dev->mode_config.funcs = &bochs_mode_funcs;
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 6212cd7cde1d..462b0c130ebb 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
}
static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
- struct drm_clip_rect *clip)
+ struct drm_clip_rect *clip,
+ struct dma_buf_map *dst)
{
struct drm_framebuffer *fb = fb_helper->fb;
unsigned int cpp = fb->format->cpp[0];
size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
void *src = fb_helper->fbdev->screen_buffer + offset;
- void *dst = fb_helper->buffer->map.vaddr + offset;
size_t len = (clip->x2 - clip->x1) * cpp;
unsigned int y;
- for (y = clip->y1; y < clip->y2; y++) {
- if (!fb_helper->dev->mode_config.fbdev_use_iomem)
- memcpy(dst, src, len);
- else
- memcpy_toio((void __iomem *)dst, src, len);
+ dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
+ for (y = clip->y1; y < clip->y2; y++) {
+ dma_buf_map_memcpy_to(dst, src, len);
+ dma_buf_map_incr(dst, fb->pitches[0]);
src += fb->pitches[0];
- dst += fb->pitches[0];
}
}
@@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
ret = drm_client_buffer_vmap(helper->buffer, &map);
if (ret)
return;
- drm_fb_helper_dirty_blit_real(helper, &clip_copy);
+ drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
}
+
if (helper->fb->funcs->dirty)
helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
&clip_copy, 1);
@@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
}
EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
+static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ unsigned long p = *ppos;
+ u8 *dst;
+ u8 __iomem *src;
+ int c, err = 0;
+ unsigned long total_size;
+ unsigned long alloc_size;
+ ssize_t ret = 0;
+
+ if (info->state != FBINFO_STATE_RUNNING)
+ return -EPERM;
+
+ total_size = info->screen_size;
+
+ if (total_size == 0)
+ total_size = info->fix.smem_len;
+
+ if (p >= total_size)
+ return 0;
+
+ if (count >= total_size)
+ count = total_size;
+
+ if (count + p > total_size)
+ count = total_size - p;
+
+ src = (u8 __iomem *)(info->screen_base + p);
+
+ alloc_size = min(count, PAGE_SIZE);
+
+ dst = kmalloc(alloc_size, GFP_KERNEL);
+ if (!dst)
+ return -ENOMEM;
+
+ while (count) {
+ c = min(count, alloc_size);
+
+ memcpy_fromio(dst, src, c);
+ if (copy_to_user(buf, dst, c)) {
+ err = -EFAULT;
+ break;
+ }
+
+ src += c;
+ *ppos += c;
+ buf += c;
+ ret += c;
+ count -= c;
+ }
+
+ kfree(dst);
+
+ if (err)
+ return err;
+
+ return ret;
+}
+
+static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ unsigned long p = *ppos;
+ u8 *src;
+ u8 __iomem *dst;
+ int c, err = 0;
+ unsigned long total_size;
+ unsigned long alloc_size;
+ ssize_t ret = 0;
+
+ if (info->state != FBINFO_STATE_RUNNING)
+ return -EPERM;
+
+ total_size = info->screen_size;
+
+ if (total_size == 0)
+ total_size = info->fix.smem_len;
+
+ if (p > total_size)
+ return -EFBIG;
+
+ if (count > total_size) {
+ err = -EFBIG;
+ count = total_size;
+ }
+
+ if (count + p > total_size) {
+ /*
+ * The framebuffer is too small. We do the
+ * copy operation, but return an error code
+ * afterwards. Taken from fbdev.
+ */
+ if (!err)
+ err = -ENOSPC;
+ count = total_size - p;
+ }
+
+ alloc_size = min(count, PAGE_SIZE);
+
+ src = kmalloc(alloc_size, GFP_KERNEL);
+ if (!src)
+ return -ENOMEM;
+
+ dst = (u8 __iomem *)(info->screen_base + p);
+
+ while (count) {
+ c = min(count, alloc_size);
+
+ if (copy_from_user(src, buf, c)) {
+ err = -EFAULT;
+ break;
+ }
+ memcpy_toio(dst, src, c);
+
+ dst += c;
+ *ppos += c;
+ buf += c;
+ ret += c;
+ count -= c;
+ }
+
+ kfree(src);
+
+ if (err)
+ return err;
+
+ return ret;
+}
+
/**
* drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
* @info: fbdev registered by the helper
@@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
return -ENODEV;
}
+static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ return drm_fb_helper_sys_read(info, buf, count, ppos);
+ else
+ return drm_fb_helper_cfb_read(info, buf, count, ppos);
+}
+
+static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ return drm_fb_helper_sys_write(info, buf, count, ppos);
+ else
+ return drm_fb_helper_cfb_write(info, buf, count, ppos);
+}
+
+static void drm_fbdev_fb_fillrect(struct fb_info *info,
+ const struct fb_fillrect *rect)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ drm_fb_helper_sys_fillrect(info, rect);
+ else
+ drm_fb_helper_cfb_fillrect(info, rect);
+}
+
+static void drm_fbdev_fb_copyarea(struct fb_info *info,
+ const struct fb_copyarea *area)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ drm_fb_helper_sys_copyarea(info, area);
+ else
+ drm_fb_helper_cfb_copyarea(info, area);
+}
+
+static void drm_fbdev_fb_imageblit(struct fb_info *info,
+ const struct fb_image *image)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ drm_fb_helper_sys_imageblit(info, image);
+ else
+ drm_fb_helper_cfb_imageblit(info, image);
+}
+
static const struct fb_ops drm_fbdev_fb_ops = {
.owner = THIS_MODULE,
DRM_FB_HELPER_DEFAULT_OPS,
@@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
.fb_release = drm_fbdev_fb_release,
.fb_destroy = drm_fbdev_fb_destroy,
.fb_mmap = drm_fbdev_fb_mmap,
- .fb_read = drm_fb_helper_sys_read,
- .fb_write = drm_fb_helper_sys_write,
- .fb_fillrect = drm_fb_helper_sys_fillrect,
- .fb_copyarea = drm_fb_helper_sys_copyarea,
- .fb_imageblit = drm_fb_helper_sys_imageblit,
+ .fb_read = drm_fbdev_fb_read,
+ .fb_write = drm_fbdev_fb_write,
+ .fb_fillrect = drm_fbdev_fb_fillrect,
+ .fb_copyarea = drm_fbdev_fb_copyarea,
+ .fb_imageblit = drm_fbdev_fb_imageblit,
};
static struct fb_deferred_io drm_fbdev_defio = {
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index 5ffbb4ed5b35..ab424ddd7665 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -877,18 +877,6 @@ struct drm_mode_config {
*/
bool prefer_shadow_fbdev;
- /**
- * @fbdev_use_iomem:
- *
- * Set to true if framebuffer reside in iomem.
- * When set to true memcpy_toio() is used when copying the framebuffer in
- * drm_fb_helper.drm_fb_helper_dirty_blit_real().
- *
- * FIXME: This should be replaced with a per-mapping is_iomem
- * flag (like ttm does), and then used everywhere in fbdev code.
- */
- bool fbdev_use_iomem;
-
/**
* @quirk_addfb_prefer_xbgr_30bpp:
*
--
2.28.0
+ const struct fb_copyarea *area)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ drm_fb_helper_sys_copyarea(info, area);
+ else
+ drm_fb_helper_cfb_copyarea(info, area);
+}
+
+static void drm_fbdev_fb_imageblit(struct fb_info *info,
+ const struct fb_image *image)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ drm_fb_helper_sys_imageblit(info, image);
+ else
+ drm_fb_helper_cfb_imageblit(info, image);
+}
+
static const struct fb_ops drm_fbdev_fb_ops = {
.owner = THIS_MODULE,
DRM_FB_HELPER_DEFAULT_OPS,
@@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
.fb_release = drm_fbdev_fb_release,
.fb_destroy = drm_fbdev_fb_destroy,
.fb_mmap = drm_fbdev_fb_mmap,
- .fb_read = drm_fb_helper_sys_read,
- .fb_write = drm_fb_helper_sys_write,
- .fb_fillrect = drm_fb_helper_sys_fillrect,
- .fb_copyarea = drm_fb_helper_sys_copyarea,
- .fb_imageblit = drm_fb_helper_sys_imageblit,
+ .fb_read = drm_fbdev_fb_read,
+ .fb_write = drm_fbdev_fb_write,
+ .fb_fillrect = drm_fbdev_fb_fillrect,
+ .fb_copyarea = drm_fbdev_fb_copyarea,
+ .fb_imageblit = drm_fbdev_fb_imageblit,
};
static struct fb_deferred_io drm_fbdev_defio = {
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index 5ffbb4ed5b35..ab424ddd7665 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -877,18 +877,6 @@ struct drm_mode_config {
*/
bool prefer_shadow_fbdev;
- /**
- * @fbdev_use_iomem:
- *
- * Set to true if framebuffer reside in iomem.
- * When set to true memcpy_toio() is used when copying the framebuffer in
- * drm_fb_helper.drm_fb_helper_dirty_blit_real().
- *
- * FIXME: This should be replaced with a per-mapping is_iomem
- * flag (like ttm does), and then used everywhere in fbdev code.
- */
- bool fbdev_use_iomem;
-
/**
* @quirk_addfb_prefer_xbgr_30bpp:
*
--
2.28.0
^ permalink raw reply related [flat|nested] 195+ messages in thread
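The new drm_fb_helper_cfb_read()/write() helpers in the patch above copy the framebuffer through an intermediate kernel buffer in page-sized chunks, because I/O memory must not be passed to copy_to_user()/copy_from_user() directly. The following is a minimal user-space sketch of that bounce-buffer pattern; plain memcpy() stands in for both memcpy_fromio() and copy_to_user(), CHUNK stands in for PAGE_SIZE, and fb_read_chunked() is a hypothetical name, not part of the patch.

```c
#include <stddef.h>
#include <string.h>

#define CHUNK 4 /* stands in for PAGE_SIZE */

/*
 * Read up to 'count' bytes from a simulated framebuffer 'fb' of
 * 'total_size' bytes, starting at offset 'pos', into 'dst'.
 * The copy goes through a small bounce buffer, chunk by chunk,
 * mirroring the loop in drm_fb_helper_cfb_read().
 */
static ptrdiff_t fb_read_chunked(unsigned char *dst, const unsigned char *fb,
                                 size_t total_size, size_t pos, size_t count)
{
    unsigned char bounce[CHUNK];
    ptrdiff_t ret = 0;

    if (pos >= total_size)
        return 0;                     /* read starts past the end */
    if (count > total_size - pos)
        count = total_size - pos;     /* clamp to the framebuffer size */

    while (count) {
        size_t c = count < CHUNK ? count : CHUNK;

        memcpy(bounce, fb + pos, c);  /* memcpy_fromio() in the kernel */
        memcpy(dst + ret, bounce, c); /* copy_to_user() in the kernel */

        pos += c;
        ret += c;
        count -= c;
    }
    return ret;
}
```

The write path in the patch is the mirror image: copy_from_user() fills the bounce buffer, then memcpy_toio() pushes it to the framebuffer, with the same clamping against total_size beforehand.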
+ size_t count, loff_t *ppos)
+{
+ unsigned long p = *ppos;
+ u8 *src;
+ u8 __iomem *dst;
+ int c, err = 0;
+ unsigned long total_size;
+ unsigned long alloc_size;
+ ssize_t ret = 0;
+
+ if (info->state != FBINFO_STATE_RUNNING)
+ return -EPERM;
+
+ total_size = info->screen_size;
+
+ if (total_size == 0)
+ total_size = info->fix.smem_len;
+
+ if (p > total_size)
+ return -EFBIG;
+
+ if (count > total_size) {
+ err = -EFBIG;
+ count = total_size;
+ }
+
+ if (count + p > total_size) {
+ /*
+ * The framebuffer is too small. We do the
+ * copy operation, but return an error code
+ * afterwards. Taken from fbdev.
+ */
+ if (!err)
+ err = -ENOSPC;
+ count = total_size - p;
+ }
+
+ alloc_size = min(count, PAGE_SIZE);
+
+ src = kmalloc(alloc_size, GFP_KERNEL);
+ if (!src)
+ return -ENOMEM;
+
+ dst = (u8 __iomem *)(info->screen_base + p);
+
+ while (count) {
+ c = min(count, alloc_size);
+
+ if (copy_from_user(src, buf, c)) {
+ err = -EFAULT;
+ break;
+ }
+ memcpy_toio(dst, src, c);
+
+ dst += c;
+ *ppos += c;
+ buf += c;
+ ret += c;
+ count -= c;
+ }
+
+ kfree(src);
+
+ if (err)
+ return err;
+
+ return ret;
+}
+
/**
* drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
* @info: fbdev registered by the helper
@@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
return -ENODEV;
}
+static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ return drm_fb_helper_sys_read(info, buf, count, ppos);
+ else
+ return drm_fb_helper_cfb_read(info, buf, count, ppos);
+}
+
+static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ return drm_fb_helper_sys_write(info, buf, count, ppos);
+ else
+ return drm_fb_helper_cfb_write(info, buf, count, ppos);
+}
+
+static void drm_fbdev_fb_fillrect(struct fb_info *info,
+ const struct fb_fillrect *rect)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ drm_fb_helper_sys_fillrect(info, rect);
+ else
+ drm_fb_helper_cfb_fillrect(info, rect);
+}
+
+static void drm_fbdev_fb_copyarea(struct fb_info *info,
+ const struct fb_copyarea *area)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ drm_fb_helper_sys_copyarea(info, area);
+ else
+ drm_fb_helper_cfb_copyarea(info, area);
+}
+
+static void drm_fbdev_fb_imageblit(struct fb_info *info,
+ const struct fb_image *image)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_client_buffer *buffer = fb_helper->buffer;
+
+ if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+ drm_fb_helper_sys_imageblit(info, image);
+ else
+ drm_fb_helper_cfb_imageblit(info, image);
+}
+
static const struct fb_ops drm_fbdev_fb_ops = {
.owner = THIS_MODULE,
DRM_FB_HELPER_DEFAULT_OPS,
@@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
.fb_release = drm_fbdev_fb_release,
.fb_destroy = drm_fbdev_fb_destroy,
.fb_mmap = drm_fbdev_fb_mmap,
- .fb_read = drm_fb_helper_sys_read,
- .fb_write = drm_fb_helper_sys_write,
- .fb_fillrect = drm_fb_helper_sys_fillrect,
- .fb_copyarea = drm_fb_helper_sys_copyarea,
- .fb_imageblit = drm_fb_helper_sys_imageblit,
+ .fb_read = drm_fbdev_fb_read,
+ .fb_write = drm_fbdev_fb_write,
+ .fb_fillrect = drm_fbdev_fb_fillrect,
+ .fb_copyarea = drm_fbdev_fb_copyarea,
+ .fb_imageblit = drm_fbdev_fb_imageblit,
};
static struct fb_deferred_io drm_fbdev_defio = {
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index 5ffbb4ed5b35..ab424ddd7665 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -877,18 +877,6 @@ struct drm_mode_config {
*/
bool prefer_shadow_fbdev;
- /**
- * @fbdev_use_iomem:
- *
- * Set to true if framebuffer reside in iomem.
- * When set to true memcpy_toio() is used when copying the framebuffer in
- * drm_fb_helper.drm_fb_helper_dirty_blit_real().
- *
- * FIXME: This should be replaced with a per-mapping is_iomem
- * flag (like ttm does), and then used everywhere in fbdev code.
- */
- bool fbdev_use_iomem;
-
/**
* @quirk_addfb_prefer_xbgr_30bpp:
*
--
2.28.0
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply related [flat|nested] 195+ messages in thread
* Re: [PATCH v4 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function
2020-10-15 12:37 ` Thomas Zimmermann
` (3 preceding siblings ...)
(?)
@ 2020-10-15 13:57 ` Christian König
-1 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 13:57 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, christian.koenig, kraxel, l.stach,
linux+etnaviv, christian.gmeiner, inki.dae, jy0922.shim,
sw0312.kim, kyungmin.park, kgene, krzk, yuq825, bskeggs, robh,
tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc, heiko,
hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers,
linus.walleij, melissa.srw, chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, Daniel Vetter, etnaviv,
amd-gfx, virtualization, linaro-mm-sig, linux-rockchip,
dri-devel, xen-devel, spice-devel, linux-arm-kernel, linux-media
Am 15.10.20 um 14:37 schrieb Thomas Zimmermann:
> The parameters map and is_iomem always have the same value. Remove them
> to prepare the function for conversion to struct dma_buf_map.
>
> v4:
> * don't check for !kmap->virtual; will always be false
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++--------------
> 1 file changed, 4 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 3213429f8444..2d5ed30518f1 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -382,32 +382,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
> }
> EXPORT_SYMBOL(drm_gem_vram_unpin);
>
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> - bool map, bool *is_iomem)
> +static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
> {
> int ret;
> struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> + bool is_iomem;
>
> if (gbo->kmap_use_count > 0)
> goto out;
>
> - if (kmap->virtual || !map)
> - goto out;
> -
> ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
> if (ret)
> return ERR_PTR(ret);
>
> out:
> - if (!kmap->virtual) {
> - if (is_iomem)
> - *is_iomem = false;
> - return NULL; /* not mapped; don't increment ref */
> - }
> ++gbo->kmap_use_count;
> - if (is_iomem)
> - return ttm_kmap_obj_virtual(kmap, is_iomem);
> - return kmap->virtual;
> + return ttm_kmap_obj_virtual(kmap, &is_iomem);
> }
>
> static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> @@ -452,7 +442,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
> ret = drm_gem_vram_pin_locked(gbo, 0);
> if (ret)
> goto err_ttm_bo_unreserve;
> - base = drm_gem_vram_kmap_locked(gbo, true, NULL);
> + base = drm_gem_vram_kmap_locked(gbo);
> if (IS_ERR(base)) {
> ret = PTR_ERR(base);
> goto err_drm_gem_vram_unpin_locked;
^ permalink raw reply [flat|nested] 195+ messages in thread
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
@ 2020-10-15 13:58 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 13:58 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig
Am 15.10.20 um 14:37 schrieb Thomas Zimmermann:
> The function drm_gem_cma_prime_vunmap() is empty. Remove it before
> changing the interface to use struct dma_buf_map.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/drm_gem_cma_helper.c | 17 -----------------
> drivers/gpu/drm/vc4/vc4_bo.c | 1 -
> include/drm/drm_gem_cma_helper.h | 1 -
> 3 files changed, 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 2165633c9b9e..d527485ea0b7 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
> }
> EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>
> -/**
> - * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
> - * address space
> - * @obj: GEM object
> - * @vaddr: kernel virtual address where the CMA GEM object was mapped
> - *
> - * This function removes a buffer exported via DRM PRIME from the kernel's
> - * virtual address space. This is a no-op because CMA buffers cannot be
> - * unmapped from kernel space. Drivers using the CMA helpers should set this
> - * as their &drm_gem_object_funcs.vunmap callback.
> - */
> -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - /* Nothing to do */
> -}
> -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
> -
> static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
> .free = drm_gem_cma_free_object,
> .print_info = drm_gem_cma_print_info,
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index f432278173cd..557f0d1e6437 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
> .export = vc4_prime_export,
> .get_sg_table = drm_gem_cma_prime_get_sg_table,
> .vmap = vc4_prime_vmap,
> - .vunmap = drm_gem_cma_prime_vunmap,
> .vm_ops = &vc4_vm_ops,
> };
>
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index 2bfa2502607a..a064b0d1c480 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
> int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>
> struct drm_gem_object *
> drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
@ 2020-10-15 13:59 ` Christian König
-1 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 13:59 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig
Am 15.10.20 um 14:37 schrieb Thomas Zimmermann:
> The function etnaviv_gem_prime_vunmap() is empty. Remove it before
> changing the interface to use struct dma_buf_map.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/etnaviv/etnaviv_drv.h | 1 -
> drivers/gpu/drm/etnaviv/etnaviv_gem.c | 1 -
> drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 -----
> 3 files changed, 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 914f0867ff71..9682c26d89bb 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
> struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
> void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> index 67d9a2b9ea6a..bbd235473645 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> @@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = {
> .unpin = etnaviv_gem_prime_unpin,
> .get_sg_table = etnaviv_gem_prime_get_sg_table,
> .vmap = etnaviv_gem_prime_vmap,
> - .vunmap = etnaviv_gem_prime_vunmap,
> .vm_ops = &vm_ops,
> };
>
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 135fbff6fecf..a6d9932a32ae 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
> return etnaviv_gem_vmap(obj);
> }
>
> -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - /* TODO msm_gem_vunmap() */
> -}
> -
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma)
> {
* Re: [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
2020-10-15 12:38 ` [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}() Thomas Zimmermann
@ 2020-10-15 14:00 ` Christian König
-1 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:00 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig
On 15.10.20 at 14:38, Thomas Zimmermann wrote:
> The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
> them before changing the interface to use struct dma_buf_map. As a side
> effect of removing drm_gem_prime_vmap(), the error code changes from
> ENOMEM to EOPNOTSUPP.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
> drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 --
> 2 files changed, 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index e7a6eb96f692..13a35623ac04 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
> static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
> .free = exynos_drm_gem_free_object,
> .get_sg_table = exynos_drm_gem_prime_get_sg_table,
> - .vmap = exynos_drm_gem_prime_vmap,
> - .vunmap = exynos_drm_gem_prime_vunmap,
> .vm_ops = &exynos_drm_gem_vm_ops,
> };
>
> @@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
> return &exynos_gem->base;
> }
>
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - return NULL;
> -}
> -
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - /* Nothing to do */
> -}
> -
> int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma)
> {
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> index 74e926abeff0..a23272fb96fb 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> @@ -107,8 +107,6 @@ struct drm_gem_object *
> exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
* Re: [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
@ 2020-10-15 14:00 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:00 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig
Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
> them before changing the interface to use struct drm_buf_map. As a side
> effect of removing drm_gem_prime_vmap(), the error code changes from
> ENOMEM to EOPNOTSUPP.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
> drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 --
> 2 files changed, 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index e7a6eb96f692..13a35623ac04 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
> static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
> .free = exynos_drm_gem_free_object,
> .get_sg_table = exynos_drm_gem_prime_get_sg_table,
> - .vmap = exynos_drm_gem_prime_vmap,
> - .vunmap = exynos_drm_gem_prime_vunmap,
> .vm_ops = &exynos_drm_gem_vm_ops,
> };
>
> @@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
> return &exynos_gem->base;
> }
>
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - return NULL;
> -}
> -
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - /* Nothing to do */
> -}
> -
> int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma)
> {
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> index 74e926abeff0..a23272fb96fb 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> @@ -107,8 +107,6 @@ struct drm_gem_object *
> exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
@ 2020-10-15 14:00 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:00 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
> them before changing the interface to use struct drm_buf_map. As a side
> effect of removing drm_gem_prime_vmap(), the error code changes from
> ENOMEM to EOPNOTSUPP.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
> drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 --
> 2 files changed, 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index e7a6eb96f692..13a35623ac04 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
> static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
> .free = exynos_drm_gem_free_object,
> .get_sg_table = exynos_drm_gem_prime_get_sg_table,
> - .vmap = exynos_drm_gem_prime_vmap,
> - .vunmap = exynos_drm_gem_prime_vunmap,
> .vm_ops = &exynos_drm_gem_vm_ops,
> };
>
> @@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
> return &exynos_gem->base;
> }
>
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - return NULL;
> -}
> -
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - /* Nothing to do */
> -}
> -
> int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma)
> {
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> index 74e926abeff0..a23272fb96fb 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> @@ -107,8 +107,6 @@ struct drm_gem_object *
> exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
@ 2020-10-15 14:00 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:00 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
> them before changing the interface to use struct drm_buf_map. As a side
> effect of removing drm_gem_prime_vmap(), the error code changes from
> ENOMEM to EOPNOTSUPP.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
> drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 --
> 2 files changed, 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index e7a6eb96f692..13a35623ac04 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
> static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
> .free = exynos_drm_gem_free_object,
> .get_sg_table = exynos_drm_gem_prime_get_sg_table,
> - .vmap = exynos_drm_gem_prime_vmap,
> - .vunmap = exynos_drm_gem_prime_vunmap,
> .vm_ops = &exynos_drm_gem_vm_ops,
> };
>
> @@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
> return &exynos_gem->base;
> }
>
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - return NULL;
> -}
> -
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - /* Nothing to do */
> -}
> -
> int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma)
> {
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> index 74e926abeff0..a23272fb96fb 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> @@ -107,8 +107,6 @@ struct drm_gem_object *
> exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
@ 2020-10-15 14:00 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:00 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
> them before changing the interface to use struct drm_buf_map. As a side
> effect of removing drm_gem_prime_vmap(), the error code changes from
> ENOMEM to EOPNOTSUPP.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
> drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 --
> 2 files changed, 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index e7a6eb96f692..13a35623ac04 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
> static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
> .free = exynos_drm_gem_free_object,
> .get_sg_table = exynos_drm_gem_prime_get_sg_table,
> - .vmap = exynos_drm_gem_prime_vmap,
> - .vunmap = exynos_drm_gem_prime_vunmap,
> .vm_ops = &exynos_drm_gem_vm_ops,
> };
>
> @@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
> return &exynos_gem->base;
> }
>
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - return NULL;
> -}
> -
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - /* Nothing to do */
> -}
> -
> int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma)
> {
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> index 74e926abeff0..a23272fb96fb 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> @@ -107,8 +107,6 @@ struct drm_gem_object *
> exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
2020-10-15 12:38 ` Thomas Zimmermann
@ 2020-10-15 14:08 ` Christian König
-1 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:08 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig
Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> address space. The mapping's address is returned as struct dma_buf_map.
> Each function is a simplified version of TTM's existing kmap code. Both
> functions respect the memory's location and/or writecombine flags.
>
> On top of TTM's functions, the GEM TTM helpers get drm_gem_ttm_{vmap,vunmap}(),
> two helpers that convert a GEM object into the TTM BO and forward the call
> to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
> object callbacks.
>
> v4:
> * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> Christian)
A bunch of minor comments below, but overall this looks very solid to me.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> include/drm/drm_gem_ttm_helper.h | 6 +++
> include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> include/linux/dma-buf-map.h | 20 ++++++++
> 5 files changed, 164 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> index 0e4fb9ba43ad..db4c14d78a30 100644
> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> }
> EXPORT_SYMBOL(drm_gem_ttm_print_info);
>
> +/**
> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> + * @gem: GEM object.
> + * @map: [out] returns the dma-buf mapping.
> + *
> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> + * &drm_gem_object_funcs.vmap callback.
> + *
> + * Returns:
> + * 0 on success, or a negative errno code otherwise.
> + */
> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map)
> +{
> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> +
> + return ttm_bo_vmap(bo, map);
> +
> +}
> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> +
> +/**
> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> + * @gem: GEM object.
> + * @map: dma-buf mapping.
> + *
> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> + * &drm_gem_object_funcs.vunmap callback.
> + */
> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map)
> +{
> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> +
> + ttm_bo_vunmap(bo, map);
> +}
> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> +
> /**
> * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> * @gem: GEM object.
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index bdee4df1f3f2..80c42c774c7d 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -32,6 +32,7 @@
> #include <drm/ttm/ttm_bo_driver.h>
> #include <drm/ttm/ttm_placement.h>
> #include <drm/drm_vma_manager.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/io.h>
> #include <linux/highmem.h>
> #include <linux/wait.h>
> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> }
> EXPORT_SYMBOL(ttm_bo_kunmap);
>
> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> +{
> + struct ttm_resource *mem = &bo->mem;
> + int ret;
> +
> + ret = ttm_mem_io_reserve(bo->bdev, mem);
> + if (ret)
> + return ret;
> +
> + if (mem->bus.is_iomem) {
> + void __iomem *vaddr_iomem;
> + unsigned long size = bo->num_pages << PAGE_SHIFT;
Please use uint64_t here and make sure to cast bo->num_pages before
shifting.
We have a unit test that allocates an 8GB BO, and that should work on a
32-bit machine as well :)
> +
> + if (mem->bus.addr)
> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> + else if (mem->placement & TTM_PL_FLAG_WC)
I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
mem->bus.caching enum as replacement.
> + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> + else
> + vaddr_iomem = ioremap(mem->bus.offset, size);
> +
> + if (!vaddr_iomem)
> + return -ENOMEM;
> +
> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> +
> + } else {
> + struct ttm_operation_ctx ctx = {
> + .interruptible = false,
> + .no_wait_gpu = false
> + };
> + struct ttm_tt *ttm = bo->ttm;
> + pgprot_t prot;
> + void *vaddr;
> +
> + BUG_ON(!ttm);
I think we can drop this, populate will just crash badly anyway.
> +
> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> + if (ret)
> + return ret;
> +
> + /*
> + * We need to use vmap to get the desired page protection
> + * or to make the buffer object look contiguous.
> + */
> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
The calling convention has changed on drm-misc-next as well, but should
be trivial to adapt.
Regards,
Christian.
> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> + if (!vaddr)
> + return -ENOMEM;
> +
> + dma_buf_map_set_vaddr(map, vaddr);
> + }
> +
> + return 0;
> +}
> +EXPORT_SYMBOL(ttm_bo_vmap);
> +
> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> +{
> + if (dma_buf_map_is_null(map))
> + return;
> +
> + if (map->is_iomem)
> + iounmap(map->vaddr_iomem);
> + else
> + vunmap(map->vaddr);
> + dma_buf_map_clear(map);
> +
> + ttm_mem_io_free(bo->bdev, &bo->mem);
> +}
> +EXPORT_SYMBOL(ttm_bo_vunmap);
> +
> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> bool dst_use_tt)
> {
> diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> index 118cef76f84f..7c6d874910b8 100644
> --- a/include/drm/drm_gem_ttm_helper.h
> +++ b/include/drm/drm_gem_ttm_helper.h
> @@ -10,11 +10,17 @@
> #include <drm/ttm/ttm_bo_api.h>
> #include <drm/ttm/ttm_bo_driver.h>
>
> +struct dma_buf_map;
> +
> #define drm_gem_ttm_of_gem(gem_obj) \
> container_of(gem_obj, struct ttm_buffer_object, base)
>
> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> const struct drm_gem_object *gem);
> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map);
> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map);
> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> struct vm_area_struct *vma);
>
> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> index 37102e45e496..2c59a785374c 100644
> --- a/include/drm/ttm/ttm_bo_api.h
> +++ b/include/drm/ttm/ttm_bo_api.h
> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>
> struct ttm_bo_device;
>
> +struct dma_buf_map;
> +
> struct drm_mm_node;
>
> struct ttm_placement;
> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
> */
> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>
> +/**
> + * ttm_bo_vmap
> + *
> + * @bo: The buffer object.
> + * @map: pointer to a struct dma_buf_map representing the map.
> + *
> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> + * data in the buffer object. The parameter @map returns the virtual
> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> + *
> + * Returns
> + * -ENOMEM: Out of memory.
> + * -EINVAL: Invalid range.
> + */
> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> +
> +/**
> + * ttm_bo_vunmap
> + *
> + * @bo: The buffer object.
> + * @map: Object describing the map to unmap.
> + *
> + * Unmaps a kernel map set up by ttm_bo_vmap().
> + */
> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> +
> /**
> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> *
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index fd1aba545fdf..2e8bbecb5091 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -45,6 +45,12 @@
> *
> * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> *
> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> + *
> + * .. code-block:: c
> + *
> + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> + *
> * Test if a mapping is valid with either dma_buf_map_is_set() or
> * dma_buf_map_is_null().
> *
> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> map->is_iomem = false;
> }
>
> +/**
> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> + * @map: The dma-buf mapping structure
> + * @vaddr_iomem: An I/O-memory address
> + *
> + * Sets the address and the I/O-memory flag.
> + */
> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> + void __iomem *vaddr_iomem)
> +{
> + map->vaddr_iomem = vaddr_iomem;
> + map->is_iomem = true;
> +}
> +
> /**
> * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> * @lhs: The dma-buf mapping structure
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 14:08 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:08 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> address space. The mapping's address is returned as struct dma_buf_map.
> Each function is a simplified version of TTM's existing kmap code. Both
> functions respect the memory's location ani/or writecombine flags.
>
> On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> two helpers that convert a GEM object into the TTM BO and forward the call
> to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object
> callbacks.
>
> v4:
> * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> Christian)
Bunch of minor comments below, but over all look very solid to me.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> include/drm/drm_gem_ttm_helper.h | 6 +++
> include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> include/linux/dma-buf-map.h | 20 ++++++++
> 5 files changed, 164 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> index 0e4fb9ba43ad..db4c14d78a30 100644
> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> }
> EXPORT_SYMBOL(drm_gem_ttm_print_info);
>
> +/**
> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> + * @gem: GEM object.
> + * @map: [out] returns the dma-buf mapping.
> + *
> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> + * &drm_gem_object_funcs.vmap callback.
> + *
> + * Returns:
> + * 0 on success, or a negative errno code otherwise.
> + */
> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map)
> +{
> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> +
> + return ttm_bo_vmap(bo, map);
> +
> +}
> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> +
> +/**
> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> + * @gem: GEM object.
> + * @map: dma-buf mapping.
> + *
> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> + * &drm_gem_object_funcs.vmap callback.
> + */
> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map)
> +{
> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> +
> + ttm_bo_vunmap(bo, map);
> +}
> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> +
> /**
> * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> * @gem: GEM object.
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index bdee4df1f3f2..80c42c774c7d 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -32,6 +32,7 @@
> #include <drm/ttm/ttm_bo_driver.h>
> #include <drm/ttm/ttm_placement.h>
> #include <drm/drm_vma_manager.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/io.h>
> #include <linux/highmem.h>
> #include <linux/wait.h>
> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> }
> EXPORT_SYMBOL(ttm_bo_kunmap);
>
> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> +{
> + struct ttm_resource *mem = &bo->mem;
> + int ret;
> +
> + ret = ttm_mem_io_reserve(bo->bdev, mem);
> + if (ret)
> + return ret;
> +
> + if (mem->bus.is_iomem) {
> + void __iomem *vaddr_iomem;
> + unsigned long size = bo->num_pages << PAGE_SHIFT;
Please use uint64_t here and make sure to cast bo->num_pages before
shifting.
We have an unit tests of allocating a 8GB BO and that should work on a
32bit machine as well :)
> +
> + if (mem->bus.addr)
> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> + else if (mem->placement & TTM_PL_FLAG_WC)
I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
mem->bus.caching enum as replacement.
> + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> + else
> + vaddr_iomem = ioremap(mem->bus.offset, size);
> +
> + if (!vaddr_iomem)
> + return -ENOMEM;
> +
> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> +
> + } else {
> + struct ttm_operation_ctx ctx = {
> + .interruptible = false,
> + .no_wait_gpu = false
> + };
> + struct ttm_tt *ttm = bo->ttm;
> + pgprot_t prot;
> + void *vaddr;
> +
> + BUG_ON(!ttm);
I think we can drop this, populate will just crash badly anyway.
> +
> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> + if (ret)
> + return ret;
> +
> + /*
> + * We need to use vmap to get the desired page protection
> + * or to make the buffer object look contiguous.
> + */
> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
The calling convention has changed on drm-misc-next as well, but should
be trivial to adapt.
Regards,
Christian.
> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> + if (!vaddr)
> + return -ENOMEM;
> +
> + dma_buf_map_set_vaddr(map, vaddr);
> + }
> +
> + return 0;
> +}
> +EXPORT_SYMBOL(ttm_bo_vmap);
> +
> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> +{
> + if (dma_buf_map_is_null(map))
> + return;
> +
> + if (map->is_iomem)
> + iounmap(map->vaddr_iomem);
> + else
> + vunmap(map->vaddr);
> + dma_buf_map_clear(map);
> +
> + ttm_mem_io_free(bo->bdev, &bo->mem);
> +}
> +EXPORT_SYMBOL(ttm_bo_vunmap);
> +
> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> bool dst_use_tt)
> {
> diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> index 118cef76f84f..7c6d874910b8 100644
> --- a/include/drm/drm_gem_ttm_helper.h
> +++ b/include/drm/drm_gem_ttm_helper.h
> @@ -10,11 +10,17 @@
> #include <drm/ttm/ttm_bo_api.h>
> #include <drm/ttm/ttm_bo_driver.h>
>
> +struct dma_buf_map;
> +
> #define drm_gem_ttm_of_gem(gem_obj) \
> container_of(gem_obj, struct ttm_buffer_object, base)
>
> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> const struct drm_gem_object *gem);
> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map);
> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map);
> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> struct vm_area_struct *vma);
>
> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> index 37102e45e496..2c59a785374c 100644
> --- a/include/drm/ttm/ttm_bo_api.h
> +++ b/include/drm/ttm/ttm_bo_api.h
> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>
> struct ttm_bo_device;
>
> +struct dma_buf_map;
> +
> struct drm_mm_node;
>
> struct ttm_placement;
> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
> */
> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>
> +/**
> + * ttm_bo_vmap
> + *
> + * @bo: The buffer object.
> + * @map: pointer to a struct dma_buf_map representing the map.
> + *
> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> + * data in the buffer object. The parameter @map returns the virtual
> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> + *
> + * Returns
> + * -ENOMEM: Out of memory.
> + * -EINVAL: Invalid range.
> + */
> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> +
> +/**
> + * ttm_bo_vunmap
> + *
> + * @bo: The buffer object.
> + * @map: Object describing the map to unmap.
> + *
> + * Unmaps a kernel map set up by ttm_bo_vmap().
> + */
> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> +
> /**
> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> *
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index fd1aba545fdf..2e8bbecb5091 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -45,6 +45,12 @@
> *
> * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> *
> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> + *
> + * .. code-block:: c
> + *
> + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> + *
> * Test if a mapping is valid with either dma_buf_map_is_set() or
> * dma_buf_map_is_null().
> *
> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> map->is_iomem = false;
> }
>
> +/**
> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> + * @map: The dma-buf mapping structure
> + * @vaddr_iomem: An I/O-memory address
> + *
> + * Sets the address and the I/O-memory flag.
> + */
> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> + void __iomem *vaddr_iomem)
> +{
> + map->vaddr_iomem = vaddr_iomem;
> + map->is_iomem = true;
> +}
> +
> /**
> * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> * @lhs: The dma-buf mapping structure
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 14:08 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:08 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> address space. The mapping's address is returned as struct dma_buf_map.
> Each function is a simplified version of TTM's existing kmap code. Both
> functions respect the memory's location and/or writecombine flags.
>
> On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> two helpers that convert a GEM object into the TTM BO and forward the call
> to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
> object callbacks.
>
> v4:
> * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> Christian)
A bunch of minor comments below, but overall this looks very solid to me.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> include/drm/drm_gem_ttm_helper.h | 6 +++
> include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> include/linux/dma-buf-map.h | 20 ++++++++
> 5 files changed, 164 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> index 0e4fb9ba43ad..db4c14d78a30 100644
> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> }
> EXPORT_SYMBOL(drm_gem_ttm_print_info);
>
> +/**
> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> + * @gem: GEM object.
> + * @map: [out] returns the dma-buf mapping.
> + *
> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> + * &drm_gem_object_funcs.vmap callback.
> + *
> + * Returns:
> + * 0 on success, or a negative errno code otherwise.
> + */
> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map)
> +{
> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> +
> + return ttm_bo_vmap(bo, map);
> +}
> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> +
> +/**
> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> + * @gem: GEM object.
> + * @map: dma-buf mapping.
> + *
> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> + * &drm_gem_object_funcs.vunmap callback.
> + */
> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map)
> +{
> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> +
> + ttm_bo_vunmap(bo, map);
> +}
> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> +
> /**
> * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> * @gem: GEM object.
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index bdee4df1f3f2..80c42c774c7d 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -32,6 +32,7 @@
> #include <drm/ttm/ttm_bo_driver.h>
> #include <drm/ttm/ttm_placement.h>
> #include <drm/drm_vma_manager.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/io.h>
> #include <linux/highmem.h>
> #include <linux/wait.h>
> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> }
> EXPORT_SYMBOL(ttm_bo_kunmap);
>
> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> +{
> + struct ttm_resource *mem = &bo->mem;
> + int ret;
> +
> + ret = ttm_mem_io_reserve(bo->bdev, mem);
> + if (ret)
> + return ret;
> +
> + if (mem->bus.is_iomem) {
> + void __iomem *vaddr_iomem;
> + unsigned long size = bo->num_pages << PAGE_SHIFT;
Please use uint64_t here and make sure to cast bo->num_pages before
shifting.
We have a unit test that allocates an 8GB BO, and that should work on a
32-bit machine as well :)
> +
> + if (mem->bus.addr)
> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> + else if (mem->placement & TTM_PL_FLAG_WC)
I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
mem->bus.caching enum as replacement.
> + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> + else
> + vaddr_iomem = ioremap(mem->bus.offset, size);
> +
> + if (!vaddr_iomem)
> + return -ENOMEM;
> +
> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> +
> + } else {
> + struct ttm_operation_ctx ctx = {
> + .interruptible = false,
> + .no_wait_gpu = false
> + };
> + struct ttm_tt *ttm = bo->ttm;
> + pgprot_t prot;
> + void *vaddr;
> +
> + BUG_ON(!ttm);
I think we can drop this, populate will just crash badly anyway.
> +
> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> + if (ret)
> + return ret;
> +
> + /*
> + * We need to use vmap to get the desired page protection
> + * or to make the buffer object look contiguous.
> + */
> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
The calling convention has changed on drm-misc-next as well, but should
be trivial to adapt.
Regards,
Christian.
> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> + if (!vaddr)
> + return -ENOMEM;
> +
> + dma_buf_map_set_vaddr(map, vaddr);
> + }
> +
> + return 0;
> +}
> +EXPORT_SYMBOL(ttm_bo_vmap);
> +
> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> +{
> + if (dma_buf_map_is_null(map))
> + return;
> +
> + if (map->is_iomem)
> + iounmap(map->vaddr_iomem);
> + else
> + vunmap(map->vaddr);
> + dma_buf_map_clear(map);
> +
> + ttm_mem_io_free(bo->bdev, &bo->mem);
> +}
> +EXPORT_SYMBOL(ttm_bo_vunmap);
> +
> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> bool dst_use_tt)
> {
> diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> index 118cef76f84f..7c6d874910b8 100644
> --- a/include/drm/drm_gem_ttm_helper.h
> +++ b/include/drm/drm_gem_ttm_helper.h
> @@ -10,11 +10,17 @@
> #include <drm/ttm/ttm_bo_api.h>
> #include <drm/ttm/ttm_bo_driver.h>
>
> +struct dma_buf_map;
> +
> #define drm_gem_ttm_of_gem(gem_obj) \
> container_of(gem_obj, struct ttm_buffer_object, base)
>
> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> const struct drm_gem_object *gem);
> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map);
> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> + struct dma_buf_map *map);
> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> struct vm_area_struct *vma);
>
> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> index 37102e45e496..2c59a785374c 100644
> --- a/include/drm/ttm/ttm_bo_api.h
> +++ b/include/drm/ttm/ttm_bo_api.h
> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>
> struct ttm_bo_device;
>
> +struct dma_buf_map;
> +
> struct drm_mm_node;
>
> struct ttm_placement;
> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
> */
> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>
> +/**
> + * ttm_bo_vmap
> + *
> + * @bo: The buffer object.
> + * @map: pointer to a struct dma_buf_map representing the map.
> + *
> + * Sets up a kernel virtual mapping, using ioremap or vmap, of the
> + * data in the buffer object. The parameter @map returns the virtual
> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> + *
> + * Returns:
> + * -ENOMEM: Out of memory.
> + * -EINVAL: Invalid range.
> + */
> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> +
> +/**
> + * ttm_bo_vunmap
> + *
> + * @bo: The buffer object.
> + * @map: Object describing the map to unmap.
> + *
> + * Unmaps a kernel map set up by ttm_bo_vmap().
> + */
> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> +
> /**
> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> *
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index fd1aba545fdf..2e8bbecb5091 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -45,6 +45,12 @@
> *
> * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> *
> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> + *
> + * .. code-block:: c
> + *
> + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> + *
> * Test if a mapping is valid with either dma_buf_map_is_set() or
> * dma_buf_map_is_null().
> *
> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> map->is_iomem = false;
> }
>
> +/**
> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> + * @map: The dma-buf mapping structure
> + * @vaddr_iomem: An I/O-memory address
> + *
> + * Sets the address and the I/O-memory flag.
> + */
> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> + void __iomem *vaddr_iomem)
> +{
> + map->vaddr_iomem = vaddr_iomem;
> + map->is_iomem = true;
> +}
> +
> /**
> * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> * @lhs: The dma-buf mapping structure
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
@ 2020-10-15 14:21 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:21 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: dri-devel, amd-gfx, virtualization, etnaviv, linux-arm-kernel,
linux-samsung-soc, lima, nouveau, spice-devel, linux-rockchip,
xen-devel, linux-media, linaro-mm-sig
Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> This patch replaces the vmap/vunmap's use of raw pointers in GEM object
> functions with instances of struct dma_buf_map. GEM backends are
> converted as well. For most of them, this simply changes the returned type.
>
> TTM-based drivers now return information about the location of the memory,
> either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
> et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
> implementing their own vmap callbacks.
>
> v4:
> * use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
> * fix a trailing { in drm_gem_vmap()
> * remove several empty functions instead of converting them (Daniel)
> * comment uses of raw pointers with a TODO (Daniel)
> * TODO list: convert more helpers to use struct dma_buf_map
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
The amdgpu changes look good to me, but I can't fully judge the other stuff.
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> Documentation/gpu/todo.rst | 18 ++++
> drivers/gpu/drm/Kconfig | 2 +
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
> drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
> drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
> drivers/gpu/drm/ast/ast_drv.h | 7 +-
> drivers/gpu/drm/drm_gem.c | 23 +++--
> drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
> drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
> drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
> drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
> drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
> drivers/gpu/drm/lima/lima_gem.c | 6 +-
> drivers/gpu/drm/lima/lima_sched.c | 11 +-
> drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
> drivers/gpu/drm/nouveau/Kconfig | 1 +
> drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
> drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
> drivers/gpu/drm/qxl/qxl_display.c | 11 +-
> drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
> drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
> drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
> drivers/gpu/drm/qxl/qxl_object.h | 2 +-
> drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
> drivers/gpu/drm/radeon/radeon.h | 1 -
> drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
> drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
> drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
> drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
> drivers/gpu/drm/tiny/cirrus.c | 10 +-
> drivers/gpu/drm/tiny/gm12u320.c | 10 +-
> drivers/gpu/drm/udl/udl_modeset.c | 8 +-
> drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
> drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
> drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
> drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
> drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
> drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
> include/drm/drm_gem.h | 5 +-
> include/drm/drm_gem_cma_helper.h | 2 +-
> include/drm/drm_gem_shmem_helper.h | 4 +-
> include/drm/drm_gem_vram_helper.h | 14 +--
> 47 files changed, 321 insertions(+), 295 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 700637e25ecd..7e6fc3c04add 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
>
> Level: Intermediate
>
> +Use struct dma_buf_map throughout codebase
> +------------------------------------------
> +
> +Pointers to shared device memory are stored in struct dma_buf_map. Each
> +instance knows whether it refers to system or I/O memory. Most of the DRM-wide
> +interfaces have been converted to use struct dma_buf_map, but implementations
> +often still use raw pointers.
> +
> +The task is to use struct dma_buf_map where it makes sense.
> +
> +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
> +* TTM might benefit from using struct dma_buf_map internally.
> +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
> +
> +Level: Intermediate
> +
>
> Core refactorings
> =================
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 147d61b9674e..319839b87d37 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -239,6 +239,7 @@ config DRM_RADEON
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> @@ -259,6 +260,7 @@ config DRM_AMDGPU
> select DRM_KMS_HELPER
> select DRM_SCHED
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 5b465ab774d1..e5919efca870 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -41,42 +41,6 @@
> #include <linux/dma-fence-array.h>
> #include <linux/pci-p2pdma.h>
>
> -/**
> - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
> - * @obj: GEM BO
> - *
> - * Sets up an in-kernel virtual mapping of the BO's memory.
> - *
> - * Returns:
> - * The virtual address of the mapping or an error pointer.
> - */
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -/**
> - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
> - * @obj: GEM BO
> - * @vaddr: Virtual address (unused)
> - *
> - * Tears down the in-kernel virtual mapping of the BO's memory.
> - */
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> /**
> * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
> * @obj: GEM BO
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> index 2c5c84a06bb9..39b5b9616fd8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
> struct dma_buf *dma_buf);
> bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
> struct amdgpu_bo *bo);
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index be08a63ef58c..576659827e74 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -33,6 +33,7 @@
>
> #include <drm/amdgpu_drm.h>
> #include <drm/drm_debugfs.h>
> +#include <drm/drm_gem_ttm_helper.h>
>
> #include "amdgpu.h"
> #include "amdgpu_display.h"
> @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
> .open = amdgpu_gem_object_open,
> .close = amdgpu_gem_object_close,
> .export = amdgpu_gem_prime_export,
> - .vmap = amdgpu_gem_prime_vmap,
> - .vunmap = amdgpu_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> index 132e5f955180..01296ef0d673 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> @@ -100,7 +100,6 @@ struct amdgpu_bo {
> struct amdgpu_bo *parent;
> struct amdgpu_bo *shadow;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> struct amdgpu_mn *mn;
>
>
> diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
> index e0f4613918ad..742d43a7edf4 100644
> --- a/drivers/gpu/drm/ast/ast_cursor.c
> +++ b/drivers/gpu/drm/ast/ast_cursor.c
> @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
>
> for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
> struct drm_device *dev = &ast->base;
> size_t size, i;
> struct drm_gem_vram_object *gbo;
> - void __iomem *vaddr;
> + struct dma_buf_map map;
> int ret;
>
> size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
> @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
> - vaddr = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(vaddr)) {
> - ret = PTR_ERR(vaddr);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
>
> ast->cursor.gbo[i] = gbo;
> - ast->cursor.vaddr[i] = vaddr;
> + ast->cursor.map[i] = map;
> }
>
> return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
> @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
> while (i) {
> --i;
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> {
> struct drm_device *dev = &ast->base;
> struct drm_gem_vram_object *gbo;
> + struct dma_buf_map map;
> int ret;
> void *src;
> void __iomem *dst;
> @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> ret = drm_gem_vram_pin(gbo, 0);
> if (ret)
> return ret;
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> - ret = PTR_ERR(src);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret)
> goto err_drm_gem_vram_unpin;
> - }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
>
> /* do data transfer to cursor BO */
> update_cursor_image(dst, src, fb->width, fb->height);
>
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
> drm_gem_vram_unpin(gbo);
>
> return 0;
> @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
> u8 __iomem *sig;
> u8 jreg;
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr;
>
> sig = dst + AST_HWC_SIZE;
> writel(x, sig + AST_HWC_SIGNATURE_X);
> diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
> index 467049ca8430..f963141dd851 100644
> --- a/drivers/gpu/drm/ast/ast_drv.h
> +++ b/drivers/gpu/drm/ast/ast_drv.h
> @@ -28,10 +28,11 @@
> #ifndef __AST_DRV_H__
> #define __AST_DRV_H__
>
> -#include <linux/types.h>
> -#include <linux/io.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/i2c.h>
> #include <linux/i2c-algo-bit.h>
> +#include <linux/io.h>
> +#include <linux/types.h>
>
> #include <drm/drm_connector.h>
> #include <drm/drm_crtc.h>
> @@ -131,7 +132,7 @@ struct ast_private {
>
> struct {
> struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
> - void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
> + struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
> unsigned int next_index;
> } cursor;
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 1da67d34e55d..a89ad4570e3c 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -36,6 +36,7 @@
> #include <linux/pagemap.h>
> #include <linux/shmem_fs.h>
> #include <linux/dma-buf.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/mem_encrypt.h>
> #include <linux/pagevec.h>
>
> @@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>
> void *drm_gem_vmap(struct drm_gem_object *obj)
> {
> - void *vaddr;
> + struct dma_buf_map map;
> + int ret;
>
> - if (obj->funcs->vmap)
> - vaddr = obj->funcs->vmap(obj);
> - else
> - vaddr = ERR_PTR(-EOPNOTSUPP);
> + if (!obj->funcs->vmap)
> + return ERR_PTR(-EOPNOTSUPP);
>
> - if (!vaddr)
> - vaddr = ERR_PTR(-ENOMEM);
> + ret = obj->funcs->vmap(obj, &map);
> + if (ret)
> + return ERR_PTR(ret);
> + else if (dma_buf_map_is_null(&map))
> + return ERR_PTR(-ENOMEM);
>
> - return vaddr;
> + return map.vaddr;
> }
>
> void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
> {
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> +
> if (!vaddr)
> return;
>
> if (obj->funcs->vunmap)
> - obj->funcs->vunmap(obj, vaddr);
> + obj->funcs->vunmap(obj, &map);
> }
>
> /**
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index d527485ea0b7..b57e3e9222f0 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
> * address space
> * @obj: GEM object
> + * @map: Returns the kernel virtual address of the CMA GEM object's backing
> + * store.
> *
> * This function maps a buffer exported via DRM PRIME into the kernel's
> * virtual address space. Since the CMA buffers are already mapped into the
> @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * driver's &drm_gem_object_funcs.vmap callback.
> *
> * Returns:
> - * The kernel virtual address of the CMA GEM object's backing store.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
>
> - return cma_obj->vaddr;
> + dma_buf_map_set_vaddr(map, cma_obj->vaddr);
> +
> + return 0;
> }
> EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index fb11df7aced5..5553f58f68f3 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin);
>
> -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map;
> int ret = 0;
>
> - if (shmem->vmap_use_count++ > 0)
> - return shmem->vaddr;
> + if (shmem->vmap_use_count++ > 0) {
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> + return 0;
> + }
>
> if (obj->import_attach) {
> - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> - if (!ret)
> - shmem->vaddr = map.vaddr;
> + ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> + if (!ret) {
> + if (WARN_ON(map->is_iomem)) {
> + ret = -EIO;
> + goto err_put_pages;
> + }
> + shmem->vaddr = map->vaddr;
> + }
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> VM_MAP, prot);
> if (!shmem->vaddr)
> ret = -ENOMEM;
> + else
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> }
>
> if (ret) {
> @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> goto err_put_pages;
> }
>
> - return shmem->vaddr;
> + return 0;
>
> err_put_pages:
> if (!obj->import_attach)
> @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> err_zero_use:
> shmem->vmap_use_count = 0;
>
> - return ERR_PTR(ret);
> + return ret;
> }
>
> /*
> * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + * store.
> *
> * This function makes sure that a contiguous kernel virtual address mapping
> * exists for the buffer backing the shmem GEM object.
> @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> * Returns:
> * 0 on success or a negative error code on failure.
> */
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> - void *vaddr;
> int ret;
>
> ret = mutex_lock_interruptible(&shmem->vmap_lock);
> if (ret)
> - return ERR_PTR(ret);
> - vaddr = drm_gem_shmem_vmap_locked(shmem);
> + return ret;
> + ret = drm_gem_shmem_vmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
>
> - return vaddr;
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_shmem_vmap);
>
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> + struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
>
> if (WARN_ON_ONCE(!shmem->vmap_use_count))
> return;
> @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> return;
>
> if (obj->import_attach)
> - dma_buf_vunmap(obj->import_attach->dmabuf, &map);
> + dma_buf_vunmap(obj->import_attach->dmabuf, map);
> else
> vunmap(shmem->vaddr);
>
> @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> /*
> * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
> *
> * This function cleans up a kernel virtual address mapping acquired by
> * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> * also be called by drivers directly, in which case it will hide the
> * differences between dma-buf imported and natively allocated objects.
> */
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> mutex_lock(&shmem->vmap_lock);
> - drm_gem_shmem_vunmap_locked(shmem);
> + drm_gem_shmem_vunmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
> }
> EXPORT_SYMBOL(drm_gem_shmem_vunmap);
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 2d5ed30518f1..4d8553b28558 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -1,5 +1,6 @@
> // SPDX-License-Identifier: GPL-2.0-or-later
>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
>
> #include <drm/drm_debugfs.h>
> @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
> * up; only release the GEM object.
> */
>
> - WARN_ON(gbo->kmap_use_count);
> - WARN_ON(gbo->kmap.virtual);
> + WARN_ON(gbo->vmap_use_count);
> + WARN_ON(dma_buf_map_is_set(&gbo->map));
>
> drm_gem_object_release(&gbo->bo.base);
> }
> @@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
> }
> EXPORT_SYMBOL(drm_gem_vram_unpin);
>
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
> +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> int ret;
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> - bool is_iomem;
>
> - if (gbo->kmap_use_count > 0)
> + if (gbo->vmap_use_count > 0)
> goto out;
>
> - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
> + ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> out:
> - ++gbo->kmap_use_count;
> - return ttm_kmap_obj_virtual(kmap, &is_iomem);
> + ++gbo->vmap_use_count;
> + *map = gbo->map;
> +
> + return 0;
> }
>
> -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> - if (WARN_ON_ONCE(!gbo->kmap_use_count))
> + struct drm_device *dev = gbo->bo.base.dev;
> +
> + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
> return;
> - if (--gbo->kmap_use_count > 0)
> +
> + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
> + return; /* BUG: map not mapped from this BO */
> +
> + if (--gbo->vmap_use_count > 0)
> return;
>
> /*
> @@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> /**
> * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
> * space
> - * @gbo: The GEM VRAM object to map
> + * @gbo: The GEM VRAM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * The vmap function pins a GEM VRAM object to its current location, either
> * system or video memory, and maps its buffer into kernel address space.
> @@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> * unmap and unpin the GEM VRAM object.
> *
> * Returns:
> - * The buffer's virtual address on success, or
> - * an ERR_PTR()-encoded error code otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
> - void *base;
>
> ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> ret = drm_gem_vram_pin_locked(gbo, 0);
> if (ret)
> goto err_ttm_bo_unreserve;
> - base = drm_gem_vram_kmap_locked(gbo);
> - if (IS_ERR(base)) {
> - ret = PTR_ERR(base);
> + ret = drm_gem_vram_kmap_locked(gbo, map);
> + if (ret)
> goto err_drm_gem_vram_unpin_locked;
> - }
>
> ttm_bo_unreserve(&gbo->bo);
>
> - return base;
> + return 0;
>
> err_drm_gem_vram_unpin_locked:
> drm_gem_vram_unpin_locked(gbo);
> err_ttm_bo_unreserve:
> ttm_bo_unreserve(&gbo->bo);
> - return ERR_PTR(ret);
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_vram_vmap);
>
> /**
> * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
> - * @gbo: The GEM VRAM object to unmap
> - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
> + * @gbo: The GEM VRAM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> *
> * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
> * the documentation for drm_gem_vram_vmap() for more information.
> */
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
>
> @@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
> return;
>
> - drm_gem_vram_kunmap_locked(gbo);
> + drm_gem_vram_kunmap_locked(gbo, map);
> drm_gem_vram_unpin_locked(gbo);
>
> ttm_bo_unreserve(&gbo->bo);
> @@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
> bool evict,
> struct ttm_resource *new_mem)
> {
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> + struct ttm_buffer_object *bo = &gbo->bo;
> + struct drm_device *dev = bo->base.dev;
>
> - if (WARN_ON_ONCE(gbo->kmap_use_count))
> + if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
> return;
>
> - if (!kmap->virtual)
> - return;
> - ttm_bo_kunmap(kmap);
> - kmap->virtual = NULL;
> + ttm_bo_vunmap(bo, &gbo->map);
> }
>
> static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
> @@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
> }
>
> /**
> - * drm_gem_vram_object_vmap() - \
> - Implements &struct drm_gem_object_funcs.vmap
> - * @gem: The GEM object to map
> + * drm_gem_vram_object_vmap() -
> + * Implements &struct drm_gem_object_funcs.vmap
> + * @gem: The GEM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * Returns:
> - * The buffers virtual address on success, or
> - * NULL otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
> +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> - void *base;
>
> - base = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(base))
> - return NULL;
> - return base;
> + return drm_gem_vram_vmap(gbo, map);
> }
>
> /**
> - * drm_gem_vram_object_vunmap() - \
> - Implements &struct drm_gem_object_funcs.vunmap
> - * @gem: The GEM object to unmap
> - * @vaddr: The mapping's base address
> + * drm_gem_vram_object_vunmap() -
> + * Implements &struct drm_gem_object_funcs.vunmap
> + * @gem: The GEM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> */
> -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
> - void *vaddr)
> +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
>
> - drm_gem_vram_vunmap(gbo, vaddr);
> + drm_gem_vram_vunmap(gbo, map);
> }
>
> /*
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 9682c26d89bb..f5be627e1de0 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
> int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
> struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index a6d9932a32ae..bc2543dd987d 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
> }
>
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> - return etnaviv_gem_vmap(obj);
> + void *vaddr = etnaviv_gem_vmap(obj);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 11223fe348df..832e5280a6ed 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
> return drm_gem_shmem_pin(obj);
> }
>
> -static void *lima_gem_vmap(struct drm_gem_object *obj)
> +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct lima_bo *bo = to_lima_bo(obj);
>
> if (bo->heap_size)
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
>
> - return drm_gem_shmem_vmap(obj);
> + return drm_gem_shmem_vmap(obj, map);
> }
>
> static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index dc6df9e9a40d..a070a85f8f36 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -1,6 +1,7 @@
> // SPDX-License-Identifier: GPL-2.0 OR MIT
> /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kthread.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> struct lima_dump_chunk_buffer *buffer_chunk;
> u32 size, task_size, mem_size;
> int i;
> + struct dma_buf_map map;
> + int ret;
>
> mutex_lock(&dev->error_task_list_lock);
>
> @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> } else {
> buffer_chunk->size = lima_bo_size(bo);
>
> - data = drm_gem_shmem_vmap(&bo->base.base);
> - if (IS_ERR_OR_NULL(data)) {
> + ret = drm_gem_shmem_vmap(&bo->base.base, &map);
> + if (ret) {
> kvfree(et);
> goto out;
> }
>
> - memcpy(buffer_chunk + 1, data, buffer_chunk->size);
> + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
>
> - drm_gem_shmem_vunmap(&bo->base.base, data);
> + drm_gem_shmem_vunmap(&bo->base.base, &map);
> }
>
> buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 38672f9e5c4f..8ef76769b97f 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -9,6 +9,7 @@
> */
>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_atomic_helper.h>
> #include <drm/drm_atomic_state_helper.h>
> @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
> struct drm_rect *clip)
> {
> struct drm_device *dev = &mdev->base;
> + struct dma_buf_map map;
> void *vmap;
> + int ret;
>
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (drm_WARN_ON(dev, !vmap))
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (drm_WARN_ON(dev, ret))
> return; /* BUG: SHMEM BO should always be vmapped */
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
>
> /* Always scanout image at VRAM offset 0 */
> mgag200_set_startadd(mdev, (u32)0);
> diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
> index 5dec1e5694b7..9436310d0854 100644
> --- a/drivers/gpu/drm/nouveau/Kconfig
> +++ b/drivers/gpu/drm/nouveau/Kconfig
> @@ -6,6 +6,7 @@ config DRM_NOUVEAU
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
> select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
> select X86_PLATFORM_DEVICES if ACPI && X86
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
> index 641ef6298a0e..6045b85a762a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
> @@ -39,8 +39,6 @@ struct nouveau_bo {
> unsigned mode;
>
> struct nouveau_drm_tile *tile;
> -
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> };
>
> static inline struct nouveau_bo *
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index 9a421c3949de..f942b526b0a5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -24,6 +24,8 @@
> *
> */
>
> +#include <drm/drm_gem_ttm_helper.h>
> +
> #include "nouveau_drv.h"
> #include "nouveau_dma.h"
> #include "nouveau_fence.h"
> @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
> .pin = nouveau_gem_prime_pin,
> .unpin = nouveau_gem_prime_unpin,
> .get_sg_table = nouveau_gem_prime_get_sg_table,
> - .vmap = nouveau_gem_prime_vmap,
> - .vunmap = nouveau_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> int
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
> index b35c180322e2..3b919c7c931c 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
> @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
> extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
> extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
> struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
> -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
> -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
>
> #endif
> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
> index a8264aebf3d4..2f16b5249283 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
> }
>
> -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
> - &nvbo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return nvbo->dma_buf_vmap.virtual;
> -}
> -
> -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> -
> - ttm_bo_kunmap(&nvbo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> index fdbc8d949135..5ab03d605f57 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> @@ -5,6 +5,7 @@
> #include <drm/drm_gem_shmem_helper.h>
> #include <drm/panfrost_drm.h>
> #include <linux/completion.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/iopoll.h>
> #include <linux/pm_runtime.h>
> #include <linux/slab.h>
> @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map;
> struct drm_gem_shmem_object *bo;
> u32 cfg, as;
> int ret;
> @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> goto err_close_bo;
> }
>
> - perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
> - if (IS_ERR(perfcnt->buf)) {
> - ret = PTR_ERR(perfcnt->buf);
> + ret = drm_gem_shmem_vmap(&bo->base, &map);
> + if (ret)
> goto err_put_mapping;
> - }
> + perfcnt->buf = map.vaddr;
>
> /*
> * Invalidate the cache and clear the counters to start from a fresh
> @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> return 0;
>
> err_vunmap:
> - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&bo->base, &map);
> err_put_mapping:
> panfrost_gem_mapping_put(perfcnt->mapping);
> err_close_bo:
> @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
>
> if (user != perfcnt->user)
> return -EINVAL;
> @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
>
> perfcnt->user = NULL;
> - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
> perfcnt->buf = NULL;
> panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
> panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
> diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
> index 45fd76e04bdc..e165fa9b2089 100644
> --- a/drivers/gpu/drm/qxl/qxl_display.c
> +++ b/drivers/gpu/drm/qxl/qxl_display.c
> @@ -25,6 +25,7 @@
>
> #include <linux/crc32.h>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_drv.h>
> #include <drm/drm_atomic.h>
> @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> struct drm_gem_object *obj;
> struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
> int ret;
> + struct dma_buf_map user_map;
> + struct dma_buf_map cursor_map;
> void *user_ptr;
> int size = 64*64*4;
>
> @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> user_bo = gem_to_qxl_bo(obj);
>
> /* pinning is done in the prepare/cleanup framevbuffer */
> - ret = qxl_bo_kmap(user_bo, &user_ptr);
> + ret = qxl_bo_kmap(user_bo, &user_map);
> if (ret)
> goto out_free_release;
> + user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_alloc_bo_reserved(qdev, release,
> sizeof(struct qxl_cursor) + size,
> @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> if (ret)
> goto out_unpin;
>
> - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
> + ret = qxl_bo_kmap(cursor_bo, &cursor_map);
> if (ret)
> goto out_backoff;
>
> @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> {
> int ret;
> struct drm_gem_object *gobj;
> + struct dma_buf_map map;
> int monitors_config_size = sizeof(struct qxl_monitors_config) +
> qxl_num_crtc * sizeof(struct qxl_head);
>
> @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> if (ret)
> return ret;
>
> - qxl_bo_kmap(qdev->monitors_config_bo, NULL);
> + qxl_bo_kmap(qdev->monitors_config_bo, &map);
>
> qdev->monitors_config = qdev->monitors_config_bo->kptr;
> qdev->ram_header->monitors_config =
> diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
> index 3599db096973..7b7acb910780 100644
> --- a/drivers/gpu/drm/qxl/qxl_draw.c
> +++ b/drivers/gpu/drm/qxl/qxl_draw.c
> @@ -20,6 +20,8 @@
> * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> */
>
> +#include <linux/dma-buf-map.h>
> +
> #include <drm/drm_fourcc.h>
>
> #include "qxl_drv.h"
> @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
> unsigned int num_clips,
> struct qxl_bo *clips_bo)
> {
> + struct dma_buf_map map;
> struct qxl_clip_rects *dev_clips;
> int ret;
>
> - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
> - if (ret) {
> + ret = qxl_bo_kmap(clips_bo, &map);
> + if (ret)
> return NULL;
> - }
> + dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
> +
> dev_clips->num_rects = num_clips;
> dev_clips->chunk.next_chunk = 0;
> dev_clips->chunk.prev_chunk = 0;
> @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> int stride = fb->pitches[0];
> /* depth is not actually interesting, we don't mask with it */
> int depth = fb->format->cpp[0] * 8;
> + struct dma_buf_map surface_map;
> uint8_t *surface_base;
> struct qxl_release *release;
> struct qxl_bo *clips_bo;
> @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> if (ret)
> goto out_release_backoff;
>
> - ret = qxl_bo_kmap(bo, (void **)&surface_base);
> + ret = qxl_bo_kmap(bo, &surface_map);
> if (ret)
> goto out_release_backoff;
> + surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_image_init(qdev, release, dimage, surface_base,
> left - dumb_shadow_offset,
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 3602e8b34189..eb437fea5d9e 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -30,6 +30,7 @@
> * Definitions taken from spice-protocol, plus kernel driver specific bits.
> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/dma-fence.h>
> #include <linux/firmware.h>
> #include <linux/platform_device.h>
> @@ -50,6 +51,8 @@
>
> #include "qxl_dev.h"
>
> +struct dma_buf_map;
> +
> #define DRIVER_AUTHOR "Dave Airlie"
>
> #define DRIVER_NAME "qxl"
> @@ -79,7 +82,7 @@ struct qxl_bo {
> /* Protected by tbo.reserved */
> struct ttm_place placements[3];
> struct ttm_placement placement;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
> void *kptr;
> unsigned int map_count;
> int type;
> @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
> void qxl_gem_object_close(struct drm_gem_object *obj,
> struct drm_file *file_priv);
> void qxl_bo_force_delete(struct qxl_device *qdev);
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
>
> /* qxl_dumb.c */
> int qxl_mode_dumb_create(struct drm_file *file_priv,
> @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
> struct drm_gem_object *qxl_gem_prime_import_sg_table(
> struct drm_device *dev, struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map);
> int qxl_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
> index 940e99354f49..755df4d8f95f 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.c
> +++ b/drivers/gpu/drm/qxl/qxl_object.c
> @@ -23,10 +23,12 @@
> * Alon Levy
> */
>
> +#include <linux/dma-buf-map.h>
> +#include <linux/io-mapping.h>
> +
> #include "qxl_drv.h"
> #include "qxl_object.h"
>
> -#include <linux/io-mapping.h>
> static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
> {
> struct qxl_bo *bo;
> @@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
> return 0;
> }
>
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
> +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
> {
> - bool is_iomem;
> int r;
>
> if (bo->kptr) {
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count++;
> - return 0;
> + goto out;
> }
> - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
> + r = ttm_bo_vmap(&bo->tbo, &bo->map);
> if (r)
> return r;
> - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count = 1;
> +
> + /* TODO: Remove kptr in favor of map everywhere. */
> + if (bo->map.is_iomem)
> + bo->kptr = (void *)bo->map.vaddr_iomem;
> + else
> + bo->kptr = bo->map.vaddr;
> +
> +out:
> + *map = bo->map;
> return 0;
> }
>
> @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> void *rptr;
> int ret;
> struct io_mapping *map;
> + struct dma_buf_map bo_map;
>
> if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
> map = qdev->vram_mapping;
> @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> return rptr;
> }
>
> - ret = qxl_bo_kmap(bo, &rptr);
> + ret = qxl_bo_kmap(bo, &bo_map);
> if (ret)
> return NULL;
> + rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> rptr += page_offset * PAGE_SIZE;
> return rptr;
> @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
> if (bo->map_count > 0)
> return;
> bo->kptr = NULL;
> - ttm_bo_kunmap(&bo->kmap);
> + ttm_bo_vunmap(&bo->tbo, &bo->map);
> }
>
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
> diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
> index 09a5c818324d..ebf24c9d2bf2 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.h
> +++ b/drivers/gpu/drm/qxl/qxl_object.h
> @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
> bool kernel, bool pinned, u32 domain,
> struct qxl_surface *surf,
> struct qxl_bo **bo_ptr);
> -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
> +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
> extern void qxl_bo_kunmap(struct qxl_bo *bo);
> void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
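The `qxl_bo_kmap()`/`qxl_bo_kunmap()` pair converted above is a reference-counted mapping: only the first kmap call actually maps the buffer, and only the last kunmap call tears the mapping down. Stripped of the TTM and qxl details, the pattern looks roughly like this (a hypothetical stand-alone sketch, not kernel code; `bo_kmap`/`bo_kunmap` and the `backing` array are stand-ins):

```c
#include <stddef.h>

struct bo {
	void *kptr;		/* cached kernel mapping, NULL while unmapped */
	unsigned int map_count;	/* number of outstanding kmap calls */
	char backing[64];	/* stand-in for the buffer's pages */
};

/* First call creates the mapping; later calls only bump the refcount. */
static void *bo_kmap(struct bo *bo)
{
	if (bo->kptr) {
		bo->map_count++;
		return bo->kptr;
	}
	bo->kptr = bo->backing;	/* stand-in for ttm_bo_vmap() */
	bo->map_count = 1;
	return bo->kptr;
}

/* The mapping goes away only when the last user has called kunmap. */
static void bo_kunmap(struct bo *bo)
{
	if (!bo->kptr)
		return;		/* not mapped; nothing to do */
	if (--bo->map_count > 0)
		return;
	bo->kptr = NULL;	/* stand-in for ttm_bo_vunmap() */
}
```

The patch keeps this refcounting intact; it only swaps the underlying `ttm_bo_kmap()`/`ttm_bo_kunmap()` calls for `ttm_bo_vmap()`/`ttm_bo_vunmap()` and caches a `struct dma_buf_map` instead of a `struct ttm_bo_kmap_obj`.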
> diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
> index 7d3816fca5a8..4aa949799446 100644
> --- a/drivers/gpu/drm/qxl/qxl_prime.c
> +++ b/drivers/gpu/drm/qxl/qxl_prime.c
> @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
> return ERR_PTR(-ENOSYS);
> }
>
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
> - void *ptr;
> int ret;
>
> - ret = qxl_bo_kmap(bo, &ptr);
> + ret = qxl_bo_kmap(bo, map);
> if (ret < 0)
> - return ERR_PTR(ret);
> + return ret;
>
> - return ptr;
> + return 0;
> }
>
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
>
> diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
> index 5d54bccebd4d..44cb5ee6fc20 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -509,7 +509,6 @@ struct radeon_bo {
> /* Constant after initialization */
> struct radeon_device *rdev;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> pid_t pid;
>
> #ifdef CONFIG_MMU_NOTIFIER
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 0ccd7213e41f..d2876ce3bc9e 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -31,6 +31,7 @@
> #include <drm/drm_debugfs.h>
> #include <drm/drm_device.h>
> #include <drm/drm_file.h>
> +#include <drm/drm_gem_ttm_helper.h>
> #include <drm/radeon_drm.h>
>
> #include "radeon.h"
> @@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
> struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
> int radeon_gem_prime_pin(struct drm_gem_object *obj);
> void radeon_gem_prime_unpin(struct drm_gem_object *obj);
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>
> static const struct drm_gem_object_funcs radeon_gem_object_funcs;
>
> @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
> .pin = radeon_gem_prime_pin,
> .unpin = radeon_gem_prime_unpin,
> .get_sg_table = radeon_gem_prime_get_sg_table,
> - .vmap = radeon_gem_prime_vmap,
> - .vunmap = radeon_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
> index b9de0e51c0be..088d39a51c0d 100644
> --- a/drivers/gpu/drm/radeon/radeon_prime.c
> +++ b/drivers/gpu/drm/radeon/radeon_prime.c
> @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
> }
>
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 7d5ebb10323b..7971f57436dd 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
> return ERR_PTR(ret);
> }
>
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> - if (rk_obj->pages)
> - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> - pgprot_writecombine(PAGE_KERNEL));
> + if (rk_obj->pages) {
> + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> + pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> + return 0;
> + }
>
> if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
> - return NULL;
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
>
> - return rk_obj->kvaddr;
> + return 0;
> }
>
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> if (rk_obj->pages) {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> return;
> }
>
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> index 7ffc541bea07..5a70a56cd406 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *
> rockchip_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg);
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /* drm driver mmap file operations */
> int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> index 744a8e337e41..c02e35ed6e76 100644
> --- a/drivers/gpu/drm/tiny/cirrus.c
> +++ b/drivers/gpu/drm/tiny/cirrus.c
> @@ -17,6 +17,7 @@
> */
>
> #include <linux/console.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
> #include <linux/pci.h>
>
> @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> struct drm_rect *rect)
> {
> struct cirrus_device *cirrus = to_cirrus(fb->dev);
> + struct dma_buf_map map;
> void *vmap;
> int idx, ret;
>
> @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> if (!drm_dev_enter(&cirrus->dev, &idx))
> goto out;
>
> - ret = -ENOMEM;
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (!vmap)
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret)
> goto out_dev_exit;
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (cirrus->cpp == fb->format->cpp[0])
> drm_fb_memcpy_dstclip(cirrus->vram,
> @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> else
> WARN_ON_ONCE("cpp mismatch");
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> ret = 0;
>
> out_dev_exit:
> diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
> index cc397671f689..12a890cea6e9 100644
> --- a/drivers/gpu/drm/tiny/gm12u320.c
> +++ b/drivers/gpu/drm/tiny/gm12u320.c
> @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> {
> int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
> struct drm_framebuffer *fb;
> + struct dma_buf_map map;
> void *vaddr;
> u8 *src;
>
> @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> y1 = gm12u320->fb_update.rect.y1;
> y2 = gm12u320->fb_update.rect.y2;
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> + GM12U320_ERR("failed to vmap fb: %d\n", ret);
> goto put_fb;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (fb->obj[0]->import_attach) {
> ret = dma_buf_begin_cpu_access(
> @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
> }
> vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> put_fb:
> drm_framebuffer_put(fb);
> gm12u320->fb_update.fb = NULL;
> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> index fef43f4e3bac..42eeba1dfdbf 100644
> --- a/drivers/gpu/drm/udl/udl_modeset.c
> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> struct urb *urb;
> struct drm_rect clip;
> int log_bpp;
> + struct dma_buf_map map;
> void *vaddr;
>
> ret = udl_log_cpp(fb->format->cpp[0]);
> @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> return ret;
> }
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> DRM_ERROR("failed to vmap fb\n");
> goto out_dma_buf_end_cpu_access;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> urb = udl_get_urb(dev);
> if (!urb)
> @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> ret = 0;
>
> out_drm_gem_shmem_vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> out_dma_buf_end_cpu_access:
> if (import_attach) {
> tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index 931c55126148..f268fb258c83 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -9,6 +9,8 @@
> * Michael Thayer <michael.thayer@oracle.com,
> * Hans de Goede <hdegoede@redhat.com>
> */
> +
> +#include <linux/dma-buf-map.h>
> #include <linux/export.h>
>
> #include <drm/drm_atomic.h>
> @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> u32 height = plane->state->crtc_h;
> size_t data_size, mask_size;
> u32 flags;
> + struct dma_buf_map map;
> + int ret;
> u8 *src;
>
> /*
> @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>
> vbox_crtc->cursor_enabled = true;
>
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> /*
> * BUG: we should have pinned the BO in prepare_fb().
> */
> @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> DRM_WARN("Could not map cursor bo, skipping update\n");
> return;
> }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> /*
> * The mask must be calculated based on the alpha
> @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> data_size = width * height * 4 + mask_size;
>
> copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
>
> flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
> VBOX_MOUSE_POINTER_ALPHA;
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index 557f0d1e6437..f290a9a942dc 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> return drm_gem_cma_prime_mmap(obj, vma);
> }
>
> -void *vc4_prime_vmap(struct drm_gem_object *obj)
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct vc4_bo *bo = to_vc4_bo(obj);
>
> if (bo->validated_shader) {
> DRM_DEBUG("mmaping of shader BOs not allowed.\n");
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
> }
>
> - return drm_gem_cma_prime_vmap(obj);
> + return drm_gem_cma_prime_vmap(obj, map);
> }
>
> struct drm_gem_object *
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index cc79b1aaa878..904f2c36c963 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *vc4_prime_vmap(struct drm_gem_object *obj);
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int vc4_bo_cache_init(struct drm_device *dev);
> void vc4_bo_cache_destroy(struct drm_device *dev);
> int vc4_bo_inc_usecnt(struct vc4_bo *bo);
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index fa54a6d1403d..b2aa26e1e4a2 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> return &obj->base;
> }
>
> -static void *vgem_prime_vmap(struct drm_gem_object *obj)
> +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> long n_pages = obj->size >> PAGE_SHIFT;
> struct page **pages;
> + void *vaddr;
>
> pages = vgem_pin_pages(bo);
> if (IS_ERR(pages))
> - return NULL;
> + return PTR_ERR(pages);
> +
> + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
>
> - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + return 0;
> }
>
> -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> vgem_unpin_pages(bo);
> }
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4f34ef34ba60..74db5a840bed 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> return gem_mmap_obj(xen_obj, vma);
> }
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
> {
> struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> + void *vaddr;
>
> if (!xen_obj->pages)
> - return NULL;
> + return -ENOMEM;
>
> /* Please see comment in gem_mmap_obj on mapping and attributes. */
> - return vmap(xen_obj->pages, xen_obj->num_pages,
> - VM_MAP, PAGE_KERNEL);
> + vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
> + VM_MAP, PAGE_KERNEL);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr)
> + struct dma_buf_map *map)
> {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> }
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> index a39675fa31b2..a4e67d0a149c 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> @@ -12,6 +12,7 @@
> #define __XEN_DRM_FRONT_GEM_H
>
> struct dma_buf_attachment;
> +struct dma_buf_map;
> struct drm_device;
> struct drm_gem_object;
> struct file;
> @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
>
> int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
> + struct dma_buf_map *map);
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr);
> + struct dma_buf_map *map);
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> struct vm_area_struct *vma);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index c38dd35da00b..5e6daa1c982f 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -39,6 +39,7 @@
>
> #include <drm/drm_vma_manager.h>
>
> +struct dma_buf_map;
> struct drm_gem_object;
>
> /**
> @@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void *(*vmap)(struct drm_gem_object *obj);
> + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @vunmap:
> @@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
> + void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @mmap:
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index a064b0d1c480..caf98b9cf4b4 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
> struct sg_table *sgt);
> int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> struct drm_gem_object *
> drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5381f0c8cf6f..3449a0353fe0 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_object *obj);
> void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
>
> diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
> index 128f88174d32..c0d28ba0f5c9 100644
> --- a/include/drm/drm_gem_vram_helper.h
> +++ b/include/drm/drm_gem_vram_helper.h
> @@ -10,6 +10,7 @@
> #include <drm/ttm/ttm_bo_api.h>
> #include <drm/ttm/ttm_bo_driver.h>
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kernel.h> /* for container_of() */
>
> struct drm_mode_create_dumb;
> @@ -29,9 +30,8 @@ struct vm_area_struct;
>
> /**
> * struct drm_gem_vram_object - GEM object backed by VRAM
> - * @gem: GEM object
> * @bo: TTM buffer object
> - * @kmap: Mapping information for @bo
> + * @map: Mapping information for @bo
> * @placement: TTM placement information. Supported placements are \
> %TTM_PL_VRAM and %TTM_PL_SYSTEM
> * @placements: TTM placement information.
> @@ -50,15 +50,15 @@ struct vm_area_struct;
> */
> struct drm_gem_vram_object {
> struct ttm_buffer_object bo;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
>
> /**
> - * @kmap_use_count:
> + * @vmap_use_count:
> *
> * Reference count on the virtual address.
> * The address are un-mapped when the count reaches zero.
> */
> - unsigned int kmap_use_count;
> + unsigned int vmap_use_count;
>
> /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
> struct ttm_placement placement;
> @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
> s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
> int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
> int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
>
> int drm_gem_vram_fill_create_dumb(struct drm_file *file,
> struct drm_device *dev,
* Re: [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
@ 2020-10-15 14:21 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:21 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, kraxel, l.stach, linux+etnaviv, christian.gmeiner,
inki.dae, jy0922.shim, sw0312.kim, kyungmin.park, kgene, krzk, yuq825,
bskeggs, robh, tomeu.vizoso, steven.price, alyssa.rosenzweig, hjc,
heiko, hdegoede, sean, eric, oleksandr_andrushchenko, ray.huang,
sumit.semwal, emil.velikov, luben.tuikov, apaneers, linus.walleij,
melissa.srw, chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx, virtualization,
linaro-mm-sig, linux-rockchip, dri-devel, xen-devel, spice-devel,
linux-arm-kernel, linux-media
On 15.10.20 at 14:38, Thomas Zimmermann wrote:
> This patch replaces vmap/vunmap's use of raw pointers in GEM object
> functions with instances of struct dma_buf_map. GEM backends are
> converted as well. For most of them, this simply changes the returned type.
>
> TTM-based drivers now return information about the location of the memory,
> either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
> et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
> implementing their own vmap callbacks.
>
> v4:
> * use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
> * fix a trailing { in drm_gem_vmap()
> * remove several empty functions instead of converting them (Daniel)
> * comment uses of raw pointers with a TODO (Daniel)
> * TODO list: convert more helpers to use struct dma_buf_map
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
The amdgpu changes look good to me, but I can't fully judge the other stuff.
Acked-by: Christian König <christian.koenig@amd.com>
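For readers unfamiliar with the new type, here is a hypothetical, userspace-only sketch of the idea behind struct dma_buf_map (the real definition lives in include/linux/dma-buf-map.h): one type that carries either a system-memory pointer or an I/O-memory pointer plus a flag, so callers can pick the correct access method. The names mirror the kernel API, but this standalone model is only an illustration, not the kernel code:

```c
/*
 * Hypothetical userspace model of the kernel's struct dma_buf_map.
 * The union holds one address; is_iomem records whether it points to
 * system memory (plain loads/stores) or I/O memory (io accessors).
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct dma_buf_map {
	union {
		void *vaddr;       /* system memory */
		void *vaddr_iomem; /* I/O memory */
	};
	bool is_iomem;
};

/* mark a map as pointing into system memory */
static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* an unset map carries a NULL address */
static inline bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
	return map->vaddr == NULL;
}

/* two maps are equal if both the flag and the address match */
static inline bool dma_buf_map_is_equal(const struct dma_buf_map *a,
					const struct dma_buf_map *b)
{
	return a->is_iomem == b->is_iomem && a->vaddr == b->vaddr;
}
```

This is why vmap can no longer return a bare `void *`: the caller needs the flag as well as the address, so the map is passed as an out-parameter and the functions return an error code instead.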
> ---
> Documentation/gpu/todo.rst | 18 ++++
> drivers/gpu/drm/Kconfig | 2 +
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
> drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
> drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
> drivers/gpu/drm/ast/ast_drv.h | 7 +-
> drivers/gpu/drm/drm_gem.c | 23 +++--
> drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
> drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
> drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
> drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
> drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
> drivers/gpu/drm/lima/lima_gem.c | 6 +-
> drivers/gpu/drm/lima/lima_sched.c | 11 +-
> drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
> drivers/gpu/drm/nouveau/Kconfig | 1 +
> drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
> drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
> drivers/gpu/drm/qxl/qxl_display.c | 11 +-
> drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
> drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
> drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
> drivers/gpu/drm/qxl/qxl_object.h | 2 +-
> drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
> drivers/gpu/drm/radeon/radeon.h | 1 -
> drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
> drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
> drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
> drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
> drivers/gpu/drm/tiny/cirrus.c | 10 +-
> drivers/gpu/drm/tiny/gm12u320.c | 10 +-
> drivers/gpu/drm/udl/udl_modeset.c | 8 +-
> drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
> drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
> drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
> drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
> drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
> drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
> include/drm/drm_gem.h | 5 +-
> include/drm/drm_gem_cma_helper.h | 2 +-
> include/drm/drm_gem_shmem_helper.h | 4 +-
> include/drm/drm_gem_vram_helper.h | 14 +--
> 47 files changed, 321 insertions(+), 295 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 700637e25ecd..7e6fc3c04add 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
>
> Level: Intermediate
>
> +Use struct dma_buf_map throughout codebase
> +------------------------------------------
> +
> +Pointers to shared device memory are stored in struct dma_buf_map. Each
> +instance knows whether it refers to system or I/O memory. Most of the DRM-wide
> +interfaces have been converted to use struct dma_buf_map, but implementations
> +often still use raw pointers.
> +
> +The task is to use struct dma_buf_map where it makes sense.
> +
> +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
> +* TTM might benefit from using struct dma_buf_map internally.
> +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
> +
> +Level: Intermediate
> +
>
> Core refactorings
> =================
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 147d61b9674e..319839b87d37 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -239,6 +239,7 @@ config DRM_RADEON
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> @@ -259,6 +260,7 @@ config DRM_AMDGPU
> select DRM_KMS_HELPER
> select DRM_SCHED
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 5b465ab774d1..e5919efca870 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -41,42 +41,6 @@
> #include <linux/dma-fence-array.h>
> #include <linux/pci-p2pdma.h>
>
> -/**
> - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
> - * @obj: GEM BO
> - *
> - * Sets up an in-kernel virtual mapping of the BO's memory.
> - *
> - * Returns:
> - * The virtual address of the mapping or an error pointer.
> - */
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -/**
> - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
> - * @obj: GEM BO
> - * @vaddr: Virtual address (unused)
> - *
> - * Tears down the in-kernel virtual mapping of the BO's memory.
> - */
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> /**
> * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
> * @obj: GEM BO
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> index 2c5c84a06bb9..39b5b9616fd8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
> struct dma_buf *dma_buf);
> bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
> struct amdgpu_bo *bo);
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index be08a63ef58c..576659827e74 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -33,6 +33,7 @@
>
> #include <drm/amdgpu_drm.h>
> #include <drm/drm_debugfs.h>
> +#include <drm/drm_gem_ttm_helper.h>
>
> #include "amdgpu.h"
> #include "amdgpu_display.h"
> @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
> .open = amdgpu_gem_object_open,
> .close = amdgpu_gem_object_close,
> .export = amdgpu_gem_prime_export,
> - .vmap = amdgpu_gem_prime_vmap,
> - .vunmap = amdgpu_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> index 132e5f955180..01296ef0d673 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> @@ -100,7 +100,6 @@ struct amdgpu_bo {
> struct amdgpu_bo *parent;
> struct amdgpu_bo *shadow;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> struct amdgpu_mn *mn;
>
>
> diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
> index e0f4613918ad..742d43a7edf4 100644
> --- a/drivers/gpu/drm/ast/ast_cursor.c
> +++ b/drivers/gpu/drm/ast/ast_cursor.c
> @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
>
> for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
> struct drm_device *dev = &ast->base;
> size_t size, i;
> struct drm_gem_vram_object *gbo;
> - void __iomem *vaddr;
> + struct dma_buf_map map;
> int ret;
>
> size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
> @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
> - vaddr = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(vaddr)) {
> - ret = PTR_ERR(vaddr);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
>
> ast->cursor.gbo[i] = gbo;
> - ast->cursor.vaddr[i] = vaddr;
> + ast->cursor.map[i] = map;
> }
>
> return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
> @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
> while (i) {
> --i;
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> {
> struct drm_device *dev = &ast->base;
> struct drm_gem_vram_object *gbo;
> + struct dma_buf_map map;
> int ret;
> void *src;
> void __iomem *dst;
> @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> ret = drm_gem_vram_pin(gbo, 0);
> if (ret)
> return ret;
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> - ret = PTR_ERR(src);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret)
> goto err_drm_gem_vram_unpin;
> - }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
>
> /* do data transfer to cursor BO */
> update_cursor_image(dst, src, fb->width, fb->height);
>
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
> drm_gem_vram_unpin(gbo);
>
> return 0;
> @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
> u8 __iomem *sig;
> u8 jreg;
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr;
>
> sig = dst + AST_HWC_SIZE;
> writel(x, sig + AST_HWC_SIGNATURE_X);
> diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
> index 467049ca8430..f963141dd851 100644
> --- a/drivers/gpu/drm/ast/ast_drv.h
> +++ b/drivers/gpu/drm/ast/ast_drv.h
> @@ -28,10 +28,11 @@
> #ifndef __AST_DRV_H__
> #define __AST_DRV_H__
>
> -#include <linux/types.h>
> -#include <linux/io.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/i2c.h>
> #include <linux/i2c-algo-bit.h>
> +#include <linux/io.h>
> +#include <linux/types.h>
>
> #include <drm/drm_connector.h>
> #include <drm/drm_crtc.h>
> @@ -131,7 +132,7 @@ struct ast_private {
>
> struct {
> struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
> - void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
> + struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
> unsigned int next_index;
> } cursor;
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 1da67d34e55d..a89ad4570e3c 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -36,6 +36,7 @@
> #include <linux/pagemap.h>
> #include <linux/shmem_fs.h>
> #include <linux/dma-buf.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/mem_encrypt.h>
> #include <linux/pagevec.h>
>
> @@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>
> void *drm_gem_vmap(struct drm_gem_object *obj)
> {
> - void *vaddr;
> + struct dma_buf_map map;
> + int ret;
>
> - if (obj->funcs->vmap)
> - vaddr = obj->funcs->vmap(obj);
> - else
> - vaddr = ERR_PTR(-EOPNOTSUPP);
> + if (!obj->funcs->vmap)
> + return ERR_PTR(-EOPNOTSUPP);
>
> - if (!vaddr)
> - vaddr = ERR_PTR(-ENOMEM);
> + ret = obj->funcs->vmap(obj, &map);
> + if (ret)
> + return ERR_PTR(ret);
> + else if (dma_buf_map_is_null(&map))
> + return ERR_PTR(-ENOMEM);
>
> - return vaddr;
> + return map.vaddr;
> }
>
> void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
> {
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> +
> if (!vaddr)
> return;
>
> if (obj->funcs->vunmap)
> - obj->funcs->vunmap(obj, vaddr);
> + obj->funcs->vunmap(obj, &map);
> }
>
> /**
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index d527485ea0b7..b57e3e9222f0 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
> * address space
> * @obj: GEM object
> + * @map: Returns the kernel virtual address of the CMA GEM object's backing
> + * store.
> *
> * This function maps a buffer exported via DRM PRIME into the kernel's
> * virtual address space. Since the CMA buffers are already mapped into the
> @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * driver's &drm_gem_object_funcs.vmap callback.
> *
> * Returns:
> - * The kernel virtual address of the CMA GEM object's backing store.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
>
> - return cma_obj->vaddr;
> + dma_buf_map_set_vaddr(map, cma_obj->vaddr);
> +
> + return 0;
> }
> EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index fb11df7aced5..5553f58f68f3 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin);
>
> -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map;
> int ret = 0;
>
> - if (shmem->vmap_use_count++ > 0)
> - return shmem->vaddr;
> + if (shmem->vmap_use_count++ > 0) {
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> + return 0;
> + }
>
> if (obj->import_attach) {
> - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> - if (!ret)
> - shmem->vaddr = map.vaddr;
> + ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> + if (!ret) {
> + if (WARN_ON(map->is_iomem)) {
> + ret = -EIO;
> + goto err_put_pages;
> + }
> + shmem->vaddr = map->vaddr;
> + }
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> VM_MAP, prot);
> if (!shmem->vaddr)
> ret = -ENOMEM;
> + else
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> }
>
> if (ret) {
> @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> goto err_put_pages;
> }
>
> - return shmem->vaddr;
> + return 0;
>
> err_put_pages:
> if (!obj->import_attach)
> @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> err_zero_use:
> shmem->vmap_use_count = 0;
>
> - return ERR_PTR(ret);
> + return ret;
> }
>
> /*
> * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + * store.
> *
> * This function makes sure that a contiguous kernel virtual address mapping
> * exists for the buffer backing the shmem GEM object.
> @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> * Returns:
> * 0 on success or a negative error code on failure.
> */
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> - void *vaddr;
> int ret;
>
> ret = mutex_lock_interruptible(&shmem->vmap_lock);
> if (ret)
> - return ERR_PTR(ret);
> - vaddr = drm_gem_shmem_vmap_locked(shmem);
> + return ret;
> + ret = drm_gem_shmem_vmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
>
> - return vaddr;
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_shmem_vmap);
>
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> + struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
>
> if (WARN_ON_ONCE(!shmem->vmap_use_count))
> return;
> @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> return;
>
> if (obj->import_attach)
> - dma_buf_vunmap(obj->import_attach->dmabuf, &map);
> + dma_buf_vunmap(obj->import_attach->dmabuf, map);
> else
> vunmap(shmem->vaddr);
>
> @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> /*
> * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
> *
> * This function cleans up a kernel virtual address mapping acquired by
> * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> * also be called by drivers directly, in which case it will hide the
> * differences between dma-buf imported and natively allocated objects.
> */
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> mutex_lock(&shmem->vmap_lock);
> - drm_gem_shmem_vunmap_locked(shmem);
> + drm_gem_shmem_vunmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
> }
> EXPORT_SYMBOL(drm_gem_shmem_vunmap);
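The shmem helpers above rely on a use count so that nested vmap calls share a single mapping: only the first call creates it, and only the last vunmap tears it down. A hedged, userspace-only model of that pattern — malloc/free stand in for vmap()/vunmap(), the names are made up, and locking is omitted:

```c
/* Simplified model of the ref-counted vmap pattern in
 * drm_gem_shmem_vmap_locked()/drm_gem_shmem_vunmap_locked(). */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct fake_shmem {
	void *vaddr;
	unsigned int vmap_use_count;
	size_t size;
};

/* first caller maps; later callers reuse the existing mapping */
static int fake_vmap(struct fake_shmem *obj, void **out)
{
	if (obj->vmap_use_count++ > 0) {
		*out = obj->vaddr;
		return 0;
	}
	obj->vaddr = malloc(obj->size); /* stands in for vmap() */
	if (!obj->vaddr) {
		obj->vmap_use_count = 0;
		return -1;              /* stands in for -ENOMEM */
	}
	*out = obj->vaddr;
	return 0;
}

/* mapping is released only when the count drops back to zero */
static void fake_vunmap(struct fake_shmem *obj)
{
	assert(obj->vmap_use_count > 0); /* mirrors WARN_ON_ONCE() */
	if (--obj->vmap_use_count > 0)
		return;
	free(obj->vaddr);                /* stands in for vunmap() */
	obj->vaddr = NULL;
}
```

The VRAM helpers in the next hunk follow the same scheme with vmap_use_count, which is why the patch also renames kmap_use_count accordingly.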
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 2d5ed30518f1..4d8553b28558 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -1,5 +1,6 @@
> // SPDX-License-Identifier: GPL-2.0-or-later
>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
>
> #include <drm/drm_debugfs.h>
> @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
> * up; only release the GEM object.
> */
>
> - WARN_ON(gbo->kmap_use_count);
> - WARN_ON(gbo->kmap.virtual);
> + WARN_ON(gbo->vmap_use_count);
> + WARN_ON(dma_buf_map_is_set(&gbo->map));
>
> drm_gem_object_release(&gbo->bo.base);
> }
> @@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
> }
> EXPORT_SYMBOL(drm_gem_vram_unpin);
>
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
> +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> int ret;
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> - bool is_iomem;
>
> - if (gbo->kmap_use_count > 0)
> + if (gbo->vmap_use_count > 0)
> goto out;
>
> - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
> + ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> out:
> - ++gbo->kmap_use_count;
> - return ttm_kmap_obj_virtual(kmap, &is_iomem);
> + ++gbo->vmap_use_count;
> + *map = gbo->map;
> +
> + return 0;
> }
>
> -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> - if (WARN_ON_ONCE(!gbo->kmap_use_count))
> + struct drm_device *dev = gbo->bo.base.dev;
> +
> + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
> return;
> - if (--gbo->kmap_use_count > 0)
> +
> + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
> + return; /* BUG: map not mapped from this BO */
> +
> + if (--gbo->vmap_use_count > 0)
> return;
>
> /*
> @@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> /**
> * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
> * space
> - * @gbo: The GEM VRAM object to map
> + * @gbo: The GEM VRAM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * The vmap function pins a GEM VRAM object to its current location, either
> * system or video memory, and maps its buffer into kernel address space.
> @@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> * unmap and unpin the GEM VRAM object.
> *
> * Returns:
> - * The buffer's virtual address on success, or
> - * an ERR_PTR()-encoded error code otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
> - void *base;
>
> ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> ret = drm_gem_vram_pin_locked(gbo, 0);
> if (ret)
> goto err_ttm_bo_unreserve;
> - base = drm_gem_vram_kmap_locked(gbo);
> - if (IS_ERR(base)) {
> - ret = PTR_ERR(base);
> + ret = drm_gem_vram_kmap_locked(gbo, map);
> + if (ret)
> goto err_drm_gem_vram_unpin_locked;
> - }
>
> ttm_bo_unreserve(&gbo->bo);
>
> - return base;
> + return 0;
>
> err_drm_gem_vram_unpin_locked:
> drm_gem_vram_unpin_locked(gbo);
> err_ttm_bo_unreserve:
> ttm_bo_unreserve(&gbo->bo);
> - return ERR_PTR(ret);
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_vram_vmap);
>
> /**
> * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
> - * @gbo: The GEM VRAM object to unmap
> - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
> + * @gbo: The GEM VRAM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> *
> * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
> * the documentation for drm_gem_vram_vmap() for more information.
> */
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
>
> @@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
> return;
>
> - drm_gem_vram_kunmap_locked(gbo);
> + drm_gem_vram_kunmap_locked(gbo, map);
> drm_gem_vram_unpin_locked(gbo);
>
> ttm_bo_unreserve(&gbo->bo);
> @@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
> bool evict,
> struct ttm_resource *new_mem)
> {
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> + struct ttm_buffer_object *bo = &gbo->bo;
> + struct drm_device *dev = bo->base.dev;
>
> - if (WARN_ON_ONCE(gbo->kmap_use_count))
> + if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
> return;
>
> - if (!kmap->virtual)
> - return;
> - ttm_bo_kunmap(kmap);
> - kmap->virtual = NULL;
> + ttm_bo_vunmap(bo, &gbo->map);
> }
>
> static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
> @@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
> }
>
> /**
> - * drm_gem_vram_object_vmap() - \
> - Implements &struct drm_gem_object_funcs.vmap
> - * @gem: The GEM object to map
> + * drm_gem_vram_object_vmap() -
> + * Implements &struct drm_gem_object_funcs.vmap
> + * @gem: The GEM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * Returns:
> - * The buffers virtual address on success, or
> - * NULL otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
> +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> - void *base;
>
> - base = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(base))
> - return NULL;
> - return base;
> + return drm_gem_vram_vmap(gbo, map);
> }
>
> /**
> - * drm_gem_vram_object_vunmap() - \
> - Implements &struct drm_gem_object_funcs.vunmap
> - * @gem: The GEM object to unmap
> - * @vaddr: The mapping's base address
> + * drm_gem_vram_object_vunmap() -
> + * Implements &struct drm_gem_object_funcs.vunmap
> + * @gem: The GEM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> */
> -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
> - void *vaddr)
> +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
>
> - drm_gem_vram_vunmap(gbo, vaddr);
> + drm_gem_vram_vunmap(gbo, map);
> }
>
> /*
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 9682c26d89bb..f5be627e1de0 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
> int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
> struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index a6d9932a32ae..bc2543dd987d 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
> }
>
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> - return etnaviv_gem_vmap(obj);
> + void *vaddr = etnaviv_gem_vmap(obj);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 11223fe348df..832e5280a6ed 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
> return drm_gem_shmem_pin(obj);
> }
>
> -static void *lima_gem_vmap(struct drm_gem_object *obj)
> +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct lima_bo *bo = to_lima_bo(obj);
>
> if (bo->heap_size)
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
>
> - return drm_gem_shmem_vmap(obj);
> + return drm_gem_shmem_vmap(obj, map);
> }
>
> static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index dc6df9e9a40d..a070a85f8f36 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -1,6 +1,7 @@
> // SPDX-License-Identifier: GPL-2.0 OR MIT
> /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kthread.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> struct lima_dump_chunk_buffer *buffer_chunk;
> u32 size, task_size, mem_size;
> int i;
> + struct dma_buf_map map;
> + int ret;
>
> mutex_lock(&dev->error_task_list_lock);
>
> @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> } else {
> buffer_chunk->size = lima_bo_size(bo);
>
> - data = drm_gem_shmem_vmap(&bo->base.base);
> - if (IS_ERR_OR_NULL(data)) {
> + ret = drm_gem_shmem_vmap(&bo->base.base, &map);
> + if (ret) {
> kvfree(et);
> goto out;
> }
>
> - memcpy(buffer_chunk + 1, data, buffer_chunk->size);
> + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
>
> - drm_gem_shmem_vunmap(&bo->base.base, data);
> + drm_gem_shmem_vunmap(&bo->base.base, &map);
> }
>
> buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 38672f9e5c4f..8ef76769b97f 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -9,6 +9,7 @@
> */
>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_atomic_helper.h>
> #include <drm/drm_atomic_state_helper.h>
> @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
> struct drm_rect *clip)
> {
> struct drm_device *dev = &mdev->base;
> + struct dma_buf_map map;
> void *vmap;
> + int ret;
>
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (drm_WARN_ON(dev, !vmap))
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (drm_WARN_ON(dev, ret))
> return; /* BUG: SHMEM BO should always be vmapped */
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
>
> /* Always scanout image at VRAM offset 0 */
> mgag200_set_startadd(mdev, (u32)0);
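
A note for review context on the mgag200 hunk above: the reason for routing the mapping through struct dma_buf_map (rather than a bare pointer) is that a blit helper can later dispatch on whether the framebuffer lives in system or I/O memory, which is the whole point of the series. A minimal userspace model of that dispatch — the field names mirror dma-buf-map, but this is an illustrative sketch, not the kernel code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace model of the kernel's struct dma_buf_map: one mapping,
 * tagged as either system memory or I/O memory. */
struct dma_buf_map_model {
	union {
		void *vaddr;       /* valid when !is_iomem */
		void *vaddr_iomem; /* valid when is_iomem (would be __iomem) */
	};
	int is_iomem;
};

static void map_set_vaddr(struct dma_buf_map_model *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = 0;
}

/* A blit helper can dispatch on the tag instead of assuming a plain
 * pointer. In the kernel, the is_iomem branch would use
 * memcpy_fromio()/memcpy_toio() rather than memcpy(). */
static void blit_from_map(void *dst, const struct dma_buf_map_model *src,
			  size_t len)
{
	if (src->is_iomem)
		memcpy(dst, src->vaddr_iomem, len); /* stand-in for memcpy_fromio() */
	else
		memcpy(dst, src->vaddr, len);
}
```

With this in place, the `vmap = map.vaddr; /* TODO */` lines in the drivers are explicitly temporary: once the fb helpers take the map directly, the vaddr extraction (and the implicit system-memory assumption) goes away.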
> diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
> index 5dec1e5694b7..9436310d0854 100644
> --- a/drivers/gpu/drm/nouveau/Kconfig
> +++ b/drivers/gpu/drm/nouveau/Kconfig
> @@ -6,6 +6,7 @@ config DRM_NOUVEAU
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
> select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
> select X86_PLATFORM_DEVICES if ACPI && X86
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
> index 641ef6298a0e..6045b85a762a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
> @@ -39,8 +39,6 @@ struct nouveau_bo {
> unsigned mode;
>
> struct nouveau_drm_tile *tile;
> -
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> };
>
> static inline struct nouveau_bo *
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index 9a421c3949de..f942b526b0a5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -24,6 +24,8 @@
> *
> */
>
> +#include <drm/drm_gem_ttm_helper.h>
> +
> #include "nouveau_drv.h"
> #include "nouveau_dma.h"
> #include "nouveau_fence.h"
> @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
> .pin = nouveau_gem_prime_pin,
> .unpin = nouveau_gem_prime_unpin,
> .get_sg_table = nouveau_gem_prime_get_sg_table,
> - .vmap = nouveau_gem_prime_vmap,
> - .vunmap = nouveau_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> int
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
> index b35c180322e2..3b919c7c931c 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
> @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
> extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
> extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
> struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
> -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
> -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
>
> #endif
> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
> index a8264aebf3d4..2f16b5249283 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
> }
>
> -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
> - &nvbo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return nvbo->dma_buf_vmap.virtual;
> -}
> -
> -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> -
> - ttm_bo_kunmap(&nvbo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> index fdbc8d949135..5ab03d605f57 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> @@ -5,6 +5,7 @@
> #include <drm/drm_gem_shmem_helper.h>
> #include <drm/panfrost_drm.h>
> #include <linux/completion.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/iopoll.h>
> #include <linux/pm_runtime.h>
> #include <linux/slab.h>
> @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map;
> struct drm_gem_shmem_object *bo;
> u32 cfg, as;
> int ret;
> @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> goto err_close_bo;
> }
>
> - perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
> - if (IS_ERR(perfcnt->buf)) {
> - ret = PTR_ERR(perfcnt->buf);
> + ret = drm_gem_shmem_vmap(&bo->base, &map);
> + if (ret)
> goto err_put_mapping;
> - }
> + perfcnt->buf = map.vaddr;
>
> /*
> * Invalidate the cache and clear the counters to start from a fresh
> @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> return 0;
>
> err_vunmap:
> - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&bo->base, &map);
> err_put_mapping:
> panfrost_gem_mapping_put(perfcnt->mapping);
> err_close_bo:
> @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
>
> if (user != perfcnt->user)
> return -EINVAL;
> @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
>
> perfcnt->user = NULL;
> - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
> perfcnt->buf = NULL;
> panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
> panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
> diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
> index 45fd76e04bdc..e165fa9b2089 100644
> --- a/drivers/gpu/drm/qxl/qxl_display.c
> +++ b/drivers/gpu/drm/qxl/qxl_display.c
> @@ -25,6 +25,7 @@
>
> #include <linux/crc32.h>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_drv.h>
> #include <drm/drm_atomic.h>
> @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> struct drm_gem_object *obj;
> struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
> int ret;
> + struct dma_buf_map user_map;
> + struct dma_buf_map cursor_map;
> void *user_ptr;
> int size = 64*64*4;
>
> @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> user_bo = gem_to_qxl_bo(obj);
>
> /* pinning is done in the prepare/cleanup framebuffer */
> - ret = qxl_bo_kmap(user_bo, &user_ptr);
> + ret = qxl_bo_kmap(user_bo, &user_map);
> if (ret)
> goto out_free_release;
> + user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_alloc_bo_reserved(qdev, release,
> sizeof(struct qxl_cursor) + size,
> @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> if (ret)
> goto out_unpin;
>
> - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
> + ret = qxl_bo_kmap(cursor_bo, &cursor_map);
> if (ret)
> goto out_backoff;
>
> @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> {
> int ret;
> struct drm_gem_object *gobj;
> + struct dma_buf_map map;
> int monitors_config_size = sizeof(struct qxl_monitors_config) +
> qxl_num_crtc * sizeof(struct qxl_head);
>
> @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> if (ret)
> return ret;
>
> - qxl_bo_kmap(qdev->monitors_config_bo, NULL);
> + qxl_bo_kmap(qdev->monitors_config_bo, &map);
>
> qdev->monitors_config = qdev->monitors_config_bo->kptr;
> qdev->ram_header->monitors_config =
> diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
> index 3599db096973..7b7acb910780 100644
> --- a/drivers/gpu/drm/qxl/qxl_draw.c
> +++ b/drivers/gpu/drm/qxl/qxl_draw.c
> @@ -20,6 +20,8 @@
> * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> */
>
> +#include <linux/dma-buf-map.h>
> +
> #include <drm/drm_fourcc.h>
>
> #include "qxl_drv.h"
> @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
> unsigned int num_clips,
> struct qxl_bo *clips_bo)
> {
> + struct dma_buf_map map;
> struct qxl_clip_rects *dev_clips;
> int ret;
>
> - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
> - if (ret) {
> + ret = qxl_bo_kmap(clips_bo, &map);
> + if (ret)
> return NULL;
> - }
> + dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
> +
> dev_clips->num_rects = num_clips;
> dev_clips->chunk.next_chunk = 0;
> dev_clips->chunk.prev_chunk = 0;
> @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> int stride = fb->pitches[0];
> /* depth is not actually interesting, we don't mask with it */
> int depth = fb->format->cpp[0] * 8;
> + struct dma_buf_map surface_map;
> uint8_t *surface_base;
> struct qxl_release *release;
> struct qxl_bo *clips_bo;
> @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> if (ret)
> goto out_release_backoff;
>
> - ret = qxl_bo_kmap(bo, (void **)&surface_base);
> + ret = qxl_bo_kmap(bo, &surface_map);
> if (ret)
> goto out_release_backoff;
> + surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_image_init(qdev, release, dimage, surface_base,
> left - dumb_shadow_offset,
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 3602e8b34189..eb437fea5d9e 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -30,6 +30,7 @@
> * Definitions taken from spice-protocol, plus kernel driver specific bits.
> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/dma-fence.h>
> #include <linux/firmware.h>
> #include <linux/platform_device.h>
> @@ -50,6 +51,8 @@
>
> #include "qxl_dev.h"
>
> +struct dma_buf_map;
> +
> #define DRIVER_AUTHOR "Dave Airlie"
>
> #define DRIVER_NAME "qxl"
> @@ -79,7 +82,7 @@ struct qxl_bo {
> /* Protected by tbo.reserved */
> struct ttm_place placements[3];
> struct ttm_placement placement;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
> void *kptr;
> unsigned int map_count;
> int type;
> @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
> void qxl_gem_object_close(struct drm_gem_object *obj,
> struct drm_file *file_priv);
> void qxl_bo_force_delete(struct qxl_device *qdev);
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
>
> /* qxl_dumb.c */
> int qxl_mode_dumb_create(struct drm_file *file_priv,
> @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
> struct drm_gem_object *qxl_gem_prime_import_sg_table(
> struct drm_device *dev, struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map);
> int qxl_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
> index 940e99354f49..755df4d8f95f 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.c
> +++ b/drivers/gpu/drm/qxl/qxl_object.c
> @@ -23,10 +23,12 @@
> * Alon Levy
> */
>
> +#include <linux/dma-buf-map.h>
> +#include <linux/io-mapping.h>
> +
> #include "qxl_drv.h"
> #include "qxl_object.h"
>
> -#include <linux/io-mapping.h>
> static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
> {
> struct qxl_bo *bo;
> @@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
> return 0;
> }
>
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
> +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
> {
> - bool is_iomem;
> int r;
>
> if (bo->kptr) {
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count++;
> - return 0;
> + goto out;
> }
> - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
> + r = ttm_bo_vmap(&bo->tbo, &bo->map);
> if (r)
> return r;
> - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count = 1;
> +
> + /* TODO: Remove kptr in favor of map everywhere. */
> + if (bo->map.is_iomem)
> + bo->kptr = (void *)bo->map.vaddr_iomem;
> + else
> + bo->kptr = bo->map.vaddr;
> +
> +out:
> + *map = bo->map;
> return 0;
> }
>
> @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> void *rptr;
> int ret;
> struct io_mapping *map;
> + struct dma_buf_map bo_map;
>
> if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
> map = qdev->vram_mapping;
> @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> return rptr;
> }
>
> - ret = qxl_bo_kmap(bo, &rptr);
> + ret = qxl_bo_kmap(bo, &bo_map);
> if (ret)
> return NULL;
> + rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> rptr += page_offset * PAGE_SIZE;
> return rptr;
> @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
> if (bo->map_count > 0)
> return;
> bo->kptr = NULL;
> - ttm_bo_kunmap(&bo->kmap);
> + ttm_bo_vunmap(&bo->tbo, &bo->map);
> }
>
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
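
One observation on the qxl conversion above: qxl_bo_kmap() now caches the mapping in bo->map, refcounts it via map_count, and hands every caller a copy of the same struct, so nested kmap/kunmap pairs share one ttm_bo_vmap(). A compact userspace model of that caching pattern (names are illustrative, not the driver's API):

```c
#include <assert.h>
#include <stddef.h>

struct map_model {
	void *vaddr;
};

struct bo_model {
	struct map_model map; /* cached mapping, valid while map_count > 0 */
	void *kptr;           /* legacy pointer, mirrors map.vaddr */
	int map_count;
	char backing[64];     /* stand-in for the BO's pages */
};

/* Model of the kmap path: map once, then hand out the cached mapping. */
static int bo_kmap(struct bo_model *bo, struct map_model *out)
{
	if (!bo->kptr) {
		bo->map.vaddr = bo->backing; /* stand-in for ttm_bo_vmap() */
		bo->map_count = 1;
		bo->kptr = bo->map.vaddr;
	} else {
		bo->map_count++;
	}
	*out = bo->map;
	return 0;
}

static void bo_kunmap(struct bo_model *bo)
{
	if (--bo->map_count > 0)
		return;
	bo->kptr = NULL;
	bo->map.vaddr = NULL; /* stand-in for ttm_bo_vunmap() */
}
```

As the TODO in the hunk says, kptr only survives for callers that still want a raw pointer; once everything consumes the map, the duplicated state can be dropped.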
> diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
> index 09a5c818324d..ebf24c9d2bf2 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.h
> +++ b/drivers/gpu/drm/qxl/qxl_object.h
> @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
> bool kernel, bool pinned, u32 domain,
> struct qxl_surface *surf,
> struct qxl_bo **bo_ptr);
> -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
> +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
> extern void qxl_bo_kunmap(struct qxl_bo *bo);
> void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
> diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
> index 7d3816fca5a8..4aa949799446 100644
> --- a/drivers/gpu/drm/qxl/qxl_prime.c
> +++ b/drivers/gpu/drm/qxl/qxl_prime.c
> @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
> return ERR_PTR(-ENOSYS);
> }
>
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
> - void *ptr;
> int ret;
>
> - ret = qxl_bo_kmap(bo, &ptr);
> + ret = qxl_bo_kmap(bo, map);
> if (ret < 0)
> - return ERR_PTR(ret);
> + return ret;
>
> - return ptr;
> + return 0;
> }
>
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
>
> diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
> index 5d54bccebd4d..44cb5ee6fc20 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -509,7 +509,6 @@ struct radeon_bo {
> /* Constant after initialization */
> struct radeon_device *rdev;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> pid_t pid;
>
> #ifdef CONFIG_MMU_NOTIFIER
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 0ccd7213e41f..d2876ce3bc9e 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -31,6 +31,7 @@
> #include <drm/drm_debugfs.h>
> #include <drm/drm_device.h>
> #include <drm/drm_file.h>
> +#include <drm/drm_gem_ttm_helper.h>
> #include <drm/radeon_drm.h>
>
> #include "radeon.h"
> @@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
> struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
> int radeon_gem_prime_pin(struct drm_gem_object *obj);
> void radeon_gem_prime_unpin(struct drm_gem_object *obj);
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>
> static const struct drm_gem_object_funcs radeon_gem_object_funcs;
>
> @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
> .pin = radeon_gem_prime_pin,
> .unpin = radeon_gem_prime_unpin,
> .get_sg_table = radeon_gem_prime_get_sg_table,
> - .vmap = radeon_gem_prime_vmap,
> - .vunmap = radeon_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
> index b9de0e51c0be..088d39a51c0d 100644
> --- a/drivers/gpu/drm/radeon/radeon_prime.c
> +++ b/drivers/gpu/drm/radeon/radeon_prime.c
> @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
> }
>
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 7d5ebb10323b..7971f57436dd 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
> return ERR_PTR(ret);
> }
>
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> - if (rk_obj->pages)
> - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> - pgprot_writecombine(PAGE_KERNEL));
> + if (rk_obj->pages) {
> + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> + pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> + return 0;
> + }
>
> if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
> - return NULL;
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
>
> - return rk_obj->kvaddr;
> + return 0;
> }
>
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> if (rk_obj->pages) {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> return;
> }
>
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> index 7ffc541bea07..5a70a56cd406 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *
> rockchip_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg);
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /* drm driver mmap file operations */
> int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> index 744a8e337e41..c02e35ed6e76 100644
> --- a/drivers/gpu/drm/tiny/cirrus.c
> +++ b/drivers/gpu/drm/tiny/cirrus.c
> @@ -17,6 +17,7 @@
> */
>
> #include <linux/console.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
> #include <linux/pci.h>
>
> @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> struct drm_rect *rect)
> {
> struct cirrus_device *cirrus = to_cirrus(fb->dev);
> + struct dma_buf_map map;
> void *vmap;
> int idx, ret;
>
> @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> if (!drm_dev_enter(&cirrus->dev, &idx))
> goto out;
>
> - ret = -ENOMEM;
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (!vmap)
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret)
> goto out_dev_exit;
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (cirrus->cpp == fb->format->cpp[0])
> drm_fb_memcpy_dstclip(cirrus->vram,
> @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> else
> WARN_ON_ONCE("cpp mismatch");
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> ret = 0;
>
> out_dev_exit:
> diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
> index cc397671f689..12a890cea6e9 100644
> --- a/drivers/gpu/drm/tiny/gm12u320.c
> +++ b/drivers/gpu/drm/tiny/gm12u320.c
> @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> {
> int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
> struct drm_framebuffer *fb;
> + struct dma_buf_map map;
> void *vaddr;
> u8 *src;
>
> @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> y1 = gm12u320->fb_update.rect.y1;
> y2 = gm12u320->fb_update.rect.y2;
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> + GM12U320_ERR("failed to vmap fb: %d\n", ret);
> goto put_fb;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (fb->obj[0]->import_attach) {
> ret = dma_buf_begin_cpu_access(
> @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
> }
> vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> put_fb:
> drm_framebuffer_put(fb);
> gm12u320->fb_update.fb = NULL;
> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> index fef43f4e3bac..42eeba1dfdbf 100644
> --- a/drivers/gpu/drm/udl/udl_modeset.c
> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> struct urb *urb;
> struct drm_rect clip;
> int log_bpp;
> + struct dma_buf_map map;
> void *vaddr;
>
> ret = udl_log_cpp(fb->format->cpp[0]);
> @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> return ret;
> }
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> DRM_ERROR("failed to vmap fb\n");
> goto out_dma_buf_end_cpu_access;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> urb = udl_get_urb(dev);
> if (!urb)
> @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> ret = 0;
>
> out_drm_gem_shmem_vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> out_dma_buf_end_cpu_access:
> if (import_attach) {
> tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index 931c55126148..f268fb258c83 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -9,6 +9,8 @@
> * Michael Thayer <michael.thayer@oracle.com,
> * Hans de Goede <hdegoede@redhat.com>
> */
> +
> +#include <linux/dma-buf-map.h>
> #include <linux/export.h>
>
> #include <drm/drm_atomic.h>
> @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> u32 height = plane->state->crtc_h;
> size_t data_size, mask_size;
> u32 flags;
> + struct dma_buf_map map;
> + int ret;
> u8 *src;
>
> /*
> @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>
> vbox_crtc->cursor_enabled = true;
>
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> /*
> * BUG: we should have pinned the BO in prepare_fb().
> */
> @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> DRM_WARN("Could not map cursor bo, skipping update\n");
> return;
> }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> /*
> * The mask must be calculated based on the alpha
> @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> data_size = width * height * 4 + mask_size;
>
> copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
>
> flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
> VBOX_MOUSE_POINTER_ALPHA;
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index 557f0d1e6437..f290a9a942dc 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> return drm_gem_cma_prime_mmap(obj, vma);
> }
>
> -void *vc4_prime_vmap(struct drm_gem_object *obj)
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct vc4_bo *bo = to_vc4_bo(obj);
>
> if (bo->validated_shader) {
> DRM_DEBUG("mmaping of shader BOs not allowed.\n");
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
> }
>
> - return drm_gem_cma_prime_vmap(obj);
> + return drm_gem_cma_prime_vmap(obj, map);
> }
>
> struct drm_gem_object *
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index cc79b1aaa878..904f2c36c963 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *vc4_prime_vmap(struct drm_gem_object *obj);
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int vc4_bo_cache_init(struct drm_device *dev);
> void vc4_bo_cache_destroy(struct drm_device *dev);
> int vc4_bo_inc_usecnt(struct vc4_bo *bo);
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index fa54a6d1403d..b2aa26e1e4a2 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> return &obj->base;
> }
>
> -static void *vgem_prime_vmap(struct drm_gem_object *obj)
> +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> long n_pages = obj->size >> PAGE_SHIFT;
> struct page **pages;
> + void *vaddr;
>
> pages = vgem_pin_pages(bo);
> if (IS_ERR(pages))
> - return NULL;
> + return PTR_ERR(pages);
> +
> + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
>
> - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + return 0;
> }
>
> -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> vgem_unpin_pages(bo);
> }
>
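
A general note on the calling convention these hunks establish: vmap now reports failure via an errno-style int and yields the address through the map argument, so callers test `ret` instead of applying IS_ERR()/NULL checks to a returned pointer (the inconsistency the old interface invited — some implementations returned NULL, others ERR_PTR()). A sketch of both sides of that contract, as a userspace model whose names mirror but are not the kernel API:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct map_model {
	void *vaddr;
};

static void map_set_vaddr(struct map_model *map, void *vaddr)
{
	map->vaddr = vaddr;
}

/* vmap-style callee: errno-style int return, address via out-parameter. */
static int obj_vmap(void *pages, struct map_model *map)
{
	if (!pages) /* stand-in for a failed vmap() */
		return -ENOMEM;
	map_set_vaddr(map, pages);
	return 0;
}

/* Caller pattern after the conversion: check ret, then use map->vaddr. */
static void *use_mapping(void *pages)
{
	struct map_model map;

	if (obj_vmap(pages, &map))
		return NULL;
	return map.vaddr;
}
```

This is why the vgem and rockchip hunks can drop the IS_ERR()/NULL ambiguity entirely: the error path and the address path no longer share one return value.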
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4f34ef34ba60..74db5a840bed 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> return gem_mmap_obj(xen_obj, vma);
> }
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
> {
> struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> + void *vaddr;
>
> if (!xen_obj->pages)
> - return NULL;
> + return -ENOMEM;
>
> /* Please see comment in gem_mmap_obj on mapping and attributes. */
> - return vmap(xen_obj->pages, xen_obj->num_pages,
> - VM_MAP, PAGE_KERNEL);
> + vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
> + VM_MAP, PAGE_KERNEL);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr)
> + struct dma_buf_map *map)
> {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> }
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> index a39675fa31b2..a4e67d0a149c 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> @@ -12,6 +12,7 @@
> #define __XEN_DRM_FRONT_GEM_H
>
> struct dma_buf_attachment;
> +struct dma_buf_map;
> struct drm_device;
> struct drm_gem_object;
> struct file;
> @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
>
> int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
> + struct dma_buf_map *map);
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr);
> + struct dma_buf_map *map);
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> struct vm_area_struct *vma);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index c38dd35da00b..5e6daa1c982f 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -39,6 +39,7 @@
>
> #include <drm/drm_vma_manager.h>
>
> +struct dma_buf_map;
> struct drm_gem_object;
>
> /**
> @@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void *(*vmap)(struct drm_gem_object *obj);
> + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @vunmap:
> @@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
> + void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @mmap:
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index a064b0d1c480..caf98b9cf4b4 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
> struct sg_table *sgt);
> int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> struct drm_gem_object *
> drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5381f0c8cf6f..3449a0353fe0 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_object *obj);
> void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
>
> diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
> index 128f88174d32..c0d28ba0f5c9 100644
> --- a/include/drm/drm_gem_vram_helper.h
> +++ b/include/drm/drm_gem_vram_helper.h
> @@ -10,6 +10,7 @@
> #include <drm/ttm/ttm_bo_api.h>
> #include <drm/ttm/ttm_bo_driver.h>
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kernel.h> /* for container_of() */
>
> struct drm_mode_create_dumb;
> @@ -29,9 +30,8 @@ struct vm_area_struct;
>
> /**
> * struct drm_gem_vram_object - GEM object backed by VRAM
> - * @gem: GEM object
> * @bo: TTM buffer object
> - * @kmap: Mapping information for @bo
> + * @map: Mapping information for @bo
> * @placement: TTM placement information. Supported placements are \
> %TTM_PL_VRAM and %TTM_PL_SYSTEM
> * @placements: TTM placement information.
> @@ -50,15 +50,15 @@ struct vm_area_struct;
> */
> struct drm_gem_vram_object {
> struct ttm_buffer_object bo;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
>
> /**
> - * @kmap_use_count:
> + * @vmap_use_count:
> *
> * Reference count on the virtual address.
> * The address are un-mapped when the count reaches zero.
> */
> - unsigned int kmap_use_count;
> + unsigned int vmap_use_count;
>
> /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
> struct ttm_placement placement;
> @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
> s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
> int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
> int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
>
> int drm_gem_vram_fill_create_dumb(struct drm_file *file,
> struct drm_device *dev,
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
* Re: [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
@ 2020-10-15 14:21 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:21 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
On 15.10.20 at 14:38, Thomas Zimmermann wrote:
> This patch replaces vmap/vunmap's use of raw pointers in GEM object
> functions with instances of struct dma_buf_map. GEM backends are
> converted as well. For most of them, this simply changes the returned type.
>
> TTM-based drivers now return information about the location of the memory,
> either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
> et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
> implementing their own vmap callbacks.
>
> v4:
> * use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
> * fix a trailing { in drm_gem_vmap()
> * remove several empty functions instead of converting them (Daniel)
> * comment uses of raw pointers with a TODO (Daniel)
> * TODO list: convert more helpers to use struct dma_buf_map
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
The amdgpu changes look good to me, but I can't fully judge the other stuff.
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> Documentation/gpu/todo.rst | 18 ++++
> drivers/gpu/drm/Kconfig | 2 +
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
> drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
> drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
> drivers/gpu/drm/ast/ast_drv.h | 7 +-
> drivers/gpu/drm/drm_gem.c | 23 +++--
> drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
> drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
> drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
> drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
> drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
> drivers/gpu/drm/lima/lima_gem.c | 6 +-
> drivers/gpu/drm/lima/lima_sched.c | 11 +-
> drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
> drivers/gpu/drm/nouveau/Kconfig | 1 +
> drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
> drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
> drivers/gpu/drm/qxl/qxl_display.c | 11 +-
> drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
> drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
> drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
> drivers/gpu/drm/qxl/qxl_object.h | 2 +-
> drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
> drivers/gpu/drm/radeon/radeon.h | 1 -
> drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
> drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
> drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
> drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
> drivers/gpu/drm/tiny/cirrus.c | 10 +-
> drivers/gpu/drm/tiny/gm12u320.c | 10 +-
> drivers/gpu/drm/udl/udl_modeset.c | 8 +-
> drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
> drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
> drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
> drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
> drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
> drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
> include/drm/drm_gem.h | 5 +-
> include/drm/drm_gem_cma_helper.h | 2 +-
> include/drm/drm_gem_shmem_helper.h | 4 +-
> include/drm/drm_gem_vram_helper.h | 14 +--
> 47 files changed, 321 insertions(+), 295 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 700637e25ecd..7e6fc3c04add 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
>
> Level: Intermediate
>
> +Use struct dma_buf_map throughout codebase
> +------------------------------------------
> +
> +Pointers to shared device memory are stored in struct dma_buf_map. Each
> +instance knows whether it refers to system or I/O memory. Most of the DRM-wide
> +interfaces have been converted to use struct dma_buf_map, but implementations
> +often still use raw pointers.
> +
> +The task is to use struct dma_buf_map where it makes sense.
> +
> +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
> +* TTM might benefit from using struct dma_buf_map internally.
> +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
> +
> +Level: Intermediate
> +
>
> Core refactorings
> =================
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 147d61b9674e..319839b87d37 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -239,6 +239,7 @@ config DRM_RADEON
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> @@ -259,6 +260,7 @@ config DRM_AMDGPU
> select DRM_KMS_HELPER
> select DRM_SCHED
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 5b465ab774d1..e5919efca870 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -41,42 +41,6 @@
> #include <linux/dma-fence-array.h>
> #include <linux/pci-p2pdma.h>
>
> -/**
> - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
> - * @obj: GEM BO
> - *
> - * Sets up an in-kernel virtual mapping of the BO's memory.
> - *
> - * Returns:
> - * The virtual address of the mapping or an error pointer.
> - */
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -/**
> - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
> - * @obj: GEM BO
> - * @vaddr: Virtual address (unused)
> - *
> - * Tears down the in-kernel virtual mapping of the BO's memory.
> - */
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> /**
> * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
> * @obj: GEM BO
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> index 2c5c84a06bb9..39b5b9616fd8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
> struct dma_buf *dma_buf);
> bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
> struct amdgpu_bo *bo);
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index be08a63ef58c..576659827e74 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -33,6 +33,7 @@
>
> #include <drm/amdgpu_drm.h>
> #include <drm/drm_debugfs.h>
> +#include <drm/drm_gem_ttm_helper.h>
>
> #include "amdgpu.h"
> #include "amdgpu_display.h"
> @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
> .open = amdgpu_gem_object_open,
> .close = amdgpu_gem_object_close,
> .export = amdgpu_gem_prime_export,
> - .vmap = amdgpu_gem_prime_vmap,
> - .vunmap = amdgpu_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> index 132e5f955180..01296ef0d673 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> @@ -100,7 +100,6 @@ struct amdgpu_bo {
> struct amdgpu_bo *parent;
> struct amdgpu_bo *shadow;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> struct amdgpu_mn *mn;
>
>
> diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
> index e0f4613918ad..742d43a7edf4 100644
> --- a/drivers/gpu/drm/ast/ast_cursor.c
> +++ b/drivers/gpu/drm/ast/ast_cursor.c
> @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
>
> for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
> struct drm_device *dev = &ast->base;
> size_t size, i;
> struct drm_gem_vram_object *gbo;
> - void __iomem *vaddr;
> + struct dma_buf_map map;
> int ret;
>
> size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
> @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
> - vaddr = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(vaddr)) {
> - ret = PTR_ERR(vaddr);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
>
> ast->cursor.gbo[i] = gbo;
> - ast->cursor.vaddr[i] = vaddr;
> + ast->cursor.map[i] = map;
> }
>
> return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
> @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
> while (i) {
> --i;
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> {
> struct drm_device *dev = &ast->base;
> struct drm_gem_vram_object *gbo;
> + struct dma_buf_map map;
> int ret;
> void *src;
> void __iomem *dst;
> @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> ret = drm_gem_vram_pin(gbo, 0);
> if (ret)
> return ret;
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> - ret = PTR_ERR(src);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret)
> goto err_drm_gem_vram_unpin;
> - }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
>
> /* do data transfer to cursor BO */
> update_cursor_image(dst, src, fb->width, fb->height);
>
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
> drm_gem_vram_unpin(gbo);
>
> return 0;
> @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
> u8 __iomem *sig;
> u8 jreg;
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr;
>
> sig = dst + AST_HWC_SIZE;
> writel(x, sig + AST_HWC_SIGNATURE_X);
> diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
> index 467049ca8430..f963141dd851 100644
> --- a/drivers/gpu/drm/ast/ast_drv.h
> +++ b/drivers/gpu/drm/ast/ast_drv.h
> @@ -28,10 +28,11 @@
> #ifndef __AST_DRV_H__
> #define __AST_DRV_H__
>
> -#include <linux/types.h>
> -#include <linux/io.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/i2c.h>
> #include <linux/i2c-algo-bit.h>
> +#include <linux/io.h>
> +#include <linux/types.h>
>
> #include <drm/drm_connector.h>
> #include <drm/drm_crtc.h>
> @@ -131,7 +132,7 @@ struct ast_private {
>
> struct {
> struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
> - void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
> + struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
> unsigned int next_index;
> } cursor;
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 1da67d34e55d..a89ad4570e3c 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -36,6 +36,7 @@
> #include <linux/pagemap.h>
> #include <linux/shmem_fs.h>
> #include <linux/dma-buf.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/mem_encrypt.h>
> #include <linux/pagevec.h>
>
> @@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>
> void *drm_gem_vmap(struct drm_gem_object *obj)
> {
> - void *vaddr;
> + struct dma_buf_map map;
> + int ret;
>
> - if (obj->funcs->vmap)
> - vaddr = obj->funcs->vmap(obj);
> - else
> - vaddr = ERR_PTR(-EOPNOTSUPP);
> + if (!obj->funcs->vmap)
> + return ERR_PTR(-EOPNOTSUPP);
>
> - if (!vaddr)
> - vaddr = ERR_PTR(-ENOMEM);
> + ret = obj->funcs->vmap(obj, &map);
> + if (ret)
> + return ERR_PTR(ret);
> + else if (dma_buf_map_is_null(&map))
> + return ERR_PTR(-ENOMEM);
>
> - return vaddr;
> + return map.vaddr;
> }
>
> void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
> {
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> +
> if (!vaddr)
> return;
>
> if (obj->funcs->vunmap)
> - obj->funcs->vunmap(obj, vaddr);
> + obj->funcs->vunmap(obj, &map);
> }
>
> /**
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index d527485ea0b7..b57e3e9222f0 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
> * address space
> * @obj: GEM object
> + * @map: Returns the kernel virtual address of the CMA GEM object's backing
> + * store.
> *
> * This function maps a buffer exported via DRM PRIME into the kernel's
> * virtual address space. Since the CMA buffers are already mapped into the
> @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * driver's &drm_gem_object_funcs.vmap callback.
> *
> * Returns:
> - * The kernel virtual address of the CMA GEM object's backing store.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
>
> - return cma_obj->vaddr;
> + dma_buf_map_set_vaddr(map, cma_obj->vaddr);
> +
> + return 0;
> }
> EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index fb11df7aced5..5553f58f68f3 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin);
>
> -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map;
> int ret = 0;
>
> - if (shmem->vmap_use_count++ > 0)
> - return shmem->vaddr;
> + if (shmem->vmap_use_count++ > 0) {
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> + return 0;
> + }
>
> if (obj->import_attach) {
> - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> - if (!ret)
> - shmem->vaddr = map.vaddr;
> + ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> + if (!ret) {
> + if (WARN_ON(map->is_iomem)) {
> + ret = -EIO;
> + goto err_put_pages;
> + }
> + shmem->vaddr = map->vaddr;
> + }
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> VM_MAP, prot);
> if (!shmem->vaddr)
> ret = -ENOMEM;
> + else
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> }
>
> if (ret) {
> @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> goto err_put_pages;
> }
>
> - return shmem->vaddr;
> + return 0;
>
> err_put_pages:
> if (!obj->import_attach)
> @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> err_zero_use:
> shmem->vmap_use_count = 0;
>
> - return ERR_PTR(ret);
> + return ret;
> }
>
> /*
> * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + * store.
> *
> * This function makes sure that a contiguous kernel virtual address mapping
> * exists for the buffer backing the shmem GEM object.
> @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> * Returns:
> * 0 on success or a negative error code on failure.
> */
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> - void *vaddr;
> int ret;
>
> ret = mutex_lock_interruptible(&shmem->vmap_lock);
> if (ret)
> - return ERR_PTR(ret);
> - vaddr = drm_gem_shmem_vmap_locked(shmem);
> + return ret;
> + ret = drm_gem_shmem_vmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
>
> - return vaddr;
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_shmem_vmap);
>
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> + struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
>
> if (WARN_ON_ONCE(!shmem->vmap_use_count))
> return;
> @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> return;
>
> if (obj->import_attach)
> - dma_buf_vunmap(obj->import_attach->dmabuf, &map);
> + dma_buf_vunmap(obj->import_attach->dmabuf, map);
> else
> vunmap(shmem->vaddr);
>
> @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> /*
> * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
> *
> * This function cleans up a kernel virtual address mapping acquired by
> * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> * also be called by drivers directly, in which case it will hide the
> * differences between dma-buf imported and natively allocated objects.
> */
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> mutex_lock(&shmem->vmap_lock);
> - drm_gem_shmem_vunmap_locked(shmem);
> + drm_gem_shmem_vunmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
> }
> EXPORT_SYMBOL(drm_gem_shmem_vunmap);
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 2d5ed30518f1..4d8553b28558 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -1,5 +1,6 @@
> // SPDX-License-Identifier: GPL-2.0-or-later
>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
>
> #include <drm/drm_debugfs.h>
> @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
> * up; only release the GEM object.
> */
>
> - WARN_ON(gbo->kmap_use_count);
> - WARN_ON(gbo->kmap.virtual);
> + WARN_ON(gbo->vmap_use_count);
> + WARN_ON(dma_buf_map_is_set(&gbo->map));
>
> drm_gem_object_release(&gbo->bo.base);
> }
> @@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
> }
> EXPORT_SYMBOL(drm_gem_vram_unpin);
>
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
> +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> int ret;
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> - bool is_iomem;
>
> - if (gbo->kmap_use_count > 0)
> + if (gbo->vmap_use_count > 0)
> goto out;
>
> - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
> + ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> out:
> - ++gbo->kmap_use_count;
> - return ttm_kmap_obj_virtual(kmap, &is_iomem);
> + ++gbo->vmap_use_count;
> + *map = gbo->map;
> +
> + return 0;
> }
>
> -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> - if (WARN_ON_ONCE(!gbo->kmap_use_count))
> + struct drm_device *dev = gbo->bo.base.dev;
> +
> + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
> return;
> - if (--gbo->kmap_use_count > 0)
> +
> + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
> + return; /* BUG: map not mapped from this BO */
> +
> + if (--gbo->vmap_use_count > 0)
> return;
>
> /*
> @@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> /**
> * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
> * space
> - * @gbo: The GEM VRAM object to map
> + * @gbo: The GEM VRAM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * The vmap function pins a GEM VRAM object to its current location, either
> * system or video memory, and maps its buffer into kernel address space.
> @@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> * unmap and unpin the GEM VRAM object.
> *
> * Returns:
> - * The buffer's virtual address on success, or
> - * an ERR_PTR()-encoded error code otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
> - void *base;
>
> ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> ret = drm_gem_vram_pin_locked(gbo, 0);
> if (ret)
> goto err_ttm_bo_unreserve;
> - base = drm_gem_vram_kmap_locked(gbo);
> - if (IS_ERR(base)) {
> - ret = PTR_ERR(base);
> + ret = drm_gem_vram_kmap_locked(gbo, map);
> + if (ret)
> goto err_drm_gem_vram_unpin_locked;
> - }
>
> ttm_bo_unreserve(&gbo->bo);
>
> - return base;
> + return 0;
>
> err_drm_gem_vram_unpin_locked:
> drm_gem_vram_unpin_locked(gbo);
> err_ttm_bo_unreserve:
> ttm_bo_unreserve(&gbo->bo);
> - return ERR_PTR(ret);
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_vram_vmap);
>
> /**
> * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
> - * @gbo: The GEM VRAM object to unmap
> - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
> + * @gbo: The GEM VRAM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> *
> * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
> * the documentation for drm_gem_vram_vmap() for more information.
> */
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
>
> @@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
> return;
>
> - drm_gem_vram_kunmap_locked(gbo);
> + drm_gem_vram_kunmap_locked(gbo, map);
> drm_gem_vram_unpin_locked(gbo);
>
> ttm_bo_unreserve(&gbo->bo);
> @@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
> bool evict,
> struct ttm_resource *new_mem)
> {
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> + struct ttm_buffer_object *bo = &gbo->bo;
> + struct drm_device *dev = bo->base.dev;
>
> - if (WARN_ON_ONCE(gbo->kmap_use_count))
> + if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
> return;
>
> - if (!kmap->virtual)
> - return;
> - ttm_bo_kunmap(kmap);
> - kmap->virtual = NULL;
> + ttm_bo_vunmap(bo, &gbo->map);
> }
>
> static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
> @@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
> }
>
> /**
> - * drm_gem_vram_object_vmap() - \
> - Implements &struct drm_gem_object_funcs.vmap
> - * @gem: The GEM object to map
> + * drm_gem_vram_object_vmap() -
> + * Implements &struct drm_gem_object_funcs.vmap
> + * @gem: The GEM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * Returns:
> - * The buffers virtual address on success, or
> - * NULL otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
> +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> - void *base;
>
> - base = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(base))
> - return NULL;
> - return base;
> + return drm_gem_vram_vmap(gbo, map);
> }
>
> /**
> - * drm_gem_vram_object_vunmap() - \
> - Implements &struct drm_gem_object_funcs.vunmap
> - * @gem: The GEM object to unmap
> - * @vaddr: The mapping's base address
> + * drm_gem_vram_object_vunmap() -
> + * Implements &struct drm_gem_object_funcs.vunmap
> + * @gem: The GEM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> */
> -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
> - void *vaddr)
> +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
>
> - drm_gem_vram_vunmap(gbo, vaddr);
> + drm_gem_vram_vunmap(gbo, map);
> }
>
> /*
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 9682c26d89bb..f5be627e1de0 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
> int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
> struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index a6d9932a32ae..bc2543dd987d 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
> }
>
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> - return etnaviv_gem_vmap(obj);
> + void *vaddr = etnaviv_gem_vmap(obj);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
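For reference, the conversion pattern repeated across etnaviv, rockchip, vgem and xen can be sketched in userspace. The struct and helpers below are simplified stand-ins for the kernel's <linux/dma-buf-map.h> (the real struct keeps the I/O pointer as `void __iomem *`); this is an illustration of the interface change, not kernel code.

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for struct dma_buf_map from <linux/dma-buf-map.h>;
 * the kernel version stores vaddr_iomem as void __iomem *. */
struct dma_buf_map {
	union {
		void *vaddr;       /* mapping in system memory */
		void *vaddr_iomem; /* mapping in I/O memory */
	};
	bool is_iomem;
};

static void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* Stand-in for a legacy vmap helper that returns a pointer or NULL. */
static void *legacy_vmap(void *backing)
{
	return backing;
}

/* The converted interface: return an errno code and fill in *map on
 * success, mirroring the driver changes in this patch. */
static int converted_vmap(void *backing, struct dma_buf_map *map)
{
	void *vaddr = legacy_vmap(backing);

	if (!vaddr)
		return -ENOMEM;
	dma_buf_map_set_vaddr(map, vaddr);
	return 0;
}
```

Callers then test the return value instead of IS_ERR()/NULL and read map.vaddr, which is exactly the shape of the call-site updates in lima, mgag200, cirrus, gm12u320 and udl.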
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 11223fe348df..832e5280a6ed 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
> return drm_gem_shmem_pin(obj);
> }
>
> -static void *lima_gem_vmap(struct drm_gem_object *obj)
> +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct lima_bo *bo = to_lima_bo(obj);
>
> if (bo->heap_size)
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
>
> - return drm_gem_shmem_vmap(obj);
> + return drm_gem_shmem_vmap(obj, map);
> }
>
> static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index dc6df9e9a40d..a070a85f8f36 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -1,6 +1,7 @@
> // SPDX-License-Identifier: GPL-2.0 OR MIT
> /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kthread.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> struct lima_dump_chunk_buffer *buffer_chunk;
> u32 size, task_size, mem_size;
> int i;
> + struct dma_buf_map map;
> + int ret;
>
> mutex_lock(&dev->error_task_list_lock);
>
> @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> } else {
> buffer_chunk->size = lima_bo_size(bo);
>
> - data = drm_gem_shmem_vmap(&bo->base.base);
> - if (IS_ERR_OR_NULL(data)) {
> + ret = drm_gem_shmem_vmap(&bo->base.base, &map);
> + if (ret) {
> kvfree(et);
> goto out;
> }
>
> - memcpy(buffer_chunk + 1, data, buffer_chunk->size);
> + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
>
> - drm_gem_shmem_vunmap(&bo->base.base, data);
> + drm_gem_shmem_vunmap(&bo->base.base, &map);
> }
>
> buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 38672f9e5c4f..8ef76769b97f 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -9,6 +9,7 @@
> */
>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_atomic_helper.h>
> #include <drm/drm_atomic_state_helper.h>
> @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
> struct drm_rect *clip)
> {
> struct drm_device *dev = &mdev->base;
> + struct dma_buf_map map;
> void *vmap;
> + int ret;
>
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (drm_WARN_ON(dev, !vmap))
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (drm_WARN_ON(dev, ret))
> return; /* BUG: SHMEM BO should always be vmapped */
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
>
> /* Always scanout image at VRAM offset 0 */
> mgag200_set_startadd(mdev, (u32)0);
> diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
> index 5dec1e5694b7..9436310d0854 100644
> --- a/drivers/gpu/drm/nouveau/Kconfig
> +++ b/drivers/gpu/drm/nouveau/Kconfig
> @@ -6,6 +6,7 @@ config DRM_NOUVEAU
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
> select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
> select X86_PLATFORM_DEVICES if ACPI && X86
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
> index 641ef6298a0e..6045b85a762a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
> @@ -39,8 +39,6 @@ struct nouveau_bo {
> unsigned mode;
>
> struct nouveau_drm_tile *tile;
> -
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> };
>
> static inline struct nouveau_bo *
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index 9a421c3949de..f942b526b0a5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -24,6 +24,8 @@
> *
> */
>
> +#include <drm/drm_gem_ttm_helper.h>
> +
> #include "nouveau_drv.h"
> #include "nouveau_dma.h"
> #include "nouveau_fence.h"
> @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
> .pin = nouveau_gem_prime_pin,
> .unpin = nouveau_gem_prime_unpin,
> .get_sg_table = nouveau_gem_prime_get_sg_table,
> - .vmap = nouveau_gem_prime_vmap,
> - .vunmap = nouveau_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> int
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
> index b35c180322e2..3b919c7c931c 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
> @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
> extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
> extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
> struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
> -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
> -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
>
> #endif
> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
> index a8264aebf3d4..2f16b5249283 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
> }
>
> -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
> - &nvbo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return nvbo->dma_buf_vmap.virtual;
> -}
> -
> -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> -
> - ttm_bo_kunmap(&nvbo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> index fdbc8d949135..5ab03d605f57 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> @@ -5,6 +5,7 @@
> #include <drm/drm_gem_shmem_helper.h>
> #include <drm/panfrost_drm.h>
> #include <linux/completion.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/iopoll.h>
> #include <linux/pm_runtime.h>
> #include <linux/slab.h>
> @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map;
> struct drm_gem_shmem_object *bo;
> u32 cfg, as;
> int ret;
> @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> goto err_close_bo;
> }
>
> - perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
> - if (IS_ERR(perfcnt->buf)) {
> - ret = PTR_ERR(perfcnt->buf);
> + ret = drm_gem_shmem_vmap(&bo->base, &map);
> + if (ret)
> goto err_put_mapping;
> - }
> + perfcnt->buf = map.vaddr;
>
> /*
> * Invalidate the cache and clear the counters to start from a fresh
> @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> return 0;
>
> err_vunmap:
> - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&bo->base, &map);
> err_put_mapping:
> panfrost_gem_mapping_put(perfcnt->mapping);
> err_close_bo:
> @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
>
> if (user != perfcnt->user)
> return -EINVAL;
> @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
>
> perfcnt->user = NULL;
> - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
> perfcnt->buf = NULL;
> panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
> panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
> diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
> index 45fd76e04bdc..e165fa9b2089 100644
> --- a/drivers/gpu/drm/qxl/qxl_display.c
> +++ b/drivers/gpu/drm/qxl/qxl_display.c
> @@ -25,6 +25,7 @@
>
> #include <linux/crc32.h>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_drv.h>
> #include <drm/drm_atomic.h>
> @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> struct drm_gem_object *obj;
> struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
> int ret;
> + struct dma_buf_map user_map;
> + struct dma_buf_map cursor_map;
> void *user_ptr;
> int size = 64*64*4;
>
> @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> user_bo = gem_to_qxl_bo(obj);
>
> /* pinning is done in the prepare/cleanup framevbuffer */
> - ret = qxl_bo_kmap(user_bo, &user_ptr);
> + ret = qxl_bo_kmap(user_bo, &user_map);
> if (ret)
> goto out_free_release;
> + user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_alloc_bo_reserved(qdev, release,
> sizeof(struct qxl_cursor) + size,
> @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> if (ret)
> goto out_unpin;
>
> - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
> + ret = qxl_bo_kmap(cursor_bo, &cursor_map);
> if (ret)
> goto out_backoff;
>
> @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> {
> int ret;
> struct drm_gem_object *gobj;
> + struct dma_buf_map map;
> int monitors_config_size = sizeof(struct qxl_monitors_config) +
> qxl_num_crtc * sizeof(struct qxl_head);
>
> @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> if (ret)
> return ret;
>
> - qxl_bo_kmap(qdev->monitors_config_bo, NULL);
> + qxl_bo_kmap(qdev->monitors_config_bo, &map);
>
> qdev->monitors_config = qdev->monitors_config_bo->kptr;
> qdev->ram_header->monitors_config =
> diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
> index 3599db096973..7b7acb910780 100644
> --- a/drivers/gpu/drm/qxl/qxl_draw.c
> +++ b/drivers/gpu/drm/qxl/qxl_draw.c
> @@ -20,6 +20,8 @@
> * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> */
>
> +#include <linux/dma-buf-map.h>
> +
> #include <drm/drm_fourcc.h>
>
> #include "qxl_drv.h"
> @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
> unsigned int num_clips,
> struct qxl_bo *clips_bo)
> {
> + struct dma_buf_map map;
> struct qxl_clip_rects *dev_clips;
> int ret;
>
> - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
> - if (ret) {
> + ret = qxl_bo_kmap(clips_bo, &map);
> + if (ret)
> return NULL;
> - }
> + dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
> +
> dev_clips->num_rects = num_clips;
> dev_clips->chunk.next_chunk = 0;
> dev_clips->chunk.prev_chunk = 0;
> @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> int stride = fb->pitches[0];
> /* depth is not actually interesting, we don't mask with it */
> int depth = fb->format->cpp[0] * 8;
> + struct dma_buf_map surface_map;
> uint8_t *surface_base;
> struct qxl_release *release;
> struct qxl_bo *clips_bo;
> @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> if (ret)
> goto out_release_backoff;
>
> - ret = qxl_bo_kmap(bo, (void **)&surface_base);
> + ret = qxl_bo_kmap(bo, &surface_map);
> if (ret)
> goto out_release_backoff;
> + surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_image_init(qdev, release, dimage, surface_base,
> left - dumb_shadow_offset,
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 3602e8b34189..eb437fea5d9e 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -30,6 +30,7 @@
> * Definitions taken from spice-protocol, plus kernel driver specific bits.
> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/dma-fence.h>
> #include <linux/firmware.h>
> #include <linux/platform_device.h>
> @@ -50,6 +51,8 @@
>
> #include "qxl_dev.h"
>
> +struct dma_buf_map;
> +
> #define DRIVER_AUTHOR "Dave Airlie"
>
> #define DRIVER_NAME "qxl"
> @@ -79,7 +82,7 @@ struct qxl_bo {
> /* Protected by tbo.reserved */
> struct ttm_place placements[3];
> struct ttm_placement placement;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
> void *kptr;
> unsigned int map_count;
> int type;
> @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
> void qxl_gem_object_close(struct drm_gem_object *obj,
> struct drm_file *file_priv);
> void qxl_bo_force_delete(struct qxl_device *qdev);
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
>
> /* qxl_dumb.c */
> int qxl_mode_dumb_create(struct drm_file *file_priv,
> @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
> struct drm_gem_object *qxl_gem_prime_import_sg_table(
> struct drm_device *dev, struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map);
> int qxl_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
> index 940e99354f49..755df4d8f95f 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.c
> +++ b/drivers/gpu/drm/qxl/qxl_object.c
> @@ -23,10 +23,12 @@
> * Alon Levy
> */
>
> +#include <linux/dma-buf-map.h>
> +#include <linux/io-mapping.h>
> +
> #include "qxl_drv.h"
> #include "qxl_object.h"
>
> -#include <linux/io-mapping.h>
> static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
> {
> struct qxl_bo *bo;
> @@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
> return 0;
> }
>
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
> +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
> {
> - bool is_iomem;
> int r;
>
> if (bo->kptr) {
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count++;
> - return 0;
> + goto out;
> }
> - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
> + r = ttm_bo_vmap(&bo->tbo, &bo->map);
> if (r)
> return r;
> - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count = 1;
> +
> + /* TODO: Remove kptr in favor of map everywhere. */
> + if (bo->map.is_iomem)
> + bo->kptr = (void *)bo->map.vaddr_iomem;
> + else
> + bo->kptr = bo->map.vaddr;
> +
> +out:
> + *map = bo->map;
> return 0;
> }
>
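Besides switching to ttm_bo_vmap(), the qxl rework caches the mapping in the BO and reference-counts it: the first kmap creates the mapping, later calls only bump map_count and hand back the cached map. A hedged userspace sketch of that logic, with simplified types and a hypothetical stand-in for ttm_bo_vmap()/ttm_bo_vunmap():

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for struct dma_buf_map and a qxl-style BO. */
struct dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

struct fake_bo {
	struct dma_buf_map map; /* cached mapping, like qxl_bo.map */
	void *kptr;             /* legacy pointer kept for older code paths */
	unsigned int map_count;
	void *backing;          /* stand-in for the TTM buffer object */
};

/* Hypothetical stand-in for ttm_bo_vmap(): "map" the backing storage. */
static int fake_ttm_bo_vmap(struct fake_bo *bo, struct dma_buf_map *map)
{
	if (!bo->backing)
		return -ENOMEM;
	map->vaddr = bo->backing;
	map->is_iomem = false;
	return 0;
}

/* Mirrors the reworked qxl_bo_kmap(): map once, then refcount. */
static int fake_bo_kmap(struct fake_bo *bo, struct dma_buf_map *map)
{
	int r;

	if (bo->kptr) {
		bo->map_count++;
		goto out;
	}
	r = fake_ttm_bo_vmap(bo, &bo->map);
	if (r)
		return r;
	bo->map_count = 1;
	bo->kptr = bo->map.vaddr;
out:
	*map = bo->map;
	return 0;
}

/* Mirrors qxl_bo_kunmap(): drop the mapping only on the last unmap. */
static void fake_bo_kunmap(struct fake_bo *bo)
{
	if (!bo->kptr)
		return;
	if (--bo->map_count > 0)
		return;
	bo->kptr = NULL;
	bo->map.vaddr = NULL; /* stand-in for ttm_bo_vunmap() */
}
```

The returned map is a copy of the cached one, so every caller sees the same vaddr/is_iomem pair regardless of who mapped first.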
> @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> void *rptr;
> int ret;
> struct io_mapping *map;
> + struct dma_buf_map bo_map;
>
> if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
> map = qdev->vram_mapping;
> @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> return rptr;
> }
>
> - ret = qxl_bo_kmap(bo, &rptr);
> + ret = qxl_bo_kmap(bo, &bo_map);
> if (ret)
> return NULL;
> + rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> rptr += page_offset * PAGE_SIZE;
> return rptr;
> @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
> if (bo->map_count > 0)
> return;
> bo->kptr = NULL;
> - ttm_bo_kunmap(&bo->kmap);
> + ttm_bo_vunmap(&bo->tbo, &bo->map);
> }
>
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
> diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
> index 09a5c818324d..ebf24c9d2bf2 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.h
> +++ b/drivers/gpu/drm/qxl/qxl_object.h
> @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
> bool kernel, bool pinned, u32 domain,
> struct qxl_surface *surf,
> struct qxl_bo **bo_ptr);
> -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
> +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
> extern void qxl_bo_kunmap(struct qxl_bo *bo);
> void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
> diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
> index 7d3816fca5a8..4aa949799446 100644
> --- a/drivers/gpu/drm/qxl/qxl_prime.c
> +++ b/drivers/gpu/drm/qxl/qxl_prime.c
> @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
> return ERR_PTR(-ENOSYS);
> }
>
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
> - void *ptr;
> int ret;
>
> - ret = qxl_bo_kmap(bo, &ptr);
> + ret = qxl_bo_kmap(bo, map);
> if (ret < 0)
> - return ERR_PTR(ret);
> + return ret;
>
> - return ptr;
> + return 0;
> }
>
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
>
> diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
> index 5d54bccebd4d..44cb5ee6fc20 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -509,7 +509,6 @@ struct radeon_bo {
> /* Constant after initialization */
> struct radeon_device *rdev;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> pid_t pid;
>
> #ifdef CONFIG_MMU_NOTIFIER
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 0ccd7213e41f..d2876ce3bc9e 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -31,6 +31,7 @@
> #include <drm/drm_debugfs.h>
> #include <drm/drm_device.h>
> #include <drm/drm_file.h>
> +#include <drm/drm_gem_ttm_helper.h>
> #include <drm/radeon_drm.h>
>
> #include "radeon.h"
> @@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
> struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
> int radeon_gem_prime_pin(struct drm_gem_object *obj);
> void radeon_gem_prime_unpin(struct drm_gem_object *obj);
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>
> static const struct drm_gem_object_funcs radeon_gem_object_funcs;
>
> @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
> .pin = radeon_gem_prime_pin,
> .unpin = radeon_gem_prime_unpin,
> .get_sg_table = radeon_gem_prime_get_sg_table,
> - .vmap = radeon_gem_prime_vmap,
> - .vunmap = radeon_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
> index b9de0e51c0be..088d39a51c0d 100644
> --- a/drivers/gpu/drm/radeon/radeon_prime.c
> +++ b/drivers/gpu/drm/radeon/radeon_prime.c
> @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
> }
>
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 7d5ebb10323b..7971f57436dd 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
> return ERR_PTR(ret);
> }
>
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> - if (rk_obj->pages)
> - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> - pgprot_writecombine(PAGE_KERNEL));
> + if (rk_obj->pages) {
> + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> + pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> + return 0;
> + }
>
> if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
> - return NULL;
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
>
> - return rk_obj->kvaddr;
> + return 0;
> }
>
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> if (rk_obj->pages) {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> return;
> }
>
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> index 7ffc541bea07..5a70a56cd406 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *
> rockchip_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg);
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /* drm driver mmap file operations */
> int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> index 744a8e337e41..c02e35ed6e76 100644
> --- a/drivers/gpu/drm/tiny/cirrus.c
> +++ b/drivers/gpu/drm/tiny/cirrus.c
> @@ -17,6 +17,7 @@
> */
>
> #include <linux/console.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
> #include <linux/pci.h>
>
> @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> struct drm_rect *rect)
> {
> struct cirrus_device *cirrus = to_cirrus(fb->dev);
> + struct dma_buf_map map;
> void *vmap;
> int idx, ret;
>
> @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> if (!drm_dev_enter(&cirrus->dev, &idx))
> goto out;
>
> - ret = -ENOMEM;
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (!vmap)
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret)
> goto out_dev_exit;
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (cirrus->cpp == fb->format->cpp[0])
> drm_fb_memcpy_dstclip(cirrus->vram,
> @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> else
> WARN_ON_ONCE("cpp mismatch");
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> ret = 0;
>
> out_dev_exit:
> diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
> index cc397671f689..12a890cea6e9 100644
> --- a/drivers/gpu/drm/tiny/gm12u320.c
> +++ b/drivers/gpu/drm/tiny/gm12u320.c
> @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> {
> int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
> struct drm_framebuffer *fb;
> + struct dma_buf_map map;
> void *vaddr;
> u8 *src;
>
> @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> y1 = gm12u320->fb_update.rect.y1;
> y2 = gm12u320->fb_update.rect.y2;
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> + GM12U320_ERR("failed to vmap fb: %d\n", ret);
> goto put_fb;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (fb->obj[0]->import_attach) {
> ret = dma_buf_begin_cpu_access(
> @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
> }
> vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> put_fb:
> drm_framebuffer_put(fb);
> gm12u320->fb_update.fb = NULL;
> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> index fef43f4e3bac..42eeba1dfdbf 100644
> --- a/drivers/gpu/drm/udl/udl_modeset.c
> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> struct urb *urb;
> struct drm_rect clip;
> int log_bpp;
> + struct dma_buf_map map;
> void *vaddr;
>
> ret = udl_log_cpp(fb->format->cpp[0]);
> @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> return ret;
> }
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> DRM_ERROR("failed to vmap fb\n");
> goto out_dma_buf_end_cpu_access;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> urb = udl_get_urb(dev);
> if (!urb)
> @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> ret = 0;
>
> out_drm_gem_shmem_vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> out_dma_buf_end_cpu_access:
> if (import_attach) {
> tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index 931c55126148..f268fb258c83 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -9,6 +9,8 @@
> * Michael Thayer <michael.thayer@oracle.com,
> * Hans de Goede <hdegoede@redhat.com>
> */
> +
> +#include <linux/dma-buf-map.h>
> #include <linux/export.h>
>
> #include <drm/drm_atomic.h>
> @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> u32 height = plane->state->crtc_h;
> size_t data_size, mask_size;
> u32 flags;
> + struct dma_buf_map map;
> + int ret;
> u8 *src;
>
> /*
> @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>
> vbox_crtc->cursor_enabled = true;
>
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> /*
> * BUG: we should have pinned the BO in prepare_fb().
> */
> @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> DRM_WARN("Could not map cursor bo, skipping update\n");
> return;
> }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> /*
> * The mask must be calculated based on the alpha
> @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> data_size = width * height * 4 + mask_size;
>
> copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
>
> flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
> VBOX_MOUSE_POINTER_ALPHA;
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index 557f0d1e6437..f290a9a942dc 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> return drm_gem_cma_prime_mmap(obj, vma);
> }
>
> -void *vc4_prime_vmap(struct drm_gem_object *obj)
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct vc4_bo *bo = to_vc4_bo(obj);
>
> if (bo->validated_shader) {
> DRM_DEBUG("mmaping of shader BOs not allowed.\n");
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
> }
>
> - return drm_gem_cma_prime_vmap(obj);
> + return drm_gem_cma_prime_vmap(obj, map);
> }
>
> struct drm_gem_object *
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index cc79b1aaa878..904f2c36c963 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *vc4_prime_vmap(struct drm_gem_object *obj);
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int vc4_bo_cache_init(struct drm_device *dev);
> void vc4_bo_cache_destroy(struct drm_device *dev);
> int vc4_bo_inc_usecnt(struct vc4_bo *bo);
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index fa54a6d1403d..b2aa26e1e4a2 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> return &obj->base;
> }
>
> -static void *vgem_prime_vmap(struct drm_gem_object *obj)
> +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> long n_pages = obj->size >> PAGE_SHIFT;
> struct page **pages;
> + void *vaddr;
>
> pages = vgem_pin_pages(bo);
> if (IS_ERR(pages))
> - return NULL;
> + return PTR_ERR(pages);
> +
> + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
>
> - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + return 0;
> }
>
> -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> vgem_unpin_pages(bo);
> }
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4f34ef34ba60..74db5a840bed 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> return gem_mmap_obj(xen_obj, vma);
> }
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
> {
> struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> + void *vaddr;
>
> if (!xen_obj->pages)
> - return NULL;
> + return -ENOMEM;
>
> /* Please see comment in gem_mmap_obj on mapping and attributes. */
> - return vmap(xen_obj->pages, xen_obj->num_pages,
> - VM_MAP, PAGE_KERNEL);
> + vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
> + VM_MAP, PAGE_KERNEL);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr)
> + struct dma_buf_map *map)
> {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> }
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> index a39675fa31b2..a4e67d0a149c 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> @@ -12,6 +12,7 @@
> #define __XEN_DRM_FRONT_GEM_H
>
> struct dma_buf_attachment;
> +struct dma_buf_map;
> struct drm_device;
> struct drm_gem_object;
> struct file;
> @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
>
> int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
> + struct dma_buf_map *map);
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr);
> + struct dma_buf_map *map);
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> struct vm_area_struct *vma);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index c38dd35da00b..5e6daa1c982f 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -39,6 +39,7 @@
>
> #include <drm/drm_vma_manager.h>
>
> +struct dma_buf_map;
> struct drm_gem_object;
>
> /**
> @@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void *(*vmap)(struct drm_gem_object *obj);
> + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @vunmap:
> @@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
> + void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @mmap:
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index a064b0d1c480..caf98b9cf4b4 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
> struct sg_table *sgt);
> int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> struct drm_gem_object *
> drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5381f0c8cf6f..3449a0353fe0 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_object *obj);
> void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
>
> diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
> index 128f88174d32..c0d28ba0f5c9 100644
> --- a/include/drm/drm_gem_vram_helper.h
> +++ b/include/drm/drm_gem_vram_helper.h
> @@ -10,6 +10,7 @@
> #include <drm/ttm/ttm_bo_api.h>
> #include <drm/ttm/ttm_bo_driver.h>
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kernel.h> /* for container_of() */
>
> struct drm_mode_create_dumb;
> @@ -29,9 +30,8 @@ struct vm_area_struct;
>
> /**
> * struct drm_gem_vram_object - GEM object backed by VRAM
> - * @gem: GEM object
> * @bo: TTM buffer object
> - * @kmap: Mapping information for @bo
> + * @map: Mapping information for @bo
> * @placement: TTM placement information. Supported placements are \
> %TTM_PL_VRAM and %TTM_PL_SYSTEM
> * @placements: TTM placement information.
> @@ -50,15 +50,15 @@ struct vm_area_struct;
> */
> struct drm_gem_vram_object {
> struct ttm_buffer_object bo;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
>
> /**
> - * @kmap_use_count:
> + * @vmap_use_count:
> *
> * Reference count on the virtual address.
> * The address are un-mapped when the count reaches zero.
> */
> - unsigned int kmap_use_count;
> + unsigned int vmap_use_count;
>
> /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
> struct ttm_placement placement;
> @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
> s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
> int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
> int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
>
> int drm_gem_vram_fill_create_dumb(struct drm_file *file,
> struct drm_device *dev,
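The vgem and xen hunks above follow the same conversion pattern: the vmap callback now fills in a caller-provided struct dma_buf_map and returns an errno code, instead of handing back a raw pointer (or NULL). A minimal userspace model of that pattern — the struct layout mirrors the kernel's <linux/dma-buf-map.h>, and the `demo_*` names and malloc() stand-ins are illustrative only:

```c
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/* Modeled after the kernel's struct dma_buf_map from <linux/dma-buf-map.h>:
 * the union holds either a system-memory or an I/O-memory pointer, and
 * is_iomem records which one is valid. */
struct dma_buf_map {
	union {
		void *vaddr;       /* system memory */
		void *vaddr_iomem; /* I/O memory (__iomem in the kernel) */
	};
	bool is_iomem;
};

static void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* Old-style callback: hands back a raw pointer, NULL on failure. */
static void *demo_vmap_old(size_t size)
{
	return malloc(size); /* stands in for vmap() of the object's pages */
}

/* New-style callback: fills in the caller-provided map and returns an
 * errno-style code, as the converted vgem/xen callbacks do. */
static int demo_vmap_new(size_t size, struct dma_buf_map *map)
{
	void *vaddr = malloc(size); /* again a stand-in for vmap() */

	if (!vaddr)
		return -ENOMEM;
	dma_buf_map_set_vaddr(map, vaddr);
	return 0;
}
```

The benefit is that a caller can branch on `map->is_iomem` and use I/O-specific accessors where needed, which the old `void *` return value could not express.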
* Re: [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
@ 2020-10-15 14:21 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:21 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> This patch replaces vmap/vunmap's use of raw pointers in the GEM object
> functions with instances of struct dma_buf_map. GEM backends are
> converted as well. For most of them, this simply changes the returned type.
>
> TTM-based drivers now return information about the location of the memory,
> either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
> et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
> implementing their own vmap callbacks.
>
> v4:
> * use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
> * fix a trailing { in drm_gem_vmap()
> * remove several empty functions instead of converting them (Daniel)
> * comment uses of raw pointers with a TODO (Daniel)
> * TODO list: convert more helpers to use struct dma_buf_map
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
The amdgpu changes look good to me, but I can't fully judge the other stuff.
Acked-by: Christian König <christian.koenig@amd.com>
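While the per-driver callbacks now take an int return and a struct dma_buf_map argument, the drm_gem.c hunk further down keeps the legacy void */ERR_PTR() interface for existing callers by converting between the two conventions. A minimal userspace model of that error-path conversion, with simplified stand-ins for the kernel's <linux/err.h> helpers (`demo_*` names are illustrative only):

```c
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel's <linux/err.h> pointer-error
 * encoding: small negative errno values live at the very top of the
 * address space, so they never collide with a real allocation. */
#define MAX_ERRNO 4095

static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }
static bool IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

struct dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

static bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
	return !map->vaddr;
}

/* Stand-in for a driver's converted vmap callback. */
static int demo_obj_vmap(struct dma_buf_map *map)
{
	map->vaddr = malloc(32);
	map->is_iomem = false;
	return map->vaddr ? 0 : -ENOMEM;
}

/* Mirrors the converted drm_gem_vmap(): legacy callers still receive a
 * raw pointer on success or an ERR_PTR()-encoded error on failure. */
static void *demo_legacy_vmap(void)
{
	struct dma_buf_map map;
	int ret = demo_obj_vmap(&map);

	if (ret)
		return ERR_PTR(ret);
	if (dma_buf_map_is_null(&map))
		return ERR_PTR(-ENOMEM);
	return map.vaddr;
}
```

This keeps the series incremental: backends move to the new signature first, while the wrapper shields untouched callers until they are converted in later patches.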
> ---
> Documentation/gpu/todo.rst | 18 ++++
> drivers/gpu/drm/Kconfig | 2 +
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
> drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
> drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
> drivers/gpu/drm/ast/ast_drv.h | 7 +-
> drivers/gpu/drm/drm_gem.c | 23 +++--
> drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
> drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
> drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
> drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
> drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
> drivers/gpu/drm/lima/lima_gem.c | 6 +-
> drivers/gpu/drm/lima/lima_sched.c | 11 +-
> drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
> drivers/gpu/drm/nouveau/Kconfig | 1 +
> drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
> drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
> drivers/gpu/drm/qxl/qxl_display.c | 11 +-
> drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
> drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
> drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
> drivers/gpu/drm/qxl/qxl_object.h | 2 +-
> drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
> drivers/gpu/drm/radeon/radeon.h | 1 -
> drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
> drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
> drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
> drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
> drivers/gpu/drm/tiny/cirrus.c | 10 +-
> drivers/gpu/drm/tiny/gm12u320.c | 10 +-
> drivers/gpu/drm/udl/udl_modeset.c | 8 +-
> drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
> drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
> drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
> drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
> drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
> drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
> include/drm/drm_gem.h | 5 +-
> include/drm/drm_gem_cma_helper.h | 2 +-
> include/drm/drm_gem_shmem_helper.h | 4 +-
> include/drm/drm_gem_vram_helper.h | 14 +--
> 47 files changed, 321 insertions(+), 295 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 700637e25ecd..7e6fc3c04add 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
>
> Level: Intermediate
>
> +Use struct dma_buf_map throughout codebase
> +------------------------------------------
> +
> +Pointers to shared device memory are stored in struct dma_buf_map. Each
> +instance knows whether it refers to system or I/O memory. Most of the DRM-wide
> +interface have been converted to use struct dma_buf_map, but implementations
> +often still use raw pointers.
> +
> +The task is to use struct dma_buf_map where it makes sense.
> +
> +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
> +* TTM might benefit from using struct dma_buf_map internally.
> +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
> +
> +Level: Intermediate
> +
>
> Core refactorings
> =================
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 147d61b9674e..319839b87d37 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -239,6 +239,7 @@ config DRM_RADEON
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> @@ -259,6 +260,7 @@ config DRM_AMDGPU
> select DRM_KMS_HELPER
> select DRM_SCHED
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 5b465ab774d1..e5919efca870 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -41,42 +41,6 @@
> #include <linux/dma-fence-array.h>
> #include <linux/pci-p2pdma.h>
>
> -/**
> - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
> - * @obj: GEM BO
> - *
> - * Sets up an in-kernel virtual mapping of the BO's memory.
> - *
> - * Returns:
> - * The virtual address of the mapping or an error pointer.
> - */
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -/**
> - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
> - * @obj: GEM BO
> - * @vaddr: Virtual address (unused)
> - *
> - * Tears down the in-kernel virtual mapping of the BO's memory.
> - */
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> /**
> * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
> * @obj: GEM BO
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> index 2c5c84a06bb9..39b5b9616fd8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
> struct dma_buf *dma_buf);
> bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
> struct amdgpu_bo *bo);
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index be08a63ef58c..576659827e74 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -33,6 +33,7 @@
>
> #include <drm/amdgpu_drm.h>
> #include <drm/drm_debugfs.h>
> +#include <drm/drm_gem_ttm_helper.h>
>
> #include "amdgpu.h"
> #include "amdgpu_display.h"
> @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
> .open = amdgpu_gem_object_open,
> .close = amdgpu_gem_object_close,
> .export = amdgpu_gem_prime_export,
> - .vmap = amdgpu_gem_prime_vmap,
> - .vunmap = amdgpu_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> index 132e5f955180..01296ef0d673 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> @@ -100,7 +100,6 @@ struct amdgpu_bo {
> struct amdgpu_bo *parent;
> struct amdgpu_bo *shadow;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> struct amdgpu_mn *mn;
>
>
> diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
> index e0f4613918ad..742d43a7edf4 100644
> --- a/drivers/gpu/drm/ast/ast_cursor.c
> +++ b/drivers/gpu/drm/ast/ast_cursor.c
> @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
>
> for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
> struct drm_device *dev = &ast->base;
> size_t size, i;
> struct drm_gem_vram_object *gbo;
> - void __iomem *vaddr;
> + struct dma_buf_map map;
> int ret;
>
> size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
> @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
> - vaddr = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(vaddr)) {
> - ret = PTR_ERR(vaddr);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
>
> ast->cursor.gbo[i] = gbo;
> - ast->cursor.vaddr[i] = vaddr;
> + ast->cursor.map[i] = map;
> }
>
> return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
> @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
> while (i) {
> --i;
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> {
> struct drm_device *dev = &ast->base;
> struct drm_gem_vram_object *gbo;
> + struct dma_buf_map map;
> int ret;
> void *src;
> void __iomem *dst;
> @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> ret = drm_gem_vram_pin(gbo, 0);
> if (ret)
> return ret;
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> - ret = PTR_ERR(src);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret)
> goto err_drm_gem_vram_unpin;
> - }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
>
> /* do data transfer to cursor BO */
> update_cursor_image(dst, src, fb->width, fb->height);
>
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
> drm_gem_vram_unpin(gbo);
>
> return 0;
> @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
> u8 __iomem *sig;
> u8 jreg;
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr;
>
> sig = dst + AST_HWC_SIZE;
> writel(x, sig + AST_HWC_SIGNATURE_X);
> diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
> index 467049ca8430..f963141dd851 100644
> --- a/drivers/gpu/drm/ast/ast_drv.h
> +++ b/drivers/gpu/drm/ast/ast_drv.h
> @@ -28,10 +28,11 @@
> #ifndef __AST_DRV_H__
> #define __AST_DRV_H__
>
> -#include <linux/types.h>
> -#include <linux/io.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/i2c.h>
> #include <linux/i2c-algo-bit.h>
> +#include <linux/io.h>
> +#include <linux/types.h>
>
> #include <drm/drm_connector.h>
> #include <drm/drm_crtc.h>
> @@ -131,7 +132,7 @@ struct ast_private {
>
> struct {
> struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
> - void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
> + struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
> unsigned int next_index;
> } cursor;
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 1da67d34e55d..a89ad4570e3c 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -36,6 +36,7 @@
> #include <linux/pagemap.h>
> #include <linux/shmem_fs.h>
> #include <linux/dma-buf.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/mem_encrypt.h>
> #include <linux/pagevec.h>
>
> @@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>
> void *drm_gem_vmap(struct drm_gem_object *obj)
> {
> - void *vaddr;
> + struct dma_buf_map map;
> + int ret;
>
> - if (obj->funcs->vmap)
> - vaddr = obj->funcs->vmap(obj);
> - else
> - vaddr = ERR_PTR(-EOPNOTSUPP);
> + if (!obj->funcs->vmap)
> + return ERR_PTR(-EOPNOTSUPP);
>
> - if (!vaddr)
> - vaddr = ERR_PTR(-ENOMEM);
> + ret = obj->funcs->vmap(obj, &map);
> + if (ret)
> + return ERR_PTR(ret);
> + else if (dma_buf_map_is_null(&map))
> + return ERR_PTR(-ENOMEM);
>
> - return vaddr;
> + return map.vaddr;
> }
>
> void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
> {
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> +
> if (!vaddr)
> return;
>
> if (obj->funcs->vunmap)
> - obj->funcs->vunmap(obj, vaddr);
> + obj->funcs->vunmap(obj, &map);
> }
>
> /**
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index d527485ea0b7..b57e3e9222f0 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
> * address space
> * @obj: GEM object
> + * @map: Returns the kernel virtual address of the CMA GEM object's backing
> + * store.
> *
> * This function maps a buffer exported via DRM PRIME into the kernel's
> * virtual address space. Since the CMA buffers are already mapped into the
> @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * driver's &drm_gem_object_funcs.vmap callback.
> *
> * Returns:
> - * The kernel virtual address of the CMA GEM object's backing store.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
>
> - return cma_obj->vaddr;
> + dma_buf_map_set_vaddr(map, cma_obj->vaddr);
> +
> + return 0;
> }
> EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index fb11df7aced5..5553f58f68f3 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin);
>
> -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map;
> int ret = 0;
>
> - if (shmem->vmap_use_count++ > 0)
> - return shmem->vaddr;
> + if (shmem->vmap_use_count++ > 0) {
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> + return 0;
> + }
>
> if (obj->import_attach) {
> - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> - if (!ret)
> - shmem->vaddr = map.vaddr;
> + ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> + if (!ret) {
> + if (WARN_ON(map->is_iomem)) {
> + ret = -EIO;
> + goto err_put_pages;
> + }
> + shmem->vaddr = map->vaddr;
> + }
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> VM_MAP, prot);
> if (!shmem->vaddr)
> ret = -ENOMEM;
> + else
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> }
>
> if (ret) {
> @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> goto err_put_pages;
> }
>
> - return shmem->vaddr;
> + return 0;
>
> err_put_pages:
> if (!obj->import_attach)
> @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> err_zero_use:
> shmem->vmap_use_count = 0;
>
> - return ERR_PTR(ret);
> + return ret;
> }
>
> /*
> * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + * store.
> *
> * This function makes sure that a contiguous kernel virtual address mapping
> * exists for the buffer backing the shmem GEM object.
> @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> * Returns:
> * 0 on success or a negative error code on failure.
> */
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> - void *vaddr;
> int ret;
>
> ret = mutex_lock_interruptible(&shmem->vmap_lock);
> if (ret)
> - return ERR_PTR(ret);
> - vaddr = drm_gem_shmem_vmap_locked(shmem);
> + return ret;
> + ret = drm_gem_shmem_vmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
>
> - return vaddr;
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_shmem_vmap);
>
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> + struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
>
> if (WARN_ON_ONCE(!shmem->vmap_use_count))
> return;
> @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> return;
>
> if (obj->import_attach)
> - dma_buf_vunmap(obj->import_attach->dmabuf, &map);
> + dma_buf_vunmap(obj->import_attach->dmabuf, map);
> else
> vunmap(shmem->vaddr);
>
> @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> /*
> * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
> *
> * This function cleans up a kernel virtual address mapping acquired by
> * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> * also be called by drivers directly, in which case it will hide the
> * differences between dma-buf imported and natively allocated objects.
> */
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> mutex_lock(&shmem->vmap_lock);
> - drm_gem_shmem_vunmap_locked(shmem);
> + drm_gem_shmem_vunmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
> }
> EXPORT_SYMBOL(drm_gem_shmem_vunmap);
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 2d5ed30518f1..4d8553b28558 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -1,5 +1,6 @@
> // SPDX-License-Identifier: GPL-2.0-or-later
>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
>
> #include <drm/drm_debugfs.h>
> @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
> * up; only release the GEM object.
> */
>
> - WARN_ON(gbo->kmap_use_count);
> - WARN_ON(gbo->kmap.virtual);
> + WARN_ON(gbo->vmap_use_count);
> + WARN_ON(dma_buf_map_is_set(&gbo->map));
>
> drm_gem_object_release(&gbo->bo.base);
> }
> @@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
> }
> EXPORT_SYMBOL(drm_gem_vram_unpin);
>
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
> +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> int ret;
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> - bool is_iomem;
>
> - if (gbo->kmap_use_count > 0)
> + if (gbo->vmap_use_count > 0)
> goto out;
>
> - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
> + ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> out:
> - ++gbo->kmap_use_count;
> - return ttm_kmap_obj_virtual(kmap, &is_iomem);
> + ++gbo->vmap_use_count;
> + *map = gbo->map;
> +
> + return 0;
> }
>
> -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> - if (WARN_ON_ONCE(!gbo->kmap_use_count))
> + struct drm_device *dev = gbo->bo.base.dev;
> +
> + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
> return;
> - if (--gbo->kmap_use_count > 0)
> +
> + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
> + return; /* BUG: map not mapped from this BO */
> +
> + if (--gbo->vmap_use_count > 0)
> return;
>
> /*
> @@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> /**
> * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
> * space
> - * @gbo: The GEM VRAM object to map
> + * @gbo: The GEM VRAM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * The vmap function pins a GEM VRAM object to its current location, either
> * system or video memory, and maps its buffer into kernel address space.
> @@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> * unmap and unpin the GEM VRAM object.
> *
> * Returns:
> - * The buffer's virtual address on success, or
> - * an ERR_PTR()-encoded error code otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
> - void *base;
>
> ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> ret = drm_gem_vram_pin_locked(gbo, 0);
> if (ret)
> goto err_ttm_bo_unreserve;
> - base = drm_gem_vram_kmap_locked(gbo);
> - if (IS_ERR(base)) {
> - ret = PTR_ERR(base);
> + ret = drm_gem_vram_kmap_locked(gbo, map);
> + if (ret)
> goto err_drm_gem_vram_unpin_locked;
> - }
>
> ttm_bo_unreserve(&gbo->bo);
>
> - return base;
> + return 0;
>
> err_drm_gem_vram_unpin_locked:
> drm_gem_vram_unpin_locked(gbo);
> err_ttm_bo_unreserve:
> ttm_bo_unreserve(&gbo->bo);
> - return ERR_PTR(ret);
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_vram_vmap);
>
> /**
> * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
> - * @gbo: The GEM VRAM object to unmap
> - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
> + * @gbo: The GEM VRAM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> *
> * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
> * the documentation for drm_gem_vram_vmap() for more information.
> */
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
>
> @@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
> return;
>
> - drm_gem_vram_kunmap_locked(gbo);
> + drm_gem_vram_kunmap_locked(gbo, map);
> drm_gem_vram_unpin_locked(gbo);
>
> ttm_bo_unreserve(&gbo->bo);
> @@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
> bool evict,
> struct ttm_resource *new_mem)
> {
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> + struct ttm_buffer_object *bo = &gbo->bo;
> + struct drm_device *dev = bo->base.dev;
>
> - if (WARN_ON_ONCE(gbo->kmap_use_count))
> + if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
> return;
>
> - if (!kmap->virtual)
> - return;
> - ttm_bo_kunmap(kmap);
> - kmap->virtual = NULL;
> + ttm_bo_vunmap(bo, &gbo->map);
> }
>
> static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
> @@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
> }
>
> /**
> - * drm_gem_vram_object_vmap() - \
> - Implements &struct drm_gem_object_funcs.vmap
> - * @gem: The GEM object to map
> + * drm_gem_vram_object_vmap() -
> + * Implements &struct drm_gem_object_funcs.vmap
> + * @gem: The GEM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * Returns:
> - * The buffers virtual address on success, or
> - * NULL otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
> +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> - void *base;
>
> - base = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(base))
> - return NULL;
> - return base;
> + return drm_gem_vram_vmap(gbo, map);
> }
>
> /**
> - * drm_gem_vram_object_vunmap() - \
> - Implements &struct drm_gem_object_funcs.vunmap
> - * @gem: The GEM object to unmap
> - * @vaddr: The mapping's base address
> + * drm_gem_vram_object_vunmap() -
> + * Implements &struct drm_gem_object_funcs.vunmap
> + * @gem: The GEM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> */
> -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
> - void *vaddr)
> +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
>
> - drm_gem_vram_vunmap(gbo, vaddr);
> + drm_gem_vram_vunmap(gbo, map);
> }
>
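For readers following the interface change here: the whole series pivots on struct dma_buf_map, which carries an address *plus* a flag saying whether it points into I/O or system memory, so callers can pick the right access method. A rough userspace sketch of the idea (mocked types for illustration only — the kernel's real definition in <linux/dma-buf-map.h> uses a `void __iomem *` for the I/O case):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Mock of the kernel's struct dma_buf_map, for illustration only:
 * one address plus a flag recording whether the mapping points into
 * I/O memory (e.g. a VRAM BAR) or plain system memory. */
struct dma_buf_map {
	union {
		void *vaddr;           /* valid when !is_iomem */
		uintptr_t vaddr_iomem; /* valid when is_iomem; __iomem in the kernel */
	};
	bool is_iomem;
};

/* Mirrors the kernel helper: set a system-memory address. */
static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* True if the map holds no address, whichever kind it is. */
static inline bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
	return map->is_iomem ? map->vaddr_iomem == 0 : map->vaddr == NULL;
}
```

This is why vmap callbacks throughout the series now fill in a map instead of returning a bare `void *`: a bare pointer cannot tell fbdev whether to use memcpy or memcpy_toio.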
> /*
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 9682c26d89bb..f5be627e1de0 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
> int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
> struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index a6d9932a32ae..bc2543dd987d 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
> }
>
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> - return etnaviv_gem_vmap(obj);
> + void *vaddr = etnaviv_gem_vmap(obj);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
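The etnaviv hunk above shows the conversion pattern repeated across most drivers in this series: a vmap callback that used to return a pointer (or NULL/ERR_PTR) now returns an errno and fills in the mapping descriptor. A minimal userspace sketch of that shape, with the kernel types and the `legacy_vmap`/`prime_vmap` names being made up here for illustration:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Mocked stand-in for the kernel's struct dma_buf_map. */
struct dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

static void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* Old-style helper: pointer on success, NULL on failure
 * (hypothetical, standing in for etnaviv_gem_vmap()). */
static void *legacy_vmap(bool ok)
{
	static char buf[16];

	return ok ? buf : NULL;
}

/* New-style wrapper, mirroring etnaviv_gem_prime_vmap() above:
 * translate a NULL result into -ENOMEM and fill in the map. */
static int prime_vmap(struct dma_buf_map *map, bool ok)
{
	void *vaddr = legacy_vmap(ok);

	if (!vaddr)
		return -ENOMEM;
	dma_buf_map_set_vaddr(map, vaddr);
	return 0;
}
```

The same shape recurs in the rockchip and vgem hunks further down; only the underlying allocation differs.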
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 11223fe348df..832e5280a6ed 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
> return drm_gem_shmem_pin(obj);
> }
>
> -static void *lima_gem_vmap(struct drm_gem_object *obj)
> +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct lima_bo *bo = to_lima_bo(obj);
>
> if (bo->heap_size)
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
>
> - return drm_gem_shmem_vmap(obj);
> + return drm_gem_shmem_vmap(obj, map);
> }
>
> static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index dc6df9e9a40d..a070a85f8f36 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -1,6 +1,7 @@
> // SPDX-License-Identifier: GPL-2.0 OR MIT
> /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kthread.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> struct lima_dump_chunk_buffer *buffer_chunk;
> u32 size, task_size, mem_size;
> int i;
> + struct dma_buf_map map;
> + int ret;
>
> mutex_lock(&dev->error_task_list_lock);
>
> @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> } else {
> buffer_chunk->size = lima_bo_size(bo);
>
> - data = drm_gem_shmem_vmap(&bo->base.base);
> - if (IS_ERR_OR_NULL(data)) {
> + ret = drm_gem_shmem_vmap(&bo->base.base, &map);
> + if (ret) {
> kvfree(et);
> goto out;
> }
>
> - memcpy(buffer_chunk + 1, data, buffer_chunk->size);
> + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
>
> - drm_gem_shmem_vunmap(&bo->base.base, data);
> + drm_gem_shmem_vunmap(&bo->base.base, &map);
> }
>
> buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 38672f9e5c4f..8ef76769b97f 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -9,6 +9,7 @@
> */
>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_atomic_helper.h>
> #include <drm/drm_atomic_state_helper.h>
> @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
> struct drm_rect *clip)
> {
> struct drm_device *dev = &mdev->base;
> + struct dma_buf_map map;
> void *vmap;
> + int ret;
>
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (drm_WARN_ON(dev, !vmap))
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (drm_WARN_ON(dev, ret))
> return; /* BUG: SHMEM BO should always be vmapped */
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
>
> /* Always scanout image at VRAM offset 0 */
> mgag200_set_startadd(mdev, (u32)0);
> diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
> index 5dec1e5694b7..9436310d0854 100644
> --- a/drivers/gpu/drm/nouveau/Kconfig
> +++ b/drivers/gpu/drm/nouveau/Kconfig
> @@ -6,6 +6,7 @@ config DRM_NOUVEAU
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
> select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
> select X86_PLATFORM_DEVICES if ACPI && X86
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
> index 641ef6298a0e..6045b85a762a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
> @@ -39,8 +39,6 @@ struct nouveau_bo {
> unsigned mode;
>
> struct nouveau_drm_tile *tile;
> -
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> };
>
> static inline struct nouveau_bo *
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index 9a421c3949de..f942b526b0a5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -24,6 +24,8 @@
> *
> */
>
> +#include <drm/drm_gem_ttm_helper.h>
> +
> #include "nouveau_drv.h"
> #include "nouveau_dma.h"
> #include "nouveau_fence.h"
> @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
> .pin = nouveau_gem_prime_pin,
> .unpin = nouveau_gem_prime_unpin,
> .get_sg_table = nouveau_gem_prime_get_sg_table,
> - .vmap = nouveau_gem_prime_vmap,
> - .vunmap = nouveau_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> int
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
> index b35c180322e2..3b919c7c931c 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
> @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
> extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
> extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
> struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
> -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
> -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
>
> #endif
> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
> index a8264aebf3d4..2f16b5249283 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
> }
>
> -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
> - &nvbo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return nvbo->dma_buf_vmap.virtual;
> -}
> -
> -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> -
> - ttm_bo_kunmap(&nvbo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> index fdbc8d949135..5ab03d605f57 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> @@ -5,6 +5,7 @@
> #include <drm/drm_gem_shmem_helper.h>
> #include <drm/panfrost_drm.h>
> #include <linux/completion.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/iopoll.h>
> #include <linux/pm_runtime.h>
> #include <linux/slab.h>
> @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map;
> struct drm_gem_shmem_object *bo;
> u32 cfg, as;
> int ret;
> @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> goto err_close_bo;
> }
>
> - perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
> - if (IS_ERR(perfcnt->buf)) {
> - ret = PTR_ERR(perfcnt->buf);
> + ret = drm_gem_shmem_vmap(&bo->base, &map);
> + if (ret)
> goto err_put_mapping;
> - }
> + perfcnt->buf = map.vaddr;
>
> /*
> * Invalidate the cache and clear the counters to start from a fresh
> @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> return 0;
>
> err_vunmap:
> - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&bo->base, &map);
> err_put_mapping:
> panfrost_gem_mapping_put(perfcnt->mapping);
> err_close_bo:
> @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
>
> if (user != perfcnt->user)
> return -EINVAL;
> @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
>
> perfcnt->user = NULL;
> - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
> perfcnt->buf = NULL;
> panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
> panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
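Worth noting in the panfrost disable path above: the driver only caches the plain address in perfcnt->buf, so it reconstructs a full mapping descriptor at unmap time with DMA_BUF_MAP_INIT_VADDR. A small userspace sketch of that reconstruction pattern (all types mocked; `perfcnt_disable` and `shmem_vunmap` are made-up names for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mocked stand-in for the kernel's struct dma_buf_map. */
struct dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

/* Mirrors the kernel macro: build a system-memory map from an address. */
#define DMA_BUF_MAP_INIT_VADDR(vaddr_) \
	{ .vaddr = (vaddr_), .is_iomem = false }

struct perfcnt {
	void *buf; /* cached kernel address of the vmapped BO */
};

/* Hypothetical vunmap stub that records what it was asked to unmap. */
static void *last_vunmapped;
static void shmem_vunmap(struct dma_buf_map *map)
{
	last_vunmapped = map->vaddr;
}

static void perfcnt_disable(struct perfcnt *perfcnt)
{
	/* Rebuild the descriptor from the cached address, as the patch
	 * does with DMA_BUF_MAP_INIT_VADDR(perfcnt->buf). */
	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);

	shmem_vunmap(&map);
	perfcnt->buf = NULL;
}
```

This only works because the buffer is known to live in system memory; a driver that might get an I/O mapping would have to cache the whole map instead.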
> diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
> index 45fd76e04bdc..e165fa9b2089 100644
> --- a/drivers/gpu/drm/qxl/qxl_display.c
> +++ b/drivers/gpu/drm/qxl/qxl_display.c
> @@ -25,6 +25,7 @@
>
> #include <linux/crc32.h>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_drv.h>
> #include <drm/drm_atomic.h>
> @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> struct drm_gem_object *obj;
> struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
> int ret;
> + struct dma_buf_map user_map;
> + struct dma_buf_map cursor_map;
> void *user_ptr;
> int size = 64*64*4;
>
> @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> user_bo = gem_to_qxl_bo(obj);
>
> /* pinning is done in the prepare/cleanup framevbuffer */
> - ret = qxl_bo_kmap(user_bo, &user_ptr);
> + ret = qxl_bo_kmap(user_bo, &user_map);
> if (ret)
> goto out_free_release;
> + user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_alloc_bo_reserved(qdev, release,
> sizeof(struct qxl_cursor) + size,
> @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> if (ret)
> goto out_unpin;
>
> - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
> + ret = qxl_bo_kmap(cursor_bo, &cursor_map);
> if (ret)
> goto out_backoff;
>
> @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> {
> int ret;
> struct drm_gem_object *gobj;
> + struct dma_buf_map map;
> int monitors_config_size = sizeof(struct qxl_monitors_config) +
> qxl_num_crtc * sizeof(struct qxl_head);
>
> @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> if (ret)
> return ret;
>
> - qxl_bo_kmap(qdev->monitors_config_bo, NULL);
> + qxl_bo_kmap(qdev->monitors_config_bo, &map);
>
> qdev->monitors_config = qdev->monitors_config_bo->kptr;
> qdev->ram_header->monitors_config =
> diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
> index 3599db096973..7b7acb910780 100644
> --- a/drivers/gpu/drm/qxl/qxl_draw.c
> +++ b/drivers/gpu/drm/qxl/qxl_draw.c
> @@ -20,6 +20,8 @@
> * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> */
>
> +#include <linux/dma-buf-map.h>
> +
> #include <drm/drm_fourcc.h>
>
> #include "qxl_drv.h"
> @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
> unsigned int num_clips,
> struct qxl_bo *clips_bo)
> {
> + struct dma_buf_map map;
> struct qxl_clip_rects *dev_clips;
> int ret;
>
> - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
> - if (ret) {
> + ret = qxl_bo_kmap(clips_bo, &map);
> + if (ret)
> return NULL;
> - }
> + dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
> +
> dev_clips->num_rects = num_clips;
> dev_clips->chunk.next_chunk = 0;
> dev_clips->chunk.prev_chunk = 0;
> @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> int stride = fb->pitches[0];
> /* depth is not actually interesting, we don't mask with it */
> int depth = fb->format->cpp[0] * 8;
> + struct dma_buf_map surface_map;
> uint8_t *surface_base;
> struct qxl_release *release;
> struct qxl_bo *clips_bo;
> @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> if (ret)
> goto out_release_backoff;
>
> - ret = qxl_bo_kmap(bo, (void **)&surface_base);
> + ret = qxl_bo_kmap(bo, &surface_map);
> if (ret)
> goto out_release_backoff;
> + surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_image_init(qdev, release, dimage, surface_base,
> left - dumb_shadow_offset,
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 3602e8b34189..eb437fea5d9e 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -30,6 +30,7 @@
> * Definitions taken from spice-protocol, plus kernel driver specific bits.
> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/dma-fence.h>
> #include <linux/firmware.h>
> #include <linux/platform_device.h>
> @@ -50,6 +51,8 @@
>
> #include "qxl_dev.h"
>
> +struct dma_buf_map;
> +
> #define DRIVER_AUTHOR "Dave Airlie"
>
> #define DRIVER_NAME "qxl"
> @@ -79,7 +82,7 @@ struct qxl_bo {
> /* Protected by tbo.reserved */
> struct ttm_place placements[3];
> struct ttm_placement placement;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
> void *kptr;
> unsigned int map_count;
> int type;
> @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
> void qxl_gem_object_close(struct drm_gem_object *obj,
> struct drm_file *file_priv);
> void qxl_bo_force_delete(struct qxl_device *qdev);
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
>
> /* qxl_dumb.c */
> int qxl_mode_dumb_create(struct drm_file *file_priv,
> @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
> struct drm_gem_object *qxl_gem_prime_import_sg_table(
> struct drm_device *dev, struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map);
> int qxl_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
> index 940e99354f49..755df4d8f95f 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.c
> +++ b/drivers/gpu/drm/qxl/qxl_object.c
> @@ -23,10 +23,12 @@
> * Alon Levy
> */
>
> +#include <linux/dma-buf-map.h>
> +#include <linux/io-mapping.h>
> +
> #include "qxl_drv.h"
> #include "qxl_object.h"
>
> -#include <linux/io-mapping.h>
> static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
> {
> struct qxl_bo *bo;
> @@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
> return 0;
> }
>
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
> +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
> {
> - bool is_iomem;
> int r;
>
> if (bo->kptr) {
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count++;
> - return 0;
> + goto out;
> }
> - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
> + r = ttm_bo_vmap(&bo->tbo, &bo->map);
> if (r)
> return r;
> - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count = 1;
> +
> + /* TODO: Remove kptr in favor of map everywhere. */
> + if (bo->map.is_iomem)
> + bo->kptr = (void *)bo->map.vaddr_iomem;
> + else
> + bo->kptr = bo->map.vaddr;
> +
> +out:
> + *map = bo->map;
> return 0;
> }
>
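The qxl_bo_kmap() rework above keeps the driver's existing map_count refcounting: the first caller pays for ttm_bo_vmap(), later callers just bump the count and receive the cached map. A compact userspace sketch of that scheme, with ttm_bo_vmap() and the bo type mocked for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mocked stand-in for the kernel's struct dma_buf_map. */
struct dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

struct mock_bo {
	struct dma_buf_map map; /* cached mapping, valid while map_count > 0 */
	void *kptr;             /* cached plain address, as qxl still keeps */
	unsigned int map_count;
};

static int vmap_calls;

/* Stand-in for ttm_bo_vmap(): pretend the mapping always succeeds. */
static int mock_ttm_bo_vmap(struct mock_bo *bo)
{
	static char backing[64];

	vmap_calls++;
	bo->map.vaddr = backing;
	bo->map.is_iomem = false;
	return 0;
}

/* Mirrors the qxl_bo_kmap() flow above: map once, then refcount. */
static int mock_bo_kmap(struct mock_bo *bo, struct dma_buf_map *map)
{
	int r;

	if (bo->kptr) {          /* already mapped: bump refcount, reuse */
		bo->map_count++;
		goto out;
	}
	r = mock_ttm_bo_vmap(bo);
	if (r)
		return r;
	bo->map_count = 1;
	bo->kptr = bo->map.vaddr;
out:
	*map = bo->map;          /* hand the caller the cached mapping */
	return 0;
}
```

The matching kunmap (not sketched) decrements map_count and only calls ttm_bo_vunmap() when it reaches zero, as the qxl_bo_kunmap() hunk below shows.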
> @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> void *rptr;
> int ret;
> struct io_mapping *map;
> + struct dma_buf_map bo_map;
>
> if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
> map = qdev->vram_mapping;
> @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> return rptr;
> }
>
> - ret = qxl_bo_kmap(bo, &rptr);
> + ret = qxl_bo_kmap(bo, &bo_map);
> if (ret)
> return NULL;
> + rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> rptr += page_offset * PAGE_SIZE;
> return rptr;
> @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
> if (bo->map_count > 0)
> return;
> bo->kptr = NULL;
> - ttm_bo_kunmap(&bo->kmap);
> + ttm_bo_vunmap(&bo->tbo, &bo->map);
> }
>
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
> diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
> index 09a5c818324d..ebf24c9d2bf2 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.h
> +++ b/drivers/gpu/drm/qxl/qxl_object.h
> @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
> bool kernel, bool pinned, u32 domain,
> struct qxl_surface *surf,
> struct qxl_bo **bo_ptr);
> -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
> +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
> extern void qxl_bo_kunmap(struct qxl_bo *bo);
> void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
> diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
> index 7d3816fca5a8..4aa949799446 100644
> --- a/drivers/gpu/drm/qxl/qxl_prime.c
> +++ b/drivers/gpu/drm/qxl/qxl_prime.c
> @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
> return ERR_PTR(-ENOSYS);
> }
>
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
> - void *ptr;
> int ret;
>
> - ret = qxl_bo_kmap(bo, &ptr);
> + ret = qxl_bo_kmap(bo, map);
> if (ret < 0)
> - return ERR_PTR(ret);
> + return ret;
>
> - return ptr;
> + return 0;
> }
>
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
>
> diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
> index 5d54bccebd4d..44cb5ee6fc20 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -509,7 +509,6 @@ struct radeon_bo {
> /* Constant after initialization */
> struct radeon_device *rdev;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> pid_t pid;
>
> #ifdef CONFIG_MMU_NOTIFIER
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 0ccd7213e41f..d2876ce3bc9e 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -31,6 +31,7 @@
> #include <drm/drm_debugfs.h>
> #include <drm/drm_device.h>
> #include <drm/drm_file.h>
> +#include <drm/drm_gem_ttm_helper.h>
> #include <drm/radeon_drm.h>
>
> #include "radeon.h"
> @@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
> struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
> int radeon_gem_prime_pin(struct drm_gem_object *obj);
> void radeon_gem_prime_unpin(struct drm_gem_object *obj);
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>
> static const struct drm_gem_object_funcs radeon_gem_object_funcs;
>
> @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
> .pin = radeon_gem_prime_pin,
> .unpin = radeon_gem_prime_unpin,
> .get_sg_table = radeon_gem_prime_get_sg_table,
> - .vmap = radeon_gem_prime_vmap,
> - .vunmap = radeon_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
> index b9de0e51c0be..088d39a51c0d 100644
> --- a/drivers/gpu/drm/radeon/radeon_prime.c
> +++ b/drivers/gpu/drm/radeon/radeon_prime.c
> @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
> }
>
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 7d5ebb10323b..7971f57436dd 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
> return ERR_PTR(ret);
> }
>
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> - if (rk_obj->pages)
> - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> - pgprot_writecombine(PAGE_KERNEL));
> + if (rk_obj->pages) {
> + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> + pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> + return 0;
> + }
>
> if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
> - return NULL;
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
>
> - return rk_obj->kvaddr;
> + return 0;
> }
>
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> if (rk_obj->pages) {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> return;
> }
>
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> index 7ffc541bea07..5a70a56cd406 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *
> rockchip_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg);
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /* drm driver mmap file operations */
> int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> index 744a8e337e41..c02e35ed6e76 100644
> --- a/drivers/gpu/drm/tiny/cirrus.c
> +++ b/drivers/gpu/drm/tiny/cirrus.c
> @@ -17,6 +17,7 @@
> */
>
> #include <linux/console.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
> #include <linux/pci.h>
>
> @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> struct drm_rect *rect)
> {
> struct cirrus_device *cirrus = to_cirrus(fb->dev);
> + struct dma_buf_map map;
> void *vmap;
> int idx, ret;
>
> @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> if (!drm_dev_enter(&cirrus->dev, &idx))
> goto out;
>
> - ret = -ENOMEM;
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (!vmap)
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret)
> goto out_dev_exit;
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (cirrus->cpp == fb->format->cpp[0])
> drm_fb_memcpy_dstclip(cirrus->vram,
> @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> else
> WARN_ON_ONCE("cpp mismatch");
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> ret = 0;
>
> out_dev_exit:
> diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
> index cc397671f689..12a890cea6e9 100644
> --- a/drivers/gpu/drm/tiny/gm12u320.c
> +++ b/drivers/gpu/drm/tiny/gm12u320.c
> @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> {
> int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
> struct drm_framebuffer *fb;
> + struct dma_buf_map map;
> void *vaddr;
> u8 *src;
>
> @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> y1 = gm12u320->fb_update.rect.y1;
> y2 = gm12u320->fb_update.rect.y2;
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> + GM12U320_ERR("failed to vmap fb: %d\n", ret);
> goto put_fb;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (fb->obj[0]->import_attach) {
> ret = dma_buf_begin_cpu_access(
> @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
> }
> vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> put_fb:
> drm_framebuffer_put(fb);
> gm12u320->fb_update.fb = NULL;
> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> index fef43f4e3bac..42eeba1dfdbf 100644
> --- a/drivers/gpu/drm/udl/udl_modeset.c
> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> struct urb *urb;
> struct drm_rect clip;
> int log_bpp;
> + struct dma_buf_map map;
> void *vaddr;
>
> ret = udl_log_cpp(fb->format->cpp[0]);
> @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> return ret;
> }
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> DRM_ERROR("failed to vmap fb\n");
> goto out_dma_buf_end_cpu_access;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> urb = udl_get_urb(dev);
> if (!urb)
> @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> ret = 0;
>
> out_drm_gem_shmem_vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> out_dma_buf_end_cpu_access:
> if (import_attach) {
> tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index 931c55126148..f268fb258c83 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -9,6 +9,8 @@
> * Michael Thayer <michael.thayer@oracle.com,
> * Hans de Goede <hdegoede@redhat.com>
> */
> +
> +#include <linux/dma-buf-map.h>
> #include <linux/export.h>
>
> #include <drm/drm_atomic.h>
> @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> u32 height = plane->state->crtc_h;
> size_t data_size, mask_size;
> u32 flags;
> + struct dma_buf_map map;
> + int ret;
> u8 *src;
>
> /*
> @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>
> vbox_crtc->cursor_enabled = true;
>
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> /*
> * BUG: we should have pinned the BO in prepare_fb().
> */
> @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> DRM_WARN("Could not map cursor bo, skipping update\n");
> return;
> }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> /*
> * The mask must be calculated based on the alpha
> @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> data_size = width * height * 4 + mask_size;
>
> copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
>
> flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
> VBOX_MOUSE_POINTER_ALPHA;
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index 557f0d1e6437..f290a9a942dc 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> return drm_gem_cma_prime_mmap(obj, vma);
> }
>
> -void *vc4_prime_vmap(struct drm_gem_object *obj)
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct vc4_bo *bo = to_vc4_bo(obj);
>
> if (bo->validated_shader) {
> DRM_DEBUG("mmaping of shader BOs not allowed.\n");
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
> }
>
> - return drm_gem_cma_prime_vmap(obj);
> + return drm_gem_cma_prime_vmap(obj, map);
> }
>
> struct drm_gem_object *
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index cc79b1aaa878..904f2c36c963 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *vc4_prime_vmap(struct drm_gem_object *obj);
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int vc4_bo_cache_init(struct drm_device *dev);
> void vc4_bo_cache_destroy(struct drm_device *dev);
> int vc4_bo_inc_usecnt(struct vc4_bo *bo);
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index fa54a6d1403d..b2aa26e1e4a2 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> return &obj->base;
> }
>
> -static void *vgem_prime_vmap(struct drm_gem_object *obj)
> +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> long n_pages = obj->size >> PAGE_SHIFT;
> struct page **pages;
> + void *vaddr;
>
> pages = vgem_pin_pages(bo);
> if (IS_ERR(pages))
> - return NULL;
> + return PTR_ERR(pages);
> +
> + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
>
> - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + return 0;
> }
>
> -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> vgem_unpin_pages(bo);
> }
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4f34ef34ba60..74db5a840bed 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> return gem_mmap_obj(xen_obj, vma);
> }
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
> {
> struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> + void *vaddr;
>
> if (!xen_obj->pages)
> - return NULL;
> + return -ENOMEM;
>
> /* Please see comment in gem_mmap_obj on mapping and attributes. */
> - return vmap(xen_obj->pages, xen_obj->num_pages,
> - VM_MAP, PAGE_KERNEL);
> + vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
> + VM_MAP, PAGE_KERNEL);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr)
> + struct dma_buf_map *map)
> {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> }
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> index a39675fa31b2..a4e67d0a149c 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> @@ -12,6 +12,7 @@
> #define __XEN_DRM_FRONT_GEM_H
>
> struct dma_buf_attachment;
> +struct dma_buf_map;
> struct drm_device;
> struct drm_gem_object;
> struct file;
> @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
>
> int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
> + struct dma_buf_map *map);
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr);
> + struct dma_buf_map *map);
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> struct vm_area_struct *vma);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index c38dd35da00b..5e6daa1c982f 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -39,6 +39,7 @@
>
> #include <drm/drm_vma_manager.h>
>
> +struct dma_buf_map;
> struct drm_gem_object;
>
> /**
> @@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void *(*vmap)(struct drm_gem_object *obj);
> + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @vunmap:
> @@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
> + void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @mmap:
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index a064b0d1c480..caf98b9cf4b4 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
> struct sg_table *sgt);
> int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> struct drm_gem_object *
> drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5381f0c8cf6f..3449a0353fe0 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_object *obj);
> void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
>
> diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
> index 128f88174d32..c0d28ba0f5c9 100644
> --- a/include/drm/drm_gem_vram_helper.h
> +++ b/include/drm/drm_gem_vram_helper.h
> @@ -10,6 +10,7 @@
> #include <drm/ttm/ttm_bo_api.h>
> #include <drm/ttm/ttm_bo_driver.h>
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kernel.h> /* for container_of() */
>
> struct drm_mode_create_dumb;
> @@ -29,9 +30,8 @@ struct vm_area_struct;
>
> /**
> * struct drm_gem_vram_object - GEM object backed by VRAM
> - * @gem: GEM object
> * @bo: TTM buffer object
> - * @kmap: Mapping information for @bo
> + * @map: Mapping information for @bo
> * @placement: TTM placement information. Supported placements are \
> %TTM_PL_VRAM and %TTM_PL_SYSTEM
> * @placements: TTM placement information.
> @@ -50,15 +50,15 @@ struct vm_area_struct;
> */
> struct drm_gem_vram_object {
> struct ttm_buffer_object bo;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
>
> /**
> - * @kmap_use_count:
> + * @vmap_use_count:
> *
> * Reference count on the virtual address.
> * The address are un-mapped when the count reaches zero.
> */
> - unsigned int kmap_use_count;
> + unsigned int vmap_use_count;
>
> /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
> struct ttm_placement placement;
> @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
> s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
> int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
> int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
>
> int drm_gem_vram_fill_create_dumb(struct drm_file *file,
> struct drm_device *dev,
* Re: [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
@ 2020-10-15 14:21 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:21 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
On 15.10.20 at 14:38, Thomas Zimmermann wrote:
> This patch replaces the vmap/vunmap's use of raw pointers in GEM object
> functions with instances of struct dma_buf_map. GEM backends are
> converted as well. For most of them, this simply changes the returned type.
>
> TTM-based drivers now return information about the location of the memory,
> either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
> et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
> implementing their own vmap callbacks.
>
> v4:
> * use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
> * fix a trailing { in drm_gem_vmap()
> * remove several empty functions instead of converting them (Daniel)
> * comment uses of raw pointers with a TODO (Daniel)
> * TODO list: convert more helpers to use struct dma_buf_map
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
The amdgpu changes look good to me, but I can't fully judge the other stuff.
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> Documentation/gpu/todo.rst | 18 ++++
> drivers/gpu/drm/Kconfig | 2 +
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
> drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
> drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
> drivers/gpu/drm/ast/ast_drv.h | 7 +-
> drivers/gpu/drm/drm_gem.c | 23 +++--
> drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
> drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
> drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
> drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
> drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
> drivers/gpu/drm/lima/lima_gem.c | 6 +-
> drivers/gpu/drm/lima/lima_sched.c | 11 +-
> drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
> drivers/gpu/drm/nouveau/Kconfig | 1 +
> drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
> drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
> drivers/gpu/drm/qxl/qxl_display.c | 11 +-
> drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
> drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
> drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
> drivers/gpu/drm/qxl/qxl_object.h | 2 +-
> drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
> drivers/gpu/drm/radeon/radeon.h | 1 -
> drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
> drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
> drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
> drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
> drivers/gpu/drm/tiny/cirrus.c | 10 +-
> drivers/gpu/drm/tiny/gm12u320.c | 10 +-
> drivers/gpu/drm/udl/udl_modeset.c | 8 +-
> drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
> drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
> drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
> drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
> drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
> drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
> include/drm/drm_gem.h | 5 +-
> include/drm/drm_gem_cma_helper.h | 2 +-
> include/drm/drm_gem_shmem_helper.h | 4 +-
> include/drm/drm_gem_vram_helper.h | 14 +--
> 47 files changed, 321 insertions(+), 295 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 700637e25ecd..7e6fc3c04add 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
>
> Level: Intermediate
>
> +Use struct dma_buf_map throughout codebase
> +------------------------------------------
> +
> +Pointers to shared device memory are stored in struct dma_buf_map. Each
> +instance knows whether it refers to system or I/O memory. Most of the DRM-wide
> +interfaces have been converted to use struct dma_buf_map, but implementations
> +often still use raw pointers.
> +
> +The task is to use struct dma_buf_map where it makes sense.
> +
> +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
> +* TTM might benefit from using struct dma_buf_map internally.
> +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
> +
> +Level: Intermediate
> +
>
> Core refactorings
> =================
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 147d61b9674e..319839b87d37 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -239,6 +239,7 @@ config DRM_RADEON
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> @@ -259,6 +260,7 @@ config DRM_AMDGPU
> select DRM_KMS_HELPER
> select DRM_SCHED
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 5b465ab774d1..e5919efca870 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -41,42 +41,6 @@
> #include <linux/dma-fence-array.h>
> #include <linux/pci-p2pdma.h>
>
> -/**
> - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
> - * @obj: GEM BO
> - *
> - * Sets up an in-kernel virtual mapping of the BO's memory.
> - *
> - * Returns:
> - * The virtual address of the mapping or an error pointer.
> - */
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -/**
> - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
> - * @obj: GEM BO
> - * @vaddr: Virtual address (unused)
> - *
> - * Tears down the in-kernel virtual mapping of the BO's memory.
> - */
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> /**
> * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
> * @obj: GEM BO
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> index 2c5c84a06bb9..39b5b9616fd8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
> struct dma_buf *dma_buf);
> bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
> struct amdgpu_bo *bo);
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index be08a63ef58c..576659827e74 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -33,6 +33,7 @@
>
> #include <drm/amdgpu_drm.h>
> #include <drm/drm_debugfs.h>
> +#include <drm/drm_gem_ttm_helper.h>
>
> #include "amdgpu.h"
> #include "amdgpu_display.h"
> @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
> .open = amdgpu_gem_object_open,
> .close = amdgpu_gem_object_close,
> .export = amdgpu_gem_prime_export,
> - .vmap = amdgpu_gem_prime_vmap,
> - .vunmap = amdgpu_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> index 132e5f955180..01296ef0d673 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> @@ -100,7 +100,6 @@ struct amdgpu_bo {
> struct amdgpu_bo *parent;
> struct amdgpu_bo *shadow;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> struct amdgpu_mn *mn;
>
>
> diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
> index e0f4613918ad..742d43a7edf4 100644
> --- a/drivers/gpu/drm/ast/ast_cursor.c
> +++ b/drivers/gpu/drm/ast/ast_cursor.c
> @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
>
> for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
> struct drm_device *dev = &ast->base;
> size_t size, i;
> struct drm_gem_vram_object *gbo;
> - void __iomem *vaddr;
> + struct dma_buf_map map;
> int ret;
>
> size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
> @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
> - vaddr = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(vaddr)) {
> - ret = PTR_ERR(vaddr);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
>
> ast->cursor.gbo[i] = gbo;
> - ast->cursor.vaddr[i] = vaddr;
> + ast->cursor.map[i] = map;
> }
>
> return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
> @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
> while (i) {
> --i;
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> {
> struct drm_device *dev = &ast->base;
> struct drm_gem_vram_object *gbo;
> + struct dma_buf_map map;
> int ret;
> void *src;
> void __iomem *dst;
> @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> ret = drm_gem_vram_pin(gbo, 0);
> if (ret)
> return ret;
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> - ret = PTR_ERR(src);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret)
> goto err_drm_gem_vram_unpin;
> - }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
>
> /* do data transfer to cursor BO */
> update_cursor_image(dst, src, fb->width, fb->height);
>
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
> drm_gem_vram_unpin(gbo);
>
> return 0;
> @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
> u8 __iomem *sig;
> u8 jreg;
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr;
>
> sig = dst + AST_HWC_SIZE;
> writel(x, sig + AST_HWC_SIGNATURE_X);
> diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
> index 467049ca8430..f963141dd851 100644
> --- a/drivers/gpu/drm/ast/ast_drv.h
> +++ b/drivers/gpu/drm/ast/ast_drv.h
> @@ -28,10 +28,11 @@
> #ifndef __AST_DRV_H__
> #define __AST_DRV_H__
>
> -#include <linux/types.h>
> -#include <linux/io.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/i2c.h>
> #include <linux/i2c-algo-bit.h>
> +#include <linux/io.h>
> +#include <linux/types.h>
>
> #include <drm/drm_connector.h>
> #include <drm/drm_crtc.h>
> @@ -131,7 +132,7 @@ struct ast_private {
>
> struct {
> struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
> - void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
> + struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
> unsigned int next_index;
> } cursor;
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 1da67d34e55d..a89ad4570e3c 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -36,6 +36,7 @@
> #include <linux/pagemap.h>
> #include <linux/shmem_fs.h>
> #include <linux/dma-buf.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/mem_encrypt.h>
> #include <linux/pagevec.h>
>
> @@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>
> void *drm_gem_vmap(struct drm_gem_object *obj)
> {
> - void *vaddr;
> + struct dma_buf_map map;
> + int ret;
>
> - if (obj->funcs->vmap)
> - vaddr = obj->funcs->vmap(obj);
> - else
> - vaddr = ERR_PTR(-EOPNOTSUPP);
> + if (!obj->funcs->vmap)
> + return ERR_PTR(-EOPNOTSUPP);
>
> - if (!vaddr)
> - vaddr = ERR_PTR(-ENOMEM);
> + ret = obj->funcs->vmap(obj, &map);
> + if (ret)
> + return ERR_PTR(ret);
> + else if (dma_buf_map_is_null(&map))
> + return ERR_PTR(-ENOMEM);
>
> - return vaddr;
> + return map.vaddr;
> }
>
> void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
> {
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> +
> if (!vaddr)
> return;
>
> if (obj->funcs->vunmap)
> - obj->funcs->vunmap(obj, vaddr);
> + obj->funcs->vunmap(obj, &map);
> }
>
> /**
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index d527485ea0b7..b57e3e9222f0 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
> * address space
> * @obj: GEM object
> + * @map: Returns the kernel virtual address of the CMA GEM object's backing
> + * store.
> *
> * This function maps a buffer exported via DRM PRIME into the kernel's
> * virtual address space. Since the CMA buffers are already mapped into the
> @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * driver's &drm_gem_object_funcs.vmap callback.
> *
> * Returns:
> - * The kernel virtual address of the CMA GEM object's backing store.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
>
> - return cma_obj->vaddr;
> + dma_buf_map_set_vaddr(map, cma_obj->vaddr);
> +
> + return 0;
> }
> EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index fb11df7aced5..5553f58f68f3 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin);
>
> -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map;
> int ret = 0;
>
> - if (shmem->vmap_use_count++ > 0)
> - return shmem->vaddr;
> + if (shmem->vmap_use_count++ > 0) {
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> + return 0;
> + }
>
> if (obj->import_attach) {
> - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> - if (!ret)
> - shmem->vaddr = map.vaddr;
> + ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> + if (!ret) {
> + if (WARN_ON(map->is_iomem)) {
> + ret = -EIO;
> + goto err_put_pages;
> + }
> + shmem->vaddr = map->vaddr;
> + }
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> VM_MAP, prot);
> if (!shmem->vaddr)
> ret = -ENOMEM;
> + else
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> }
>
> if (ret) {
> @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> goto err_put_pages;
> }
>
> - return shmem->vaddr;
> + return 0;
>
> err_put_pages:
> if (!obj->import_attach)
> @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> err_zero_use:
> shmem->vmap_use_count = 0;
>
> - return ERR_PTR(ret);
> + return ret;
> }
>
> /*
> * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + * store.
> *
> * This function makes sure that a contiguous kernel virtual address mapping
> * exists for the buffer backing the shmem GEM object.
> @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> * Returns:
> * 0 on success or a negative error code on failure.
> */
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> - void *vaddr;
> int ret;
>
> ret = mutex_lock_interruptible(&shmem->vmap_lock);
> if (ret)
> - return ERR_PTR(ret);
> - vaddr = drm_gem_shmem_vmap_locked(shmem);
> + return ret;
> + ret = drm_gem_shmem_vmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
>
> - return vaddr;
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_shmem_vmap);
>
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> + struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
>
> if (WARN_ON_ONCE(!shmem->vmap_use_count))
> return;
> @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> return;
>
> if (obj->import_attach)
> - dma_buf_vunmap(obj->import_attach->dmabuf, &map);
> + dma_buf_vunmap(obj->import_attach->dmabuf, map);
> else
> vunmap(shmem->vaddr);
>
> @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> /*
> * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
> *
> * This function cleans up a kernel virtual address mapping acquired by
> * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> * also be called by drivers directly, in which case it will hide the
> * differences between dma-buf imported and natively allocated objects.
> */
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> mutex_lock(&shmem->vmap_lock);
> - drm_gem_shmem_vunmap_locked(shmem);
> + drm_gem_shmem_vunmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
> }
> EXPORT_SYMBOL(drm_gem_shmem_vunmap);
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 2d5ed30518f1..4d8553b28558 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -1,5 +1,6 @@
> // SPDX-License-Identifier: GPL-2.0-or-later
>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
>
> #include <drm/drm_debugfs.h>
> @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
> * up; only release the GEM object.
> */
>
> - WARN_ON(gbo->kmap_use_count);
> - WARN_ON(gbo->kmap.virtual);
> + WARN_ON(gbo->vmap_use_count);
> + WARN_ON(dma_buf_map_is_set(&gbo->map));
>
> drm_gem_object_release(&gbo->bo.base);
> }
> @@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
> }
> EXPORT_SYMBOL(drm_gem_vram_unpin);
>
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
> +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> int ret;
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> - bool is_iomem;
>
> - if (gbo->kmap_use_count > 0)
> + if (gbo->vmap_use_count > 0)
> goto out;
>
> - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
> + ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> out:
> - ++gbo->kmap_use_count;
> - return ttm_kmap_obj_virtual(kmap, &is_iomem);
> + ++gbo->vmap_use_count;
> + *map = gbo->map;
> +
> + return 0;
> }
>
> -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> - if (WARN_ON_ONCE(!gbo->kmap_use_count))
> + struct drm_device *dev = gbo->bo.base.dev;
> +
> + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
> return;
> - if (--gbo->kmap_use_count > 0)
> +
> + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
> + return; /* BUG: map not mapped from this BO */
> +
> + if (--gbo->vmap_use_count > 0)
> return;
>
> /*
> @@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> /**
> * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
> * space
> - * @gbo: The GEM VRAM object to map
> + * @gbo: The GEM VRAM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * The vmap function pins a GEM VRAM object to its current location, either
> * system or video memory, and maps its buffer into kernel address space.
> @@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> * unmap and unpin the GEM VRAM object.
> *
> * Returns:
> - * The buffer's virtual address on success, or
> - * an ERR_PTR()-encoded error code otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
> - void *base;
>
> ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> ret = drm_gem_vram_pin_locked(gbo, 0);
> if (ret)
> goto err_ttm_bo_unreserve;
> - base = drm_gem_vram_kmap_locked(gbo);
> - if (IS_ERR(base)) {
> - ret = PTR_ERR(base);
> + ret = drm_gem_vram_kmap_locked(gbo, map);
> + if (ret)
> goto err_drm_gem_vram_unpin_locked;
> - }
>
> ttm_bo_unreserve(&gbo->bo);
>
> - return base;
> + return 0;
>
> err_drm_gem_vram_unpin_locked:
> drm_gem_vram_unpin_locked(gbo);
> err_ttm_bo_unreserve:
> ttm_bo_unreserve(&gbo->bo);
> - return ERR_PTR(ret);
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_vram_vmap);
>
> /**
> * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
> - * @gbo: The GEM VRAM object to unmap
> - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
> + * @gbo: The GEM VRAM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> *
> * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
> * the documentation for drm_gem_vram_vmap() for more information.
> */
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
>
> @@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
> return;
>
> - drm_gem_vram_kunmap_locked(gbo);
> + drm_gem_vram_kunmap_locked(gbo, map);
> drm_gem_vram_unpin_locked(gbo);
>
> ttm_bo_unreserve(&gbo->bo);
> @@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
> bool evict,
> struct ttm_resource *new_mem)
> {
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> + struct ttm_buffer_object *bo = &gbo->bo;
> + struct drm_device *dev = bo->base.dev;
>
> - if (WARN_ON_ONCE(gbo->kmap_use_count))
> + if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
> return;
>
> - if (!kmap->virtual)
> - return;
> - ttm_bo_kunmap(kmap);
> - kmap->virtual = NULL;
> + ttm_bo_vunmap(bo, &gbo->map);
> }
>
> static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
> @@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
> }
>
> /**
> - * drm_gem_vram_object_vmap() - \
> - Implements &struct drm_gem_object_funcs.vmap
> - * @gem: The GEM object to map
> + * drm_gem_vram_object_vmap() -
> + * Implements &struct drm_gem_object_funcs.vmap
> + * @gem: The GEM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * Returns:
> - * The buffers virtual address on success, or
> - * NULL otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
> +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> - void *base;
>
> - base = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(base))
> - return NULL;
> - return base;
> + return drm_gem_vram_vmap(gbo, map);
> }
>
> /**
> - * drm_gem_vram_object_vunmap() - \
> - Implements &struct drm_gem_object_funcs.vunmap
> - * @gem: The GEM object to unmap
> - * @vaddr: The mapping's base address
> + * drm_gem_vram_object_vunmap() -
> + * Implements &struct drm_gem_object_funcs.vunmap
> + * @gem: The GEM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> */
> -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
> - void *vaddr)
> +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
>
> - drm_gem_vram_vunmap(gbo, vaddr);
> + drm_gem_vram_vunmap(gbo, map);
> }
>
> /*
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 9682c26d89bb..f5be627e1de0 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
> int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
> struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index a6d9932a32ae..bc2543dd987d 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
> }
>
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> - return etnaviv_gem_vmap(obj);
> + void *vaddr = etnaviv_gem_vmap(obj);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
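[Inline note, not part of the patch: the etnaviv conversion above shows the new calling convention in its simplest form — the vmap callback now returns an errno code and fills in a caller-provided struct dma_buf_map instead of returning a raw pointer. The following is a hypothetical user-space sketch of that convention; the struct layout and `dma_buf_map_set_vaddr()` mirror `<linux/dma-buf-map.h>`, but `example_vmap()` is an illustration, not kernel code.]

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical user-space stand-in for <linux/dma-buf-map.h>; the field
 * and helper names mirror the kernel's struct dma_buf_map. */
struct dma_buf_map {
	union {
		void *vaddr;		/* system memory */
		void *vaddr_iomem;	/* I/O memory (__iomem in the kernel) */
	};
	bool is_iomem;
};

static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* New-style vmap: return an errno code and fill in *map, instead of
 * returning a pointer that callers must IS_ERR()-check. */
static int example_vmap(struct dma_buf_map *map, void *backing)
{
	if (!backing)
		return -ENOMEM;
	dma_buf_map_set_vaddr(map, backing);
	return 0;
}
```

This is why drivers that previously returned NULL or ERR_PTR() values, like etnaviv here, now translate those cases into plain negative errno codes.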
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 11223fe348df..832e5280a6ed 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
> return drm_gem_shmem_pin(obj);
> }
>
> -static void *lima_gem_vmap(struct drm_gem_object *obj)
> +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct lima_bo *bo = to_lima_bo(obj);
>
> if (bo->heap_size)
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
>
> - return drm_gem_shmem_vmap(obj);
> + return drm_gem_shmem_vmap(obj, map);
> }
>
> static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index dc6df9e9a40d..a070a85f8f36 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -1,6 +1,7 @@
> // SPDX-License-Identifier: GPL-2.0 OR MIT
> /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kthread.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> struct lima_dump_chunk_buffer *buffer_chunk;
> u32 size, task_size, mem_size;
> int i;
> + struct dma_buf_map map;
> + int ret;
>
> mutex_lock(&dev->error_task_list_lock);
>
> @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> } else {
> buffer_chunk->size = lima_bo_size(bo);
>
> - data = drm_gem_shmem_vmap(&bo->base.base);
> - if (IS_ERR_OR_NULL(data)) {
> + ret = drm_gem_shmem_vmap(&bo->base.base, &map);
> + if (ret) {
> kvfree(et);
> goto out;
> }
>
> - memcpy(buffer_chunk + 1, data, buffer_chunk->size);
> + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
>
> - drm_gem_shmem_vunmap(&bo->base.base, data);
> + drm_gem_shmem_vunmap(&bo->base.base, &map);
> }
>
> buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 38672f9e5c4f..8ef76769b97f 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -9,6 +9,7 @@
> */
>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_atomic_helper.h>
> #include <drm/drm_atomic_state_helper.h>
> @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
> struct drm_rect *clip)
> {
> struct drm_device *dev = &mdev->base;
> + struct dma_buf_map map;
> void *vmap;
> + int ret;
>
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (drm_WARN_ON(dev, !vmap))
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (drm_WARN_ON(dev, ret))
> return; /* BUG: SHMEM BO should always be vmapped */
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
>
> /* Always scanout image at VRAM offset 0 */
> mgag200_set_startadd(mdev, (u32)0);
> diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
> index 5dec1e5694b7..9436310d0854 100644
> --- a/drivers/gpu/drm/nouveau/Kconfig
> +++ b/drivers/gpu/drm/nouveau/Kconfig
> @@ -6,6 +6,7 @@ config DRM_NOUVEAU
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
> select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
> select X86_PLATFORM_DEVICES if ACPI && X86
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
> index 641ef6298a0e..6045b85a762a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
> @@ -39,8 +39,6 @@ struct nouveau_bo {
> unsigned mode;
>
> struct nouveau_drm_tile *tile;
> -
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> };
>
> static inline struct nouveau_bo *
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index 9a421c3949de..f942b526b0a5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -24,6 +24,8 @@
> *
> */
>
> +#include <drm/drm_gem_ttm_helper.h>
> +
> #include "nouveau_drv.h"
> #include "nouveau_dma.h"
> #include "nouveau_fence.h"
> @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
> .pin = nouveau_gem_prime_pin,
> .unpin = nouveau_gem_prime_unpin,
> .get_sg_table = nouveau_gem_prime_get_sg_table,
> - .vmap = nouveau_gem_prime_vmap,
> - .vunmap = nouveau_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> int
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
> index b35c180322e2..3b919c7c931c 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
> @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
> extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
> extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
> struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
> -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
> -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
>
> #endif
> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
> index a8264aebf3d4..2f16b5249283 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
> }
>
> -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
> - &nvbo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return nvbo->dma_buf_vmap.virtual;
> -}
> -
> -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> -
> - ttm_bo_kunmap(&nvbo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> index fdbc8d949135..5ab03d605f57 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> @@ -5,6 +5,7 @@
> #include <drm/drm_gem_shmem_helper.h>
> #include <drm/panfrost_drm.h>
> #include <linux/completion.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/iopoll.h>
> #include <linux/pm_runtime.h>
> #include <linux/slab.h>
> @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map;
> struct drm_gem_shmem_object *bo;
> u32 cfg, as;
> int ret;
> @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> goto err_close_bo;
> }
>
> - perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
> - if (IS_ERR(perfcnt->buf)) {
> - ret = PTR_ERR(perfcnt->buf);
> + ret = drm_gem_shmem_vmap(&bo->base, &map);
> + if (ret)
> goto err_put_mapping;
> - }
> + perfcnt->buf = map.vaddr;
>
> /*
> * Invalidate the cache and clear the counters to start from a fresh
> @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> return 0;
>
> err_vunmap:
> - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&bo->base, &map);
> err_put_mapping:
> panfrost_gem_mapping_put(perfcnt->mapping);
> err_close_bo:
> @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
>
> if (user != perfcnt->user)
> return -EINVAL;
> @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
>
> perfcnt->user = NULL;
> - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
> perfcnt->buf = NULL;
> panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
> panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
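[Inline note, not part of the patch: the panfrost disable path above rebuilds a mapping from the stashed `perfcnt->buf` pointer with DMA_BUF_MAP_INIT_VADDR so it can be handed back to drm_gem_shmem_vunmap(). A hypothetical user-space sketch of that initializer; the kernel's macro in `<linux/dma-buf-map.h>` is a brace initializer, while a compound literal is used here so the sketch stays self-contained C11.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical mock of the kernel's struct dma_buf_map, for
 * illustration only. */
struct dma_buf_map {
	union {
		void *vaddr;
		void *vaddr_iomem;
	};
	bool is_iomem;
};

/* Builds a system-memory mapping from a known kernel virtual address,
 * as the panfrost hunk does for perfcnt->buf before vunmap. */
#define DMA_BUF_MAP_INIT_VADDR(vaddr_) \
	((struct dma_buf_map){ .vaddr = (vaddr_), .is_iomem = false })
```

The pattern only works because shmem objects are never I/O memory, so `is_iomem` is known to be false when reconstructing the map from a bare pointer.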
> diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
> index 45fd76e04bdc..e165fa9b2089 100644
> --- a/drivers/gpu/drm/qxl/qxl_display.c
> +++ b/drivers/gpu/drm/qxl/qxl_display.c
> @@ -25,6 +25,7 @@
>
> #include <linux/crc32.h>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_drv.h>
> #include <drm/drm_atomic.h>
> @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> struct drm_gem_object *obj;
> struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
> int ret;
> + struct dma_buf_map user_map;
> + struct dma_buf_map cursor_map;
> void *user_ptr;
> int size = 64*64*4;
>
> @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> user_bo = gem_to_qxl_bo(obj);
>
> /* pinning is done in the prepare/cleanup framevbuffer */
> - ret = qxl_bo_kmap(user_bo, &user_ptr);
> + ret = qxl_bo_kmap(user_bo, &user_map);
> if (ret)
> goto out_free_release;
> + user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_alloc_bo_reserved(qdev, release,
> sizeof(struct qxl_cursor) + size,
> @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> if (ret)
> goto out_unpin;
>
> - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
> + ret = qxl_bo_kmap(cursor_bo, &cursor_map);
> if (ret)
> goto out_backoff;
>
> @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> {
> int ret;
> struct drm_gem_object *gobj;
> + struct dma_buf_map map;
> int monitors_config_size = sizeof(struct qxl_monitors_config) +
> qxl_num_crtc * sizeof(struct qxl_head);
>
> @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> if (ret)
> return ret;
>
> - qxl_bo_kmap(qdev->monitors_config_bo, NULL);
> + qxl_bo_kmap(qdev->monitors_config_bo, &map);
>
> qdev->monitors_config = qdev->monitors_config_bo->kptr;
> qdev->ram_header->monitors_config =
> diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
> index 3599db096973..7b7acb910780 100644
> --- a/drivers/gpu/drm/qxl/qxl_draw.c
> +++ b/drivers/gpu/drm/qxl/qxl_draw.c
> @@ -20,6 +20,8 @@
> * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> */
>
> +#include <linux/dma-buf-map.h>
> +
> #include <drm/drm_fourcc.h>
>
> #include "qxl_drv.h"
> @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
> unsigned int num_clips,
> struct qxl_bo *clips_bo)
> {
> + struct dma_buf_map map;
> struct qxl_clip_rects *dev_clips;
> int ret;
>
> - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
> - if (ret) {
> + ret = qxl_bo_kmap(clips_bo, &map);
> + if (ret)
> return NULL;
> - }
> + dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
> +
> dev_clips->num_rects = num_clips;
> dev_clips->chunk.next_chunk = 0;
> dev_clips->chunk.prev_chunk = 0;
> @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> int stride = fb->pitches[0];
> /* depth is not actually interesting, we don't mask with it */
> int depth = fb->format->cpp[0] * 8;
> + struct dma_buf_map surface_map;
> uint8_t *surface_base;
> struct qxl_release *release;
> struct qxl_bo *clips_bo;
> @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> if (ret)
> goto out_release_backoff;
>
> - ret = qxl_bo_kmap(bo, (void **)&surface_base);
> + ret = qxl_bo_kmap(bo, &surface_map);
> if (ret)
> goto out_release_backoff;
> + surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_image_init(qdev, release, dimage, surface_base,
> left - dumb_shadow_offset,
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 3602e8b34189..eb437fea5d9e 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -30,6 +30,7 @@
> * Definitions taken from spice-protocol, plus kernel driver specific bits.
> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/dma-fence.h>
> #include <linux/firmware.h>
> #include <linux/platform_device.h>
> @@ -50,6 +51,8 @@
>
> #include "qxl_dev.h"
>
> +struct dma_buf_map;
> +
> #define DRIVER_AUTHOR "Dave Airlie"
>
> #define DRIVER_NAME "qxl"
> @@ -79,7 +82,7 @@ struct qxl_bo {
> /* Protected by tbo.reserved */
> struct ttm_place placements[3];
> struct ttm_placement placement;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
> void *kptr;
> unsigned int map_count;
> int type;
> @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
> void qxl_gem_object_close(struct drm_gem_object *obj,
> struct drm_file *file_priv);
> void qxl_bo_force_delete(struct qxl_device *qdev);
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
>
> /* qxl_dumb.c */
> int qxl_mode_dumb_create(struct drm_file *file_priv,
> @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
> struct drm_gem_object *qxl_gem_prime_import_sg_table(
> struct drm_device *dev, struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map);
> int qxl_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
> index 940e99354f49..755df4d8f95f 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.c
> +++ b/drivers/gpu/drm/qxl/qxl_object.c
> @@ -23,10 +23,12 @@
> * Alon Levy
> */
>
> +#include <linux/dma-buf-map.h>
> +#include <linux/io-mapping.h>
> +
> #include "qxl_drv.h"
> #include "qxl_object.h"
>
> -#include <linux/io-mapping.h>
> static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
> {
> struct qxl_bo *bo;
> @@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
> return 0;
> }
>
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
> +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
> {
> - bool is_iomem;
> int r;
>
> if (bo->kptr) {
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count++;
> - return 0;
> + goto out;
> }
> - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
> + r = ttm_bo_vmap(&bo->tbo, &bo->map);
> if (r)
> return r;
> - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count = 1;
> +
> + /* TODO: Remove kptr in favor of map everywhere. */
> + if (bo->map.is_iomem)
> + bo->kptr = (void *)bo->map.vaddr_iomem;
> + else
> + bo->kptr = bo->map.vaddr;
> +
> +out:
> + *map = bo->map;
> return 0;
> }
>
> @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> void *rptr;
> int ret;
> struct io_mapping *map;
> + struct dma_buf_map bo_map;
>
> if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
> map = qdev->vram_mapping;
> @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> return rptr;
> }
>
> - ret = qxl_bo_kmap(bo, &rptr);
> + ret = qxl_bo_kmap(bo, &bo_map);
> if (ret)
> return NULL;
> + rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> rptr += page_offset * PAGE_SIZE;
> return rptr;
> @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
> if (bo->map_count > 0)
> return;
> bo->kptr = NULL;
> - ttm_bo_kunmap(&bo->kmap);
> + ttm_bo_vunmap(&bo->tbo, &bo->map);
> }
>
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
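[Inline note, not part of the patch: the qxl_bo_kmap() hunk above keeps a legacy `kptr` alongside the new map, extracted by branching on `is_iomem`. A hypothetical user-space sketch of that extraction; `map_to_kptr()` is an illustration of the TODO comment in the hunk, not an existing kernel helper.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical user-space mock of struct dma_buf_map; the real
 * definition lives in <linux/dma-buf-map.h>. */
struct dma_buf_map {
	union {
		void *vaddr;
		void *vaddr_iomem;
	};
	bool is_iomem;
};

/* Mirrors the "TODO: Remove kptr in favor of map everywhere" hunk: a
 * plain pointer is recovered by branching on is_iomem. In the kernel
 * this cast also discards the __iomem annotation, which is exactly why
 * the TODO wants callers to use the mapping abstraction directly. */
static void *map_to_kptr(const struct dma_buf_map *map)
{
	if (map->is_iomem)
		return (void *)map->vaddr_iomem;
	return map->vaddr;
}
```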
> diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
> index 09a5c818324d..ebf24c9d2bf2 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.h
> +++ b/drivers/gpu/drm/qxl/qxl_object.h
> @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
> bool kernel, bool pinned, u32 domain,
> struct qxl_surface *surf,
> struct qxl_bo **bo_ptr);
> -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
> +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
> extern void qxl_bo_kunmap(struct qxl_bo *bo);
> void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
> diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
> index 7d3816fca5a8..4aa949799446 100644
> --- a/drivers/gpu/drm/qxl/qxl_prime.c
> +++ b/drivers/gpu/drm/qxl/qxl_prime.c
> @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
> return ERR_PTR(-ENOSYS);
> }
>
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
> - void *ptr;
> int ret;
>
> - ret = qxl_bo_kmap(bo, &ptr);
> + ret = qxl_bo_kmap(bo, map);
> if (ret < 0)
> - return ERR_PTR(ret);
> + return ret;
>
> - return ptr;
> + return 0;
> }
>
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
>
> diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
> index 5d54bccebd4d..44cb5ee6fc20 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -509,7 +509,6 @@ struct radeon_bo {
> /* Constant after initialization */
> struct radeon_device *rdev;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> pid_t pid;
>
> #ifdef CONFIG_MMU_NOTIFIER
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 0ccd7213e41f..d2876ce3bc9e 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -31,6 +31,7 @@
> #include <drm/drm_debugfs.h>
> #include <drm/drm_device.h>
> #include <drm/drm_file.h>
> +#include <drm/drm_gem_ttm_helper.h>
> #include <drm/radeon_drm.h>
>
> #include "radeon.h"
> @@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
> struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
> int radeon_gem_prime_pin(struct drm_gem_object *obj);
> void radeon_gem_prime_unpin(struct drm_gem_object *obj);
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>
> static const struct drm_gem_object_funcs radeon_gem_object_funcs;
>
> @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
> .pin = radeon_gem_prime_pin,
> .unpin = radeon_gem_prime_unpin,
> .get_sg_table = radeon_gem_prime_get_sg_table,
> - .vmap = radeon_gem_prime_vmap,
> - .vunmap = radeon_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
> index b9de0e51c0be..088d39a51c0d 100644
> --- a/drivers/gpu/drm/radeon/radeon_prime.c
> +++ b/drivers/gpu/drm/radeon/radeon_prime.c
> @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
> }
>
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 7d5ebb10323b..7971f57436dd 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
> return ERR_PTR(ret);
> }
>
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> - if (rk_obj->pages)
> - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> - pgprot_writecombine(PAGE_KERNEL));
> + if (rk_obj->pages) {
> + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> + pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> + return 0;
> + }
>
> if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
> - return NULL;
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
>
> - return rk_obj->kvaddr;
> + return 0;
> }
>
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> if (rk_obj->pages) {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> return;
> }
>
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> index 7ffc541bea07..5a70a56cd406 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *
> rockchip_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg);
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /* drm driver mmap file operations */
> int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> index 744a8e337e41..c02e35ed6e76 100644
> --- a/drivers/gpu/drm/tiny/cirrus.c
> +++ b/drivers/gpu/drm/tiny/cirrus.c
> @@ -17,6 +17,7 @@
> */
>
> #include <linux/console.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
> #include <linux/pci.h>
>
> @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> struct drm_rect *rect)
> {
> struct cirrus_device *cirrus = to_cirrus(fb->dev);
> + struct dma_buf_map map;
> void *vmap;
> int idx, ret;
>
> @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> if (!drm_dev_enter(&cirrus->dev, &idx))
> goto out;
>
> - ret = -ENOMEM;
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (!vmap)
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret)
> goto out_dev_exit;
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (cirrus->cpp == fb->format->cpp[0])
> drm_fb_memcpy_dstclip(cirrus->vram,
> @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> else
> WARN_ON_ONCE("cpp mismatch");
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> ret = 0;
>
> out_dev_exit:
> diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
> index cc397671f689..12a890cea6e9 100644
> --- a/drivers/gpu/drm/tiny/gm12u320.c
> +++ b/drivers/gpu/drm/tiny/gm12u320.c
> @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> {
> int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
> struct drm_framebuffer *fb;
> + struct dma_buf_map map;
> void *vaddr;
> u8 *src;
>
> @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> y1 = gm12u320->fb_update.rect.y1;
> y2 = gm12u320->fb_update.rect.y2;
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> + GM12U320_ERR("failed to vmap fb: %d\n", ret);
> goto put_fb;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (fb->obj[0]->import_attach) {
> ret = dma_buf_begin_cpu_access(
> @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
> }
> vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> put_fb:
> drm_framebuffer_put(fb);
> gm12u320->fb_update.fb = NULL;
> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> index fef43f4e3bac..42eeba1dfdbf 100644
> --- a/drivers/gpu/drm/udl/udl_modeset.c
> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> struct urb *urb;
> struct drm_rect clip;
> int log_bpp;
> + struct dma_buf_map map;
> void *vaddr;
>
> ret = udl_log_cpp(fb->format->cpp[0]);
> @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> return ret;
> }
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> DRM_ERROR("failed to vmap fb\n");
> goto out_dma_buf_end_cpu_access;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> urb = udl_get_urb(dev);
> if (!urb)
> @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> ret = 0;
>
> out_drm_gem_shmem_vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> out_dma_buf_end_cpu_access:
> if (import_attach) {
> tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index 931c55126148..f268fb258c83 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -9,6 +9,8 @@
> * Michael Thayer <michael.thayer@oracle.com,
> * Hans de Goede <hdegoede@redhat.com>
> */
> +
> +#include <linux/dma-buf-map.h>
> #include <linux/export.h>
>
> #include <drm/drm_atomic.h>
> @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> u32 height = plane->state->crtc_h;
> size_t data_size, mask_size;
> u32 flags;
> + struct dma_buf_map map;
> + int ret;
> u8 *src;
>
> /*
> @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>
> vbox_crtc->cursor_enabled = true;
>
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> /*
> * BUG: we should have pinned the BO in prepare_fb().
> */
> @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> DRM_WARN("Could not map cursor bo, skipping update\n");
> return;
> }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> /*
> * The mask must be calculated based on the alpha
> @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> data_size = width * height * 4 + mask_size;
>
> copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
>
> flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
> VBOX_MOUSE_POINTER_ALPHA;
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index 557f0d1e6437..f290a9a942dc 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> return drm_gem_cma_prime_mmap(obj, vma);
> }
>
> -void *vc4_prime_vmap(struct drm_gem_object *obj)
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct vc4_bo *bo = to_vc4_bo(obj);
>
> if (bo->validated_shader) {
> DRM_DEBUG("mmaping of shader BOs not allowed.\n");
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
> }
>
> - return drm_gem_cma_prime_vmap(obj);
> + return drm_gem_cma_prime_vmap(obj, map);
> }
>
> struct drm_gem_object *
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index cc79b1aaa878..904f2c36c963 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *vc4_prime_vmap(struct drm_gem_object *obj);
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int vc4_bo_cache_init(struct drm_device *dev);
> void vc4_bo_cache_destroy(struct drm_device *dev);
> int vc4_bo_inc_usecnt(struct vc4_bo *bo);
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index fa54a6d1403d..b2aa26e1e4a2 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> return &obj->base;
> }
>
> -static void *vgem_prime_vmap(struct drm_gem_object *obj)
> +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> long n_pages = obj->size >> PAGE_SHIFT;
> struct page **pages;
> + void *vaddr;
>
> pages = vgem_pin_pages(bo);
> if (IS_ERR(pages))
> - return NULL;
> + return PTR_ERR(pages);
> +
> + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
>
> - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + return 0;
> }
>
> -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> vgem_unpin_pages(bo);
> }
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4f34ef34ba60..74db5a840bed 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> return gem_mmap_obj(xen_obj, vma);
> }
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
> {
> struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> + void *vaddr;
>
> if (!xen_obj->pages)
> - return NULL;
> + return -ENOMEM;
>
> /* Please see comment in gem_mmap_obj on mapping and attributes. */
> - return vmap(xen_obj->pages, xen_obj->num_pages,
> - VM_MAP, PAGE_KERNEL);
> + vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
> + VM_MAP, PAGE_KERNEL);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr)
> + struct dma_buf_map *map)
> {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> }
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> index a39675fa31b2..a4e67d0a149c 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> @@ -12,6 +12,7 @@
> #define __XEN_DRM_FRONT_GEM_H
>
> struct dma_buf_attachment;
> +struct dma_buf_map;
> struct drm_device;
> struct drm_gem_object;
> struct file;
> @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
>
> int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
> + struct dma_buf_map *map);
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr);
> + struct dma_buf_map *map);
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> struct vm_area_struct *vma);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index c38dd35da00b..5e6daa1c982f 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -39,6 +39,7 @@
>
> #include <drm/drm_vma_manager.h>
>
> +struct dma_buf_map;
> struct drm_gem_object;
>
> /**
> @@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void *(*vmap)(struct drm_gem_object *obj);
> + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @vunmap:
> @@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
> + void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @mmap:
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index a064b0d1c480..caf98b9cf4b4 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
> struct sg_table *sgt);
> int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> struct drm_gem_object *
> drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5381f0c8cf6f..3449a0353fe0 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_object *obj);
> void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
>
> diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
> index 128f88174d32..c0d28ba0f5c9 100644
> --- a/include/drm/drm_gem_vram_helper.h
> +++ b/include/drm/drm_gem_vram_helper.h
> @@ -10,6 +10,7 @@
> #include <drm/ttm/ttm_bo_api.h>
> #include <drm/ttm/ttm_bo_driver.h>
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kernel.h> /* for container_of() */
>
> struct drm_mode_create_dumb;
> @@ -29,9 +30,8 @@ struct vm_area_struct;
>
> /**
> * struct drm_gem_vram_object - GEM object backed by VRAM
> - * @gem: GEM object
> * @bo: TTM buffer object
> - * @kmap: Mapping information for @bo
> + * @map: Mapping information for @bo
> * @placement: TTM placement information. Supported placements are \
> %TTM_PL_VRAM and %TTM_PL_SYSTEM
> * @placements: TTM placement information.
> @@ -50,15 +50,15 @@ struct vm_area_struct;
> */
> struct drm_gem_vram_object {
> struct ttm_buffer_object bo;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
>
> /**
> - * @kmap_use_count:
> + * @vmap_use_count:
> *
> * Reference count on the virtual address.
> * The address is unmapped when the count reaches zero.
> */
> - unsigned int kmap_use_count;
> + unsigned int vmap_use_count;
>
> /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
> struct ttm_placement placement;
> @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
> s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
> int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
> int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
>
> int drm_gem_vram_fill_create_dumb(struct drm_file *file,
> struct drm_device *dev,
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
* Re: [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
@ 2020-10-15 14:21 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-15 14:21 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> This patch replaces the vmap/vunmap's use of raw pointers in GEM object
> functions with instances of struct dma_buf_map. GEM backends are
> converted as well. For most of them, this simply changes the returned type.
>
> TTM-based drivers now return information about the location of the memory,
> either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
> et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
> implementing their own vmap callbacks.
>
> v4:
> * use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
> * fix a trailing { in drm_gem_vmap()
> * remove several empty functions instead of converting them (Daniel)
> * comment uses of raw pointers with a TODO (Daniel)
> * TODO list: convert more helpers to use struct dma_buf_map
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
The amdgpu changes look good to me, but I can't fully judge the other stuff.
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> Documentation/gpu/todo.rst | 18 ++++
> drivers/gpu/drm/Kconfig | 2 +
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 -------
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 -
> drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 -
> drivers/gpu/drm/ast/ast_cursor.c | 27 +++--
> drivers/gpu/drm/ast/ast_drv.h | 7 +-
> drivers/gpu/drm/drm_gem.c | 23 +++--
> drivers/gpu/drm/drm_gem_cma_helper.c | 10 +-
> drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++----
> drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++----------
> drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +-
> drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +-
> drivers/gpu/drm/lima/lima_gem.c | 6 +-
> drivers/gpu/drm/lima/lima_sched.c | 11 +-
> drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +-
> drivers/gpu/drm/nouveau/Kconfig | 1 +
> drivers/gpu/drm/nouveau/nouveau_bo.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +-
> drivers/gpu/drm/nouveau/nouveau_gem.h | 2 -
> drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ----
> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +--
> drivers/gpu/drm/qxl/qxl_display.c | 11 +-
> drivers/gpu/drm/qxl/qxl_draw.c | 14 ++-
> drivers/gpu/drm/qxl/qxl_drv.h | 11 +-
> drivers/gpu/drm/qxl/qxl_object.c | 31 +++---
> drivers/gpu/drm/qxl/qxl_object.h | 2 +-
> drivers/gpu/drm/qxl/qxl_prime.c | 12 +--
> drivers/gpu/drm/radeon/radeon.h | 1 -
> drivers/gpu/drm/radeon/radeon_gem.c | 7 +-
> drivers/gpu/drm/radeon/radeon_prime.c | 20 ----
> drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++--
> drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +-
> drivers/gpu/drm/tiny/cirrus.c | 10 +-
> drivers/gpu/drm/tiny/gm12u320.c | 10 +-
> drivers/gpu/drm/udl/udl_modeset.c | 8 +-
> drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +-
> drivers/gpu/drm/vc4/vc4_bo.c | 6 +-
> drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
> drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-
> drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++--
> drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +-
> include/drm/drm_gem.h | 5 +-
> include/drm/drm_gem_cma_helper.h | 2 +-
> include/drm/drm_gem_shmem_helper.h | 4 +-
> include/drm/drm_gem_vram_helper.h | 14 +--
> 47 files changed, 321 insertions(+), 295 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 700637e25ecd..7e6fc3c04add 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
>
> Level: Intermediate
>
> +Use struct dma_buf_map throughout codebase
> +------------------------------------------
> +
> +Pointers to shared device memory are stored in struct dma_buf_map. Each
> +instance knows whether it refers to system or I/O memory. Most of the DRM-wide
> +interfaces have been converted to use struct dma_buf_map, but implementations
> +often still use raw pointers.
> +
> +The task is to use struct dma_buf_map where it makes sense.
> +
> +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
> +* TTM might benefit from using struct dma_buf_map internally.
> +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
> +
> +Level: Intermediate
> +
>
> Core refactorings
> =================
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 147d61b9674e..319839b87d37 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -239,6 +239,7 @@ config DRM_RADEON
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> @@ -259,6 +260,7 @@ config DRM_AMDGPU
> select DRM_KMS_HELPER
> select DRM_SCHED
> select DRM_TTM
> + select DRM_TTM_HELPER
> select POWER_SUPPLY
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 5b465ab774d1..e5919efca870 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -41,42 +41,6 @@
> #include <linux/dma-fence-array.h>
> #include <linux/pci-p2pdma.h>
>
> -/**
> - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
> - * @obj: GEM BO
> - *
> - * Sets up an in-kernel virtual mapping of the BO's memory.
> - *
> - * Returns:
> - * The virtual address of the mapping or an error pointer.
> - */
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -/**
> - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
> - * @obj: GEM BO
> - * @vaddr: Virtual address (unused)
> - *
> - * Tears down the in-kernel virtual mapping of the BO's memory.
> - */
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> /**
> * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
> * @obj: GEM BO
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> index 2c5c84a06bb9..39b5b9616fd8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
> struct dma_buf *dma_buf);
> bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
> struct amdgpu_bo *bo);
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index be08a63ef58c..576659827e74 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -33,6 +33,7 @@
>
> #include <drm/amdgpu_drm.h>
> #include <drm/drm_debugfs.h>
> +#include <drm/drm_gem_ttm_helper.h>
>
> #include "amdgpu.h"
> #include "amdgpu_display.h"
> @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
> .open = amdgpu_gem_object_open,
> .close = amdgpu_gem_object_close,
> .export = amdgpu_gem_prime_export,
> - .vmap = amdgpu_gem_prime_vmap,
> - .vunmap = amdgpu_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> index 132e5f955180..01296ef0d673 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> @@ -100,7 +100,6 @@ struct amdgpu_bo {
> struct amdgpu_bo *parent;
> struct amdgpu_bo *shadow;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> struct amdgpu_mn *mn;
>
>
> diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
> index e0f4613918ad..742d43a7edf4 100644
> --- a/drivers/gpu/drm/ast/ast_cursor.c
> +++ b/drivers/gpu/drm/ast/ast_cursor.c
> @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
>
> for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
> struct drm_device *dev = &ast->base;
> size_t size, i;
> struct drm_gem_vram_object *gbo;
> - void __iomem *vaddr;
> + struct dma_buf_map map;
> int ret;
>
> size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
> @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
> - vaddr = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(vaddr)) {
> - ret = PTR_ERR(vaddr);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> goto err_drm_gem_vram_put;
> }
>
> ast->cursor.gbo[i] = gbo;
> - ast->cursor.vaddr[i] = vaddr;
> + ast->cursor.map[i] = map;
> }
>
> return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
> @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
> while (i) {
> --i;
> gbo = ast->cursor.gbo[i];
> - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> drm_gem_vram_unpin(gbo);
> drm_gem_vram_put(gbo);
> }
> @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> {
> struct drm_device *dev = &ast->base;
> struct drm_gem_vram_object *gbo;
> + struct dma_buf_map map;
> int ret;
> void *src;
> void __iomem *dst;
> @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> ret = drm_gem_vram_pin(gbo, 0);
> if (ret)
> return ret;
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> - ret = PTR_ERR(src);
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret)
> goto err_drm_gem_vram_unpin;
> - }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
>
> /* do data transfer to cursor BO */
> update_cursor_image(dst, src, fb->width, fb->height);
>
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
> drm_gem_vram_unpin(gbo);
>
> return 0;
> @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
> u8 __iomem *sig;
> u8 jreg;
>
> - dst = ast->cursor.vaddr[ast->cursor.next_index];
> + dst = ast->cursor.map[ast->cursor.next_index].vaddr;
>
> sig = dst + AST_HWC_SIZE;
> writel(x, sig + AST_HWC_SIGNATURE_X);
> diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
> index 467049ca8430..f963141dd851 100644
> --- a/drivers/gpu/drm/ast/ast_drv.h
> +++ b/drivers/gpu/drm/ast/ast_drv.h
> @@ -28,10 +28,11 @@
> #ifndef __AST_DRV_H__
> #define __AST_DRV_H__
>
> -#include <linux/types.h>
> -#include <linux/io.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/i2c.h>
> #include <linux/i2c-algo-bit.h>
> +#include <linux/io.h>
> +#include <linux/types.h>
>
> #include <drm/drm_connector.h>
> #include <drm/drm_crtc.h>
> @@ -131,7 +132,7 @@ struct ast_private {
>
> struct {
> struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
> - void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
> + struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
> unsigned int next_index;
> } cursor;
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 1da67d34e55d..a89ad4570e3c 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -36,6 +36,7 @@
> #include <linux/pagemap.h>
> #include <linux/shmem_fs.h>
> #include <linux/dma-buf.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/mem_encrypt.h>
> #include <linux/pagevec.h>
>
> @@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>
> void *drm_gem_vmap(struct drm_gem_object *obj)
> {
> - void *vaddr;
> + struct dma_buf_map map;
> + int ret;
>
> - if (obj->funcs->vmap)
> - vaddr = obj->funcs->vmap(obj);
> - else
> - vaddr = ERR_PTR(-EOPNOTSUPP);
> + if (!obj->funcs->vmap)
> + return ERR_PTR(-EOPNOTSUPP);
>
> - if (!vaddr)
> - vaddr = ERR_PTR(-ENOMEM);
> + ret = obj->funcs->vmap(obj, &map);
> + if (ret)
> + return ERR_PTR(ret);
> + else if (dma_buf_map_is_null(&map))
> + return ERR_PTR(-ENOMEM);
>
> - return vaddr;
> + return map.vaddr;
> }
>
> void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
> {
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> +
> if (!vaddr)
> return;
>
> if (obj->funcs->vunmap)
> - obj->funcs->vunmap(obj, vaddr);
> + obj->funcs->vunmap(obj, &map);
> }
>
> /**
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index d527485ea0b7..b57e3e9222f0 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
> * address space
> * @obj: GEM object
> + * @map: Returns the kernel virtual address of the CMA GEM object's backing
> + * store.
> *
> * This function maps a buffer exported via DRM PRIME into the kernel's
> * virtual address space. Since the CMA buffers are already mapped into the
> @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> * driver's &drm_gem_object_funcs.vmap callback.
> *
> * Returns:
> - * The kernel virtual address of the CMA GEM object's backing store.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
>
> - return cma_obj->vaddr;
> + dma_buf_map_set_vaddr(map, cma_obj->vaddr);
> +
> + return 0;
> }
> EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index fb11df7aced5..5553f58f68f3 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin);
>
> -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map;
> int ret = 0;
>
> - if (shmem->vmap_use_count++ > 0)
> - return shmem->vaddr;
> + if (shmem->vmap_use_count++ > 0) {
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> + return 0;
> + }
>
> if (obj->import_attach) {
> - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> - if (!ret)
> - shmem->vaddr = map.vaddr;
> + ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> + if (!ret) {
> + if (WARN_ON(map->is_iomem)) {
> + ret = -EIO;
> + goto err_put_pages;
> + }
> + shmem->vaddr = map->vaddr;
> + }
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> VM_MAP, prot);
> if (!shmem->vaddr)
> ret = -ENOMEM;
> + else
> + dma_buf_map_set_vaddr(map, shmem->vaddr);
> }
>
> if (ret) {
> @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> goto err_put_pages;
> }
>
> - return shmem->vaddr;
> + return 0;
>
> err_put_pages:
> if (!obj->import_attach)
> @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> err_zero_use:
> shmem->vmap_use_count = 0;
>
> - return ERR_PTR(ret);
> + return ret;
> }
>
> /*
> * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + * store.
> *
> * This function makes sure that a contiguous kernel virtual address mapping
> * exists for the buffer backing the shmem GEM object.
> @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> * Returns:
> * 0 on success or a negative error code on failure.
> */
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> - void *vaddr;
> int ret;
>
> ret = mutex_lock_interruptible(&shmem->vmap_lock);
> if (ret)
> - return ERR_PTR(ret);
> - vaddr = drm_gem_shmem_vmap_locked(shmem);
> + return ret;
> + ret = drm_gem_shmem_vmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
>
> - return vaddr;
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_shmem_vmap);
>
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> + struct dma_buf_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
>
> if (WARN_ON_ONCE(!shmem->vmap_use_count))
> return;
> @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> return;
>
> if (obj->import_attach)
> - dma_buf_vunmap(obj->import_attach->dmabuf, &map);
> + dma_buf_vunmap(obj->import_attach->dmabuf, map);
> else
> vunmap(shmem->vaddr);
>
> @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> /*
> * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
> * @shmem: shmem GEM object
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
> *
> * This function cleans up a kernel virtual address mapping acquired by
> * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> * also be called by drivers directly, in which case it will hide the
> * differences between dma-buf imported and natively allocated objects.
> */
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> mutex_lock(&shmem->vmap_lock);
> - drm_gem_shmem_vunmap_locked(shmem);
> + drm_gem_shmem_vunmap_locked(shmem, map);
> mutex_unlock(&shmem->vmap_lock);
> }
> EXPORT_SYMBOL(drm_gem_shmem_vunmap);
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 2d5ed30518f1..4d8553b28558 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -1,5 +1,6 @@
> // SPDX-License-Identifier: GPL-2.0-or-later
>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
>
> #include <drm/drm_debugfs.h>
> @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
> * up; only release the GEM object.
> */
>
> - WARN_ON(gbo->kmap_use_count);
> - WARN_ON(gbo->kmap.virtual);
> + WARN_ON(gbo->vmap_use_count);
> + WARN_ON(dma_buf_map_is_set(&gbo->map));
>
> drm_gem_object_release(&gbo->bo.base);
> }
> @@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
> }
> EXPORT_SYMBOL(drm_gem_vram_unpin);
>
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
> +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> int ret;
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> - bool is_iomem;
>
> - if (gbo->kmap_use_count > 0)
> + if (gbo->vmap_use_count > 0)
> goto out;
>
> - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
> + ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> out:
> - ++gbo->kmap_use_count;
> - return ttm_kmap_obj_virtual(kmap, &is_iomem);
> + ++gbo->vmap_use_count;
> + *map = gbo->map;
> +
> + return 0;
> }
>
> -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
> + struct dma_buf_map *map)
> {
> - if (WARN_ON_ONCE(!gbo->kmap_use_count))
> + struct drm_device *dev = gbo->bo.base.dev;
> +
> + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
> return;
> - if (--gbo->kmap_use_count > 0)
> +
> + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
> + return; /* BUG: map not mapped from this BO */
> +
> + if (--gbo->vmap_use_count > 0)
> return;
>
> /*
> @@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> /**
> * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
> * space
> - * @gbo: The GEM VRAM object to map
> + * @gbo: The GEM VRAM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * The vmap function pins a GEM VRAM object to its current location, either
> * system or video memory, and maps its buffer into kernel address space.
> @@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> * unmap and unpin the GEM VRAM object.
> *
> * Returns:
> - * The buffer's virtual address on success, or
> - * an ERR_PTR()-encoded error code otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
> - void *base;
>
> ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
> if (ret)
> - return ERR_PTR(ret);
> + return ret;
>
> ret = drm_gem_vram_pin_locked(gbo, 0);
> if (ret)
> goto err_ttm_bo_unreserve;
> - base = drm_gem_vram_kmap_locked(gbo);
> - if (IS_ERR(base)) {
> - ret = PTR_ERR(base);
> + ret = drm_gem_vram_kmap_locked(gbo, map);
> + if (ret)
> goto err_drm_gem_vram_unpin_locked;
> - }
>
> ttm_bo_unreserve(&gbo->bo);
>
> - return base;
> + return 0;
>
> err_drm_gem_vram_unpin_locked:
> drm_gem_vram_unpin_locked(gbo);
> err_ttm_bo_unreserve:
> ttm_bo_unreserve(&gbo->bo);
> - return ERR_PTR(ret);
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_vram_vmap);
>
> /**
> * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
> - * @gbo: The GEM VRAM object to unmap
> - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
> + * @gbo: The GEM VRAM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> *
> * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
> * the documentation for drm_gem_vram_vmap() for more information.
> */
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
> {
> int ret;
>
> @@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
> return;
>
> - drm_gem_vram_kunmap_locked(gbo);
> + drm_gem_vram_kunmap_locked(gbo, map);
> drm_gem_vram_unpin_locked(gbo);
>
> ttm_bo_unreserve(&gbo->bo);
> @@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
> bool evict,
> struct ttm_resource *new_mem)
> {
> - struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> + struct ttm_buffer_object *bo = &gbo->bo;
> + struct drm_device *dev = bo->base.dev;
>
> - if (WARN_ON_ONCE(gbo->kmap_use_count))
> + if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
> return;
>
> - if (!kmap->virtual)
> - return;
> - ttm_bo_kunmap(kmap);
> - kmap->virtual = NULL;
> + ttm_bo_vunmap(bo, &gbo->map);
> }
>
> static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
> @@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
> }
>
> /**
> - * drm_gem_vram_object_vmap() - \
> - Implements &struct drm_gem_object_funcs.vmap
> - * @gem: The GEM object to map
> + * drm_gem_vram_object_vmap() -
> + * Implements &struct drm_gem_object_funcs.vmap
> + * @gem: The GEM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + * store.
> *
> * Returns:
> - * The buffers virtual address on success, or
> - * NULL otherwise.
> + * 0 on success, or a negative error code otherwise.
> */
> -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
> +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> - void *base;
>
> - base = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(base))
> - return NULL;
> - return base;
> + return drm_gem_vram_vmap(gbo, map);
> }
>
> /**
> - * drm_gem_vram_object_vunmap() - \
> - Implements &struct drm_gem_object_funcs.vunmap
> - * @gem: The GEM object to unmap
> - * @vaddr: The mapping's base address
> + * drm_gem_vram_object_vunmap() -
> + * Implements &struct drm_gem_object_funcs.vunmap
> + * @gem: The GEM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
> */
> -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
> - void *vaddr)
> +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
> {
> struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
>
> - drm_gem_vram_vunmap(gbo, vaddr);
> + drm_gem_vram_vunmap(gbo, map);
> }
>
> /*
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 9682c26d89bb..f5be627e1de0 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
> int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
> struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index a6d9932a32ae..bc2543dd987d 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
> }
>
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> - return etnaviv_gem_vmap(obj);
> + void *vaddr = etnaviv_gem_vmap(obj);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 11223fe348df..832e5280a6ed 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
> return drm_gem_shmem_pin(obj);
> }
>
> -static void *lima_gem_vmap(struct drm_gem_object *obj)
> +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct lima_bo *bo = to_lima_bo(obj);
>
> if (bo->heap_size)
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
>
> - return drm_gem_shmem_vmap(obj);
> + return drm_gem_shmem_vmap(obj, map);
> }
>
> static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index dc6df9e9a40d..a070a85f8f36 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -1,6 +1,7 @@
> // SPDX-License-Identifier: GPL-2.0 OR MIT
> /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kthread.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> struct lima_dump_chunk_buffer *buffer_chunk;
> u32 size, task_size, mem_size;
> int i;
> + struct dma_buf_map map;
> + int ret;
>
> mutex_lock(&dev->error_task_list_lock);
>
> @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> } else {
> buffer_chunk->size = lima_bo_size(bo);
>
> - data = drm_gem_shmem_vmap(&bo->base.base);
> - if (IS_ERR_OR_NULL(data)) {
> + ret = drm_gem_shmem_vmap(&bo->base.base, &map);
> + if (ret) {
> kvfree(et);
> goto out;
> }
>
> - memcpy(buffer_chunk + 1, data, buffer_chunk->size);
> + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
>
> - drm_gem_shmem_vunmap(&bo->base.base, data);
> + drm_gem_shmem_vunmap(&bo->base.base, &map);
> }
>
> buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 38672f9e5c4f..8ef76769b97f 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -9,6 +9,7 @@
> */
>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_atomic_helper.h>
> #include <drm/drm_atomic_state_helper.h>
> @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
> struct drm_rect *clip)
> {
> struct drm_device *dev = &mdev->base;
> + struct dma_buf_map map;
> void *vmap;
> + int ret;
>
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (drm_WARN_ON(dev, !vmap))
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (drm_WARN_ON(dev, ret))
> return; /* BUG: SHMEM BO should always be vmapped */
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
>
> /* Always scanout image at VRAM offset 0 */
> mgag200_set_startadd(mdev, (u32)0);
> diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
> index 5dec1e5694b7..9436310d0854 100644
> --- a/drivers/gpu/drm/nouveau/Kconfig
> +++ b/drivers/gpu/drm/nouveau/Kconfig
> @@ -6,6 +6,7 @@ config DRM_NOUVEAU
> select FW_LOADER
> select DRM_KMS_HELPER
> select DRM_TTM
> + select DRM_TTM_HELPER
> select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
> select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
> select X86_PLATFORM_DEVICES if ACPI && X86
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
> index 641ef6298a0e..6045b85a762a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
> @@ -39,8 +39,6 @@ struct nouveau_bo {
> unsigned mode;
>
> struct nouveau_drm_tile *tile;
> -
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> };
>
> static inline struct nouveau_bo *
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index 9a421c3949de..f942b526b0a5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -24,6 +24,8 @@
> *
> */
>
> +#include <drm/drm_gem_ttm_helper.h>
> +
> #include "nouveau_drv.h"
> #include "nouveau_dma.h"
> #include "nouveau_fence.h"
> @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
> .pin = nouveau_gem_prime_pin,
> .unpin = nouveau_gem_prime_unpin,
> .get_sg_table = nouveau_gem_prime_get_sg_table,
> - .vmap = nouveau_gem_prime_vmap,
> - .vunmap = nouveau_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> int
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
> index b35c180322e2..3b919c7c931c 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
> @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
> extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
> extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
> struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
> -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
> -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
>
> #endif
> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
> index a8264aebf3d4..2f16b5249283 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
> }
>
> -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
> - &nvbo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return nvbo->dma_buf_vmap.virtual;
> -}
> -
> -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> -
> - ttm_bo_kunmap(&nvbo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> index fdbc8d949135..5ab03d605f57 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> @@ -5,6 +5,7 @@
> #include <drm/drm_gem_shmem_helper.h>
> #include <drm/panfrost_drm.h>
> #include <linux/completion.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/iopoll.h>
> #include <linux/pm_runtime.h>
> #include <linux/slab.h>
> @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map;
> struct drm_gem_shmem_object *bo;
> u32 cfg, as;
> int ret;
> @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> goto err_close_bo;
> }
>
> - perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
> - if (IS_ERR(perfcnt->buf)) {
> - ret = PTR_ERR(perfcnt->buf);
> + ret = drm_gem_shmem_vmap(&bo->base, &map);
> + if (ret)
> goto err_put_mapping;
> - }
> + perfcnt->buf = map.vaddr;
>
> /*
> * Invalidate the cache and clear the counters to start from a fresh
> @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> return 0;
>
> err_vunmap:
> - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&bo->base, &map);
> err_put_mapping:
> panfrost_gem_mapping_put(perfcnt->mapping);
> err_close_bo:
> @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> {
> struct panfrost_file_priv *user = file_priv->driver_priv;
> struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
>
> if (user != perfcnt->user)
> return -EINVAL;
> @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
>
> perfcnt->user = NULL;
> - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
> + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
> perfcnt->buf = NULL;
> panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
> panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
> diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
> index 45fd76e04bdc..e165fa9b2089 100644
> --- a/drivers/gpu/drm/qxl/qxl_display.c
> +++ b/drivers/gpu/drm/qxl/qxl_display.c
> @@ -25,6 +25,7 @@
>
> #include <linux/crc32.h>
> #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>
> #include <drm/drm_drv.h>
> #include <drm/drm_atomic.h>
> @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> struct drm_gem_object *obj;
> struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
> int ret;
> + struct dma_buf_map user_map;
> + struct dma_buf_map cursor_map;
> void *user_ptr;
> int size = 64*64*4;
>
> @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> user_bo = gem_to_qxl_bo(obj);
>
> /* pinning is done in the prepare/cleanup framevbuffer */
> - ret = qxl_bo_kmap(user_bo, &user_ptr);
> + ret = qxl_bo_kmap(user_bo, &user_map);
> if (ret)
> goto out_free_release;
> + user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_alloc_bo_reserved(qdev, release,
> sizeof(struct qxl_cursor) + size,
> @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
> if (ret)
> goto out_unpin;
>
> - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
> + ret = qxl_bo_kmap(cursor_bo, &cursor_map);
> if (ret)
> goto out_backoff;
>
> @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> {
> int ret;
> struct drm_gem_object *gobj;
> + struct dma_buf_map map;
> int monitors_config_size = sizeof(struct qxl_monitors_config) +
> qxl_num_crtc * sizeof(struct qxl_head);
>
> @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
> if (ret)
> return ret;
>
> - qxl_bo_kmap(qdev->monitors_config_bo, NULL);
> + qxl_bo_kmap(qdev->monitors_config_bo, &map);
>
> qdev->monitors_config = qdev->monitors_config_bo->kptr;
> qdev->ram_header->monitors_config =
> diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
> index 3599db096973..7b7acb910780 100644
> --- a/drivers/gpu/drm/qxl/qxl_draw.c
> +++ b/drivers/gpu/drm/qxl/qxl_draw.c
> @@ -20,6 +20,8 @@
> * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> */
>
> +#include <linux/dma-buf-map.h>
> +
> #include <drm/drm_fourcc.h>
>
> #include "qxl_drv.h"
> @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
> unsigned int num_clips,
> struct qxl_bo *clips_bo)
> {
> + struct dma_buf_map map;
> struct qxl_clip_rects *dev_clips;
> int ret;
>
> - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
> - if (ret) {
> + ret = qxl_bo_kmap(clips_bo, &map);
> + if (ret)
> return NULL;
> - }
> + dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
> +
> dev_clips->num_rects = num_clips;
> dev_clips->chunk.next_chunk = 0;
> dev_clips->chunk.prev_chunk = 0;
> @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> int stride = fb->pitches[0];
> /* depth is not actually interesting, we don't mask with it */
> int depth = fb->format->cpp[0] * 8;
> + struct dma_buf_map surface_map;
> uint8_t *surface_base;
> struct qxl_release *release;
> struct qxl_bo *clips_bo;
> @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
> if (ret)
> goto out_release_backoff;
>
> - ret = qxl_bo_kmap(bo, (void **)&surface_base);
> + ret = qxl_bo_kmap(bo, &surface_map);
> if (ret)
> goto out_release_backoff;
> + surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> ret = qxl_image_init(qdev, release, dimage, surface_base,
> left - dumb_shadow_offset,
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 3602e8b34189..eb437fea5d9e 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -30,6 +30,7 @@
> * Definitions taken from spice-protocol, plus kernel driver specific bits.
> */
>
> +#include <linux/dma-buf-map.h>
> #include <linux/dma-fence.h>
> #include <linux/firmware.h>
> #include <linux/platform_device.h>
> @@ -50,6 +51,8 @@
>
> #include "qxl_dev.h"
>
> +struct dma_buf_map;
> +
> #define DRIVER_AUTHOR "Dave Airlie"
>
> #define DRIVER_NAME "qxl"
> @@ -79,7 +82,7 @@ struct qxl_bo {
> /* Protected by tbo.reserved */
> struct ttm_place placements[3];
> struct ttm_placement placement;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
> void *kptr;
> unsigned int map_count;
> int type;
> @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
> void qxl_gem_object_close(struct drm_gem_object *obj,
> struct drm_file *file_priv);
> void qxl_bo_force_delete(struct qxl_device *qdev);
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
>
> /* qxl_dumb.c */
> int qxl_mode_dumb_create(struct drm_file *file_priv,
> @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
> struct drm_gem_object *qxl_gem_prime_import_sg_table(
> struct drm_device *dev, struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map);
> int qxl_gem_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
>
> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
> index 940e99354f49..755df4d8f95f 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.c
> +++ b/drivers/gpu/drm/qxl/qxl_object.c
> @@ -23,10 +23,12 @@
> * Alon Levy
> */
>
> +#include <linux/dma-buf-map.h>
> +#include <linux/io-mapping.h>
> +
> #include "qxl_drv.h"
> #include "qxl_object.h"
>
> -#include <linux/io-mapping.h>
> static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
> {
> struct qxl_bo *bo;
> @@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
> return 0;
> }
>
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
> +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
> {
> - bool is_iomem;
> int r;
>
> if (bo->kptr) {
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count++;
> - return 0;
> + goto out;
> }
> - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
> + r = ttm_bo_vmap(&bo->tbo, &bo->map);
> if (r)
> return r;
> - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
> - if (ptr)
> - *ptr = bo->kptr;
> bo->map_count = 1;
> +
> + /* TODO: Remove kptr in favor of map everywhere. */
> + if (bo->map.is_iomem)
> + bo->kptr = (void *)bo->map.vaddr_iomem;
> + else
> + bo->kptr = bo->map.vaddr;
> +
> +out:
> + *map = bo->map;
> return 0;
> }
>
> @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> void *rptr;
> int ret;
> struct io_mapping *map;
> + struct dma_buf_map bo_map;
>
> if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
> map = qdev->vram_mapping;
> @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
> return rptr;
> }
>
> - ret = qxl_bo_kmap(bo, &rptr);
> + ret = qxl_bo_kmap(bo, &bo_map);
> if (ret)
> return NULL;
> + rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
>
> rptr += page_offset * PAGE_SIZE;
> return rptr;
> @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
> if (bo->map_count > 0)
> return;
> bo->kptr = NULL;
> - ttm_bo_kunmap(&bo->kmap);
> + ttm_bo_vunmap(&bo->tbo, &bo->map);
> }
>
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
> diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
> index 09a5c818324d..ebf24c9d2bf2 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.h
> +++ b/drivers/gpu/drm/qxl/qxl_object.h
> @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
> bool kernel, bool pinned, u32 domain,
> struct qxl_surface *surf,
> struct qxl_bo **bo_ptr);
> -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
> +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
> extern void qxl_bo_kunmap(struct qxl_bo *bo);
> void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
> void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
> diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
> index 7d3816fca5a8..4aa949799446 100644
> --- a/drivers/gpu/drm/qxl/qxl_prime.c
> +++ b/drivers/gpu/drm/qxl/qxl_prime.c
> @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
> return ERR_PTR(-ENOSYS);
> }
>
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
> - void *ptr;
> int ret;
>
> - ret = qxl_bo_kmap(bo, &ptr);
> + ret = qxl_bo_kmap(bo, map);
> if (ret < 0)
> - return ERR_PTR(ret);
> + return ret;
>
> - return ptr;
> + return 0;
> }
>
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> + struct dma_buf_map *map)
> {
> struct qxl_bo *bo = gem_to_qxl_bo(obj);
>
> diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
> index 5d54bccebd4d..44cb5ee6fc20 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -509,7 +509,6 @@ struct radeon_bo {
> /* Constant after initialization */
> struct radeon_device *rdev;
>
> - struct ttm_bo_kmap_obj dma_buf_vmap;
> pid_t pid;
>
> #ifdef CONFIG_MMU_NOTIFIER
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 0ccd7213e41f..d2876ce3bc9e 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -31,6 +31,7 @@
> #include <drm/drm_debugfs.h>
> #include <drm/drm_device.h>
> #include <drm/drm_file.h>
> +#include <drm/drm_gem_ttm_helper.h>
> #include <drm/radeon_drm.h>
>
> #include "radeon.h"
> @@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
> struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
> int radeon_gem_prime_pin(struct drm_gem_object *obj);
> void radeon_gem_prime_unpin(struct drm_gem_object *obj);
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>
> static const struct drm_gem_object_funcs radeon_gem_object_funcs;
>
> @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
> .pin = radeon_gem_prime_pin,
> .unpin = radeon_gem_prime_unpin,
> .get_sg_table = radeon_gem_prime_get_sg_table,
> - .vmap = radeon_gem_prime_vmap,
> - .vunmap = radeon_gem_prime_vunmap,
> + .vmap = drm_gem_ttm_vmap,
> + .vunmap = drm_gem_ttm_vunmap,
> };
>
> /*
> diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
> index b9de0e51c0be..088d39a51c0d 100644
> --- a/drivers/gpu/drm/radeon/radeon_prime.c
> +++ b/drivers/gpu/drm/radeon/radeon_prime.c
> @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
> return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
> }
>
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> - int ret;
> -
> - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> - &bo->dma_buf_vmap);
> - if (ret)
> - return ERR_PTR(ret);
> -
> - return bo->dma_buf_vmap.virtual;
> -}
> -
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> - struct radeon_bo *bo = gem_to_radeon_bo(obj);
> -
> - ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
> struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg)
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 7d5ebb10323b..7971f57436dd 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
> return ERR_PTR(ret);
> }
>
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> - if (rk_obj->pages)
> - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> - pgprot_writecombine(PAGE_KERNEL));
> + if (rk_obj->pages) {
> + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> + pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> + return 0;
> + }
>
> if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
> - return NULL;
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
>
> - return rk_obj->kvaddr;
> + return 0;
> }
>
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>
> if (rk_obj->pages) {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> return;
> }
>
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> index 7ffc541bea07..5a70a56cd406 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *
> rockchip_gem_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sg);
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /* drm driver mmap file operations */
> int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> index 744a8e337e41..c02e35ed6e76 100644
> --- a/drivers/gpu/drm/tiny/cirrus.c
> +++ b/drivers/gpu/drm/tiny/cirrus.c
> @@ -17,6 +17,7 @@
> */
>
> #include <linux/console.h>
> +#include <linux/dma-buf-map.h>
> #include <linux/module.h>
> #include <linux/pci.h>
>
> @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> struct drm_rect *rect)
> {
> struct cirrus_device *cirrus = to_cirrus(fb->dev);
> + struct dma_buf_map map;
> void *vmap;
> int idx, ret;
>
> @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> if (!drm_dev_enter(&cirrus->dev, &idx))
> goto out;
>
> - ret = -ENOMEM;
> - vmap = drm_gem_shmem_vmap(fb->obj[0]);
> - if (!vmap)
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret)
> goto out_dev_exit;
> + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (cirrus->cpp == fb->format->cpp[0])
> drm_fb_memcpy_dstclip(cirrus->vram,
> @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
> else
> WARN_ON_ONCE("cpp mismatch");
>
> - drm_gem_shmem_vunmap(fb->obj[0], vmap);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> ret = 0;
>
> out_dev_exit:
> diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
> index cc397671f689..12a890cea6e9 100644
> --- a/drivers/gpu/drm/tiny/gm12u320.c
> +++ b/drivers/gpu/drm/tiny/gm12u320.c
> @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> {
> int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
> struct drm_framebuffer *fb;
> + struct dma_buf_map map;
> void *vaddr;
> u8 *src;
>
> @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> y1 = gm12u320->fb_update.rect.y1;
> y2 = gm12u320->fb_update.rect.y2;
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> + GM12U320_ERR("failed to vmap fb: %d\n", ret);
> goto put_fb;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> if (fb->obj[0]->import_attach) {
> ret = dma_buf_begin_cpu_access(
> @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
> GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
> }
> vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> put_fb:
> drm_framebuffer_put(fb);
> gm12u320->fb_update.fb = NULL;
> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> index fef43f4e3bac..42eeba1dfdbf 100644
> --- a/drivers/gpu/drm/udl/udl_modeset.c
> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> struct urb *urb;
> struct drm_rect clip;
> int log_bpp;
> + struct dma_buf_map map;
> void *vaddr;
>
> ret = udl_log_cpp(fb->format->cpp[0]);
> @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> return ret;
> }
>
> - vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> - if (IS_ERR(vaddr)) {
> + ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + if (ret) {
> DRM_ERROR("failed to vmap fb\n");
> goto out_dma_buf_end_cpu_access;
> }
> + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> urb = udl_get_urb(dev);
> if (!urb)
> @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
> ret = 0;
>
> out_drm_gem_shmem_vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> + drm_gem_shmem_vunmap(fb->obj[0], &map);
> out_dma_buf_end_cpu_access:
> if (import_attach) {
> tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index 931c55126148..f268fb258c83 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -9,6 +9,8 @@
> * Michael Thayer <michael.thayer@oracle.com,
> * Hans de Goede <hdegoede@redhat.com>
> */
> +
> +#include <linux/dma-buf-map.h>
> #include <linux/export.h>
>
> #include <drm/drm_atomic.h>
> @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> u32 height = plane->state->crtc_h;
> size_t data_size, mask_size;
> u32 flags;
> + struct dma_buf_map map;
> + int ret;
> u8 *src;
>
> /*
> @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>
> vbox_crtc->cursor_enabled = true;
>
> - src = drm_gem_vram_vmap(gbo);
> - if (IS_ERR(src)) {
> + ret = drm_gem_vram_vmap(gbo, &map);
> + if (ret) {
> /*
> * BUG: we should have pinned the BO in prepare_fb().
> */
> @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> DRM_WARN("Could not map cursor bo, skipping update\n");
> return;
> }
> + src = map.vaddr; /* TODO: Use mapping abstraction properly */
>
> /*
> * The mask must be calculated based on the alpha
> @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> data_size = width * height * 4 + mask_size;
>
> copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> - drm_gem_vram_vunmap(gbo, src);
> + drm_gem_vram_vunmap(gbo, &map);
>
> flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
> VBOX_MOUSE_POINTER_ALPHA;
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index 557f0d1e6437..f290a9a942dc 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> return drm_gem_cma_prime_mmap(obj, vma);
> }
>
> -void *vc4_prime_vmap(struct drm_gem_object *obj)
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct vc4_bo *bo = to_vc4_bo(obj);
>
> if (bo->validated_shader) {
> DRM_DEBUG("mmaping of shader BOs not allowed.\n");
> - return ERR_PTR(-EINVAL);
> + return -EINVAL;
> }
>
> - return drm_gem_cma_prime_vmap(obj);
> + return drm_gem_cma_prime_vmap(obj, map);
> }
>
> struct drm_gem_object *
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index cc79b1aaa878..904f2c36c963 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach,
> struct sg_table *sgt);
> -void *vc4_prime_vmap(struct drm_gem_object *obj);
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> int vc4_bo_cache_init(struct drm_device *dev);
> void vc4_bo_cache_destroy(struct drm_device *dev);
> int vc4_bo_inc_usecnt(struct vc4_bo *bo);
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index fa54a6d1403d..b2aa26e1e4a2 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> return &obj->base;
> }
>
> -static void *vgem_prime_vmap(struct drm_gem_object *obj)
> +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
> long n_pages = obj->size >> PAGE_SHIFT;
> struct page **pages;
> + void *vaddr;
>
> pages = vgem_pin_pages(bo);
> if (IS_ERR(pages))
> - return NULL;
> + return PTR_ERR(pages);
> +
> + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
>
> - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + return 0;
> }
>
> -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> {
> struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> vgem_unpin_pages(bo);
> }
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4f34ef34ba60..74db5a840bed 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> return gem_mmap_obj(xen_obj, vma);
> }
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
> {
> struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> + void *vaddr;
>
> if (!xen_obj->pages)
> - return NULL;
> + return -ENOMEM;
>
> /* Please see comment in gem_mmap_obj on mapping and attributes. */
> - return vmap(xen_obj->pages, xen_obj->num_pages,
> - VM_MAP, PAGE_KERNEL);
> + vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
> + VM_MAP, PAGE_KERNEL);
> + if (!vaddr)
> + return -ENOMEM;
> + dma_buf_map_set_vaddr(map, vaddr);
> +
> + return 0;
> }
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr)
> + struct dma_buf_map *map)
> {
> - vunmap(vaddr);
> + vunmap(map->vaddr);
> }
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> index a39675fa31b2..a4e67d0a149c 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> @@ -12,6 +12,7 @@
> #define __XEN_DRM_FRONT_GEM_H
>
> struct dma_buf_attachment;
> +struct dma_buf_map;
> struct drm_device;
> struct drm_gem_object;
> struct file;
> @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
>
> int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
> + struct dma_buf_map *map);
>
> void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> - void *vaddr);
> + struct dma_buf_map *map);
>
> int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> struct vm_area_struct *vma);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index c38dd35da00b..5e6daa1c982f 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -39,6 +39,7 @@
>
> #include <drm/drm_vma_manager.h>
>
> +struct dma_buf_map;
> struct drm_gem_object;
>
> /**
> @@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void *(*vmap)(struct drm_gem_object *obj);
> + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @vunmap:
> @@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
> *
> * This callback is optional.
> */
> - void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
> + void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> /**
> * @mmap:
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index a064b0d1c480..caf98b9cf4b4 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
> struct sg_table *sgt);
> int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma);
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> struct drm_gem_object *
> drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5381f0c8cf6f..3449a0353fe0 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_object *obj);
> void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
>
> diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
> index 128f88174d32..c0d28ba0f5c9 100644
> --- a/include/drm/drm_gem_vram_helper.h
> +++ b/include/drm/drm_gem_vram_helper.h
> @@ -10,6 +10,7 @@
> #include <drm/ttm/ttm_bo_api.h>
> #include <drm/ttm/ttm_bo_driver.h>
>
> +#include <linux/dma-buf-map.h>
> #include <linux/kernel.h> /* for container_of() */
>
> struct drm_mode_create_dumb;
> @@ -29,9 +30,8 @@ struct vm_area_struct;
>
> /**
> * struct drm_gem_vram_object - GEM object backed by VRAM
> - * @gem: GEM object
> * @bo: TTM buffer object
> - * @kmap: Mapping information for @bo
> + * @map: Mapping information for @bo
> * @placement: TTM placement information. Supported placements are \
> %TTM_PL_VRAM and %TTM_PL_SYSTEM
> * @placements: TTM placement information.
> @@ -50,15 +50,15 @@ struct vm_area_struct;
> */
> struct drm_gem_vram_object {
> struct ttm_buffer_object bo;
> - struct ttm_bo_kmap_obj kmap;
> + struct dma_buf_map map;
>
> /**
> - * @kmap_use_count:
> + * @vmap_use_count:
> *
> * Reference count on the virtual address.
> * The address are un-mapped when the count reaches zero.
> */
> - unsigned int kmap_use_count;
> + unsigned int vmap_use_count;
>
> /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
> struct ttm_placement placement;
> @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
> s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
> int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
> int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
>
> int drm_gem_vram_fill_create_dumb(struct drm_file *file,
> struct drm_device *dev,
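
The `/* TODO: Use mapping abstraction properly */` comments in the hunks above mark call sites that still grab `map.vaddr` and assume system memory. Using the abstraction properly means branching on `is_iomem` before touching the buffer. A hedged userspace sketch of that consumer pattern (the struct layout and `memcpy_toio()` are modeled here for illustration; the real definitions live in `<linux/dma-buf-map.h>` and the kernel's I/O accessors):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for the kernel's struct dma_buf_map. */
struct dma_buf_map {
	void *vaddr;	/* a void __iomem * in the kernel when is_iomem is set */
	bool is_iomem;
};

/* memcpy_toio() stand-in; the real helper issues I/O-safe stores. */
static void fake_memcpy_toio(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
}

/*
 * Copy into a mapped buffer, honoring the I/O-memory flag instead of
 * blindly dereferencing map->vaddr as the TODO sites currently do.
 */
static void map_write(struct dma_buf_map *map, const void *src, size_t len)
{
	if (map->is_iomem)
		fake_memcpy_toio(map->vaddr, src, len);
	else
		memcpy(map->vaddr, src, len);
}
```

On sparc64 and similar platforms the two branches are not interchangeable, which is what motivates threading the struct through the vmap interfaces in the first place.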
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
2020-10-15 14:08 ` Christian König
` (3 preceding siblings ...)
(?)
@ 2020-10-15 16:49 ` Daniel Vetter
-1 siblings, 0 replies; 195+ messages in thread
From: Daniel Vetter @ 2020-10-15 16:49 UTC (permalink / raw)
To: Christian König
Cc: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang, dri-devel, amd-gfx, virtualization, etnaviv,
linux-arm-kernel, linux-samsung-soc, lima, nouveau, spice-devel,
linux-rockchip, xen-devel, linux-media, linaro-mm-sig
On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> > address space. The mapping's address is returned as struct dma_buf_map.
> > Each function is a simplified version of TTM's existing kmap code. Both
> > functions respect the memory's location and/or writecombine flags.
> >
> > On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > two helpers that convert a GEM object into the TTM BO and forward the call
> > to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM object
> > callbacks.
> >
> > v4:
> > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> > Christian)
>
> Bunch of minor comments below, but over all look very solid to me.
Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
cleanest. And then we can maybe push the combinatorial monster into
vmwgfx, which I think is the only user after this series. Or perhaps a
dedicated set of helpers to map an individual page (again using the
dma_buf_map stuff).
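
The helpers under discussion all revolve around the small tagged-pointer struct. A hedged userspace sketch of the pattern (names mirror the kernel's `<linux/dma-buf-map.h>`, but the `__iomem` annotation is dropped and this is an illustration, not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace model of struct dma_buf_map: one pointer, tagged by location. */
struct dma_buf_map {
	union {
		void *vaddr_iomem;	/* void __iomem * in the kernel */
		void *vaddr;
	};
	bool is_iomem;
};

static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
					       void *vaddr_iomem)
{
	map->vaddr_iomem = vaddr_iomem;
	map->is_iomem = true;
}

static inline bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
	return !map->vaddr;
}

static inline void dma_buf_map_clear(struct dma_buf_map *map)
{
	map->vaddr = NULL;
	map->is_iomem = false;
}
```

A vmap implementation fills the struct with one of the two setters; the caller then picks I/O or regular accessors based on `is_iomem`, which is exactly what per-page helpers would also key off.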
I'll leave the details to Christian, but at a high level this is
definitely
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Thanks a lot for doing all this.
-Daniel
>
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > include/drm/drm_gem_ttm_helper.h | 6 +++
> > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > include/linux/dma-buf-map.h | 20 ++++++++
> > 5 files changed, 164 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > index 0e4fb9ba43ad..db4c14d78a30 100644
> > --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > }
> > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> > +/**
> > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: [out] returns the dma-buf mapping.
> > + *
> > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + *
> > + * Returns:
> > + * 0 on success, or a negative errno code otherwise.
> > + */
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + return ttm_bo_vmap(bo, map);
> > +
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > +
> > +/**
> > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: dma-buf mapping.
> > + *
> > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> > + * &drm_gem_object_funcs.vunmap callback.
> > + */
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + ttm_bo_vunmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > +
> > /**
> > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > * @gem: GEM object.
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > index bdee4df1f3f2..80c42c774c7d 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -32,6 +32,7 @@
> > #include <drm/ttm/ttm_bo_driver.h>
> > #include <drm/ttm/ttm_placement.h>
> > #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> > #include <linux/io.h>
> > #include <linux/highmem.h>
> > #include <linux/wait.h>
> > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > }
> > EXPORT_SYMBOL(ttm_bo_kunmap);
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + struct ttm_resource *mem = &bo->mem;
> > + int ret;
> > +
> > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > + if (ret)
> > + return ret;
> > +
> > + if (mem->bus.is_iomem) {
> > + void __iomem *vaddr_iomem;
> > + unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
>
> We have a unit test that allocates an 8GB BO, and that should work on a 32bit
> machine as well :)
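
Christian's concern can be demonstrated numerically: with 32-bit `unsigned long`, an 8 GiB BO has 2097152 pages, and `num_pages << PAGE_SHIFT` wraps to zero. A minimal sketch, using fixed-width types to model the 32-bit and widened variants (illustrative only, not the patch code):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Models 'unsigned long size = bo->num_pages << PAGE_SHIFT' on 32-bit. */
static uint32_t size_32bit(uint32_t num_pages)
{
	return num_pages << PAGE_SHIFT;	/* wraps for BOs >= 4 GiB */
}

/* Models the suggested fix: widen before shifting. */
static uint64_t size_64bit(uint32_t num_pages)
{
	return (uint64_t)num_pages << PAGE_SHIFT;
}
```

For 2097152 pages (8 GiB at 4 KiB pages), the 32-bit form yields 0 while the widened form yields the intended 8589934592 bytes, which is why the cast has to happen before the shift.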
>
> > +
> > + if (mem->bus.addr)
> > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
>
> > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > + else
> > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > +
> > + if (!vaddr_iomem)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > +
> > + } else {
> > + struct ttm_operation_ctx ctx = {
> > + .interruptible = false,
> > + .no_wait_gpu = false
> > + };
> > + struct ttm_tt *ttm = bo->ttm;
> > + pgprot_t prot;
> > + void *vaddr;
> > +
> > + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
> > +
> > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * We need to use vmap to get the desired page protection
> > + * or to make the buffer object look contiguous.
> > + */
> > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should be
> trivial to adapt.
>
> Regards,
> Christian.
>
> > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > + if (!vaddr)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr(map, vaddr);
> > + }
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vmap);
> > +
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + if (dma_buf_map_is_null(map))
> > + return;
> > +
> > + if (map->is_iomem)
> > + iounmap(map->vaddr_iomem);
> > + else
> > + vunmap(map->vaddr);
> > + dma_buf_map_clear(map);
> > +
> > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > +
> > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > bool dst_use_tt)
> > {
> > diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> > index 118cef76f84f..7c6d874910b8 100644
> > --- a/include/drm/drm_gem_ttm_helper.h
> > +++ b/include/drm/drm_gem_ttm_helper.h
> > @@ -10,11 +10,17 @@
> > #include <drm/ttm/ttm_bo_api.h>
> > #include <drm/ttm/ttm_bo_driver.h>
> > +struct dma_buf_map;
> > +
> > #define drm_gem_ttm_of_gem(gem_obj) \
> > container_of(gem_obj, struct ttm_buffer_object, base)
> > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > const struct drm_gem_object *gem);
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > struct vm_area_struct *vma);
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 37102e45e496..2c59a785374c 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > struct ttm_bo_device;
> > +struct dma_buf_map;
> > +
> > struct drm_mm_node;
> > struct ttm_placement;
> > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
> > */
> > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > +/**
> > + * ttm_bo_vmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: pointer to a struct dma_buf_map representing the map.
> > + *
> > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > + * data in the buffer object. The parameter @map returns the virtual
> > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > + *
> > + * Returns
> > + * -ENOMEM: Out of memory.
> > + * -EINVAL: Invalid range.
> > + */
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > +/**
> > + * ttm_bo_vunmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: Object describing the map to unmap.
> > + *
> > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > + */
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > /**
> > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> > *
> > * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> > map->is_iomem = false;
> > }
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> > + * @map: The dma-buf mapping structure
> > + * @vaddr_iomem: An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > + void __iomem *vaddr_iomem)
> > +{
> > + map->vaddr_iomem = vaddr_iomem;
> > + map->is_iomem = true;
> > +}
> > +
> > /**
> > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> > * @lhs: The dma-buf mapping structure
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 16:49 ` Daniel Vetter
0 siblings, 0 replies; 195+ messages in thread
From: Daniel Vetter @ 2020-10-15 16:49 UTC (permalink / raw)
To: Christian König
Cc: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers
On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> > address space. The mapping's address is returned as struct dma_buf_map.
> > Each function is a simplified version of TTM's existing kmap code. Both
> > functions respect the memory's location ani/or writecombine flags.
> >
> > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > two helpers that convert a GEM object into the TTM BO and forward the call
> > to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object
> > callbacks.
> >
> > v4:
> > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> > Christian)
>
> Bunch of minor comments below, but over all look very solid to me.
Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
cleanest. And then we can maybe push the combinatorial monster into
vmwgfx, which I think is the only user after this series. Or perhaps a
dedicated set of helpers to map an invidual page (again using the
dma_buf_map stuff).
I'll let Christian with the details, but at a high level this is
definitely
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Thanks a lot for doing all this.
-Daniel
>
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > include/drm/drm_gem_ttm_helper.h | 6 +++
> > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > include/linux/dma-buf-map.h | 20 ++++++++
> > 5 files changed, 164 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > index 0e4fb9ba43ad..db4c14d78a30 100644
> > --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > }
> > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> > +/**
> > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: [out] returns the dma-buf mapping.
> > + *
> > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + *
> > + * Returns:
> > + * 0 on success, or a negative errno code otherwise.
> > + */
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + return ttm_bo_vmap(bo, map);
> > +
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > +
> > +/**
> > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: dma-buf mapping.
> > + *
> > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + */
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + ttm_bo_vunmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > +
> > /**
> > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > * @gem: GEM object.
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > index bdee4df1f3f2..80c42c774c7d 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -32,6 +32,7 @@
> > #include <drm/ttm/ttm_bo_driver.h>
> > #include <drm/ttm/ttm_placement.h>
> > #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> > #include <linux/io.h>
> > #include <linux/highmem.h>
> > #include <linux/wait.h>
> > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > }
> > EXPORT_SYMBOL(ttm_bo_kunmap);
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + struct ttm_resource *mem = &bo->mem;
> > + int ret;
> > +
> > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > + if (ret)
> > + return ret;
> > +
> > + if (mem->bus.is_iomem) {
> > + void __iomem *vaddr_iomem;
> > + unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
>
> We have an unit tests of allocating a 8GB BO and that should work on a 32bit
> machine as well :)
>
> > +
> > + if (mem->bus.addr)
> > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
>
> > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > + else
> > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > +
> > + if (!vaddr_iomem)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > +
> > + } else {
> > + struct ttm_operation_ctx ctx = {
> > + .interruptible = false,
> > + .no_wait_gpu = false
> > + };
> > + struct ttm_tt *ttm = bo->ttm;
> > + pgprot_t prot;
> > + void *vaddr;
> > +
> > + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
> > +
> > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * We need to use vmap to get the desired page protection
> > + * or to make the buffer object look contiguous.
> > + */
> > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should be
> trivial to adapt.
>
> Regards,
> Christian.
>
> > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > + if (!vaddr)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr(map, vaddr);
> > + }
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vmap);
> > +
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + if (dma_buf_map_is_null(map))
> > + return;
> > +
> > + if (map->is_iomem)
> > + iounmap(map->vaddr_iomem);
> > + else
> > + vunmap(map->vaddr);
> > + dma_buf_map_clear(map);
> > +
> > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > +
> > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > bool dst_use_tt)
> > {
> > diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> > index 118cef76f84f..7c6d874910b8 100644
> > --- a/include/drm/drm_gem_ttm_helper.h
> > +++ b/include/drm/drm_gem_ttm_helper.h
> > @@ -10,11 +10,17 @@
> > #include <drm/ttm/ttm_bo_api.h>
> > #include <drm/ttm/ttm_bo_driver.h>
> > +struct dma_buf_map;
> > +
> > #define drm_gem_ttm_of_gem(gem_obj) \
> > container_of(gem_obj, struct ttm_buffer_object, base)
> > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > const struct drm_gem_object *gem);
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > struct vm_area_struct *vma);
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 37102e45e496..2c59a785374c 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > struct ttm_bo_device;
> > +struct dma_buf_map;
> > +
> > struct drm_mm_node;
> > struct ttm_placement;
> > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
> > */
> > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > +/**
> > + * ttm_bo_vmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: pointer to a struct dma_buf_map representing the map.
> > + *
> > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > + * data in the buffer object. The parameter @map returns the virtual
> > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > + *
> > + * Returns
> > + * -ENOMEM: Out of memory.
> > + * -EINVAL: Invalid range.
> > + */
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > +/**
> > + * ttm_bo_vunmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: Object describing the map to unmap.
> > + *
> > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > + */
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > /**
> > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> > *
> > * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> > map->is_iomem = false;
> > }
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> > + * @map: The dma-buf mapping structure
> > + * @vaddr_iomem: An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > + void __iomem *vaddr_iomem)
> > +{
> > + map->vaddr_iomem = vaddr_iomem;
> > + map->is_iomem = true;
> > +}
> > +
> > /**
> > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> > * @lhs: The dma-buf mapping structure
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 16:49 ` Daniel Vetter
0 siblings, 0 replies; 195+ messages in thread
From: Daniel Vetter @ 2020-10-15 16:49 UTC (permalink / raw)
To: Christian König
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, bskeggs, linux+etnaviv, spice-devel, alyssa.rosenzweig,
daniel, maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825,
Thomas Zimmermann, alexander.deucher, linux-media, l.stach
On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> > address space. The mapping's address is returned as struct dma_buf_map.
> > Each function is a simplified version of TTM's existing kmap code. Both
> > functions respect the memory's location and/or writecombine flags.
> >
> > On top of TTM's functions, the GEM TTM helpers gain drm_gem_ttm_{vmap,vunmap}(),
> > two helpers that convert a GEM object into the TTM BO and forward the call
> > to TTM's vmap/vunmap. These helpers can be used as the respective GEM object
> > callbacks.
> >
> > v4:
> > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> > Christian)
>
> Bunch of minor comments below, but over all look very solid to me.
Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
cleanest. And then we can maybe push the combinatorial monster into
vmwgfx, which I think is the only user after this series. Or perhaps a
dedicated set of helpers to map an individual page (again using the
dma_buf_map stuff).
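For readers following the dma_buf_map discussion, the type's semantics can be
sketched as a minimal userspace model. This mirrors the union-plus-flag layout
of include/linux/dma-buf-map.h but drops the __iomem annotation; it is
illustrative only, not the kernel code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal userspace model of struct dma_buf_map (illustrative only;
 * the kernel type uses void __iomem * for the I/O-memory pointer). */
struct dma_buf_map {
	union {
		void *vaddr_iomem;	/* I/O-memory address */
		void *vaddr;		/* system-memory address */
	};
	bool is_iomem;
};

/* Store a system-memory address and clear the I/O-memory flag. */
static void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* Store an I/O-memory address and set the I/O-memory flag. */
static void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
					void *vaddr_iomem)
{
	map->vaddr_iomem = vaddr_iomem;
	map->is_iomem = true;
}

/* A cleared mapping has a NULL address. */
static bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
	return map->vaddr == NULL;
}

/* Reset the mapping, e.g. after a vunmap. */
static void dma_buf_map_clear(struct dma_buf_map *map)
{
	map->vaddr = NULL;
	map->is_iomem = false;
}
```

Callers such as ttm_bo_vunmap() only need to test is_iomem to pick between
iounmap() and vunmap(), which is the whole point of the abstraction.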
I'll let Christian handle the details, but at a high level this is
definitely
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Thanks a lot for doing all this.
-Daniel
>
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > include/drm/drm_gem_ttm_helper.h | 6 +++
> > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > include/linux/dma-buf-map.h | 20 ++++++++
> > 5 files changed, 164 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > index 0e4fb9ba43ad..db4c14d78a30 100644
> > --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > }
> > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> > +/**
> > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: [out] returns the dma-buf mapping.
> > + *
> > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + *
> > + * Returns:
> > + * 0 on success, or a negative errno code otherwise.
> > + */
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + return ttm_bo_vmap(bo, map);
> > +
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > +
> > +/**
> > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: dma-buf mapping.
> > + *
> > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> > + * &drm_gem_object_funcs.vunmap callback.
> > + */
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + ttm_bo_vunmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > +
> > /**
> > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > * @gem: GEM object.
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > index bdee4df1f3f2..80c42c774c7d 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -32,6 +32,7 @@
> > #include <drm/ttm/ttm_bo_driver.h>
> > #include <drm/ttm/ttm_placement.h>
> > #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> > #include <linux/io.h>
> > #include <linux/highmem.h>
> > #include <linux/wait.h>
> > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > }
> > EXPORT_SYMBOL(ttm_bo_kunmap);
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + struct ttm_resource *mem = &bo->mem;
> > + int ret;
> > +
> > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > + if (ret)
> > + return ret;
> > +
> > + if (mem->bus.is_iomem) {
> > + void __iomem *vaddr_iomem;
> > + unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
>
> We have a unit test that allocates an 8GB BO, and that should work on a 32-bit
> machine as well :)
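The overflow Christian is pointing at can be demonstrated in isolation. The
sketch below assumes 4 KiB pages (PAGE_SHIFT of 12) and uses uint32_t to stand
in for a 32-bit unsigned long; the helper names are invented for the demo:

```c
#include <stdint.h>

#define DEMO_PAGE_SHIFT 12	/* assume 4 KiB pages */

/* What the patch does on a 32-bit machine: the shift of a 32-bit
 * 'unsigned long' wraps modulo 2^32, so an 8 GiB BO becomes size 0. */
static uint32_t bo_size_32bit(uint32_t num_pages)
{
	return num_pages << DEMO_PAGE_SHIFT;
}

/* What Christian asks for: widen to 64 bits before shifting, so the
 * byte size survives even when the native word is 32 bits. */
static uint64_t bo_size_64bit(uint32_t num_pages)
{
	return (uint64_t)num_pages << DEMO_PAGE_SHIFT;
}
```

An 8 GiB BO has 2097152 pages of 4 KiB; the 32-bit variant yields 0 while the
widened variant yields the correct 8589934592 bytes.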
>
> > +
> > + if (mem->bus.addr)
> > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
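With the TTM_PL_FLAG_WC placement flag gone, the decision the patch makes via
mem->placement would presumably move to the new caching field. A hedged
userspace sketch of that dispatch follows; the enum values mirror
drm-misc-next's ttm_caching, but use_ioremap_wc() is a name invented for this
illustration, not a TTM helper:

```c
#include <stdbool.h>

/* Values mirror drm-misc-next's enum ttm_caching. */
enum ttm_caching {
	ttm_uncached,
	ttm_write_combined,
	ttm_cached,
};

/* Hypothetical predicate: should the vmap path use ioremap_wc() rather
 * than plain ioremap() for an I/O-memory buffer? */
static bool use_ioremap_wc(enum ttm_caching caching)
{
	return caching == ttm_write_combined;
}
```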
>
> > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > + else
> > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > +
> > + if (!vaddr_iomem)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > +
> > + } else {
> > + struct ttm_operation_ctx ctx = {
> > + .interruptible = false,
> > + .no_wait_gpu = false
> > + };
> > + struct ttm_tt *ttm = bo->ttm;
> > + pgprot_t prot;
> > + void *vaddr;
> > +
> > + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
> > +
> > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * We need to use vmap to get the desired page protection
> > + * or to make the buffer object look contiguous.
> > + */
> > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should be
> trivial to adapt.
>
> Regards,
> Christian.
>
> > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > + if (!vaddr)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr(map, vaddr);
> > + }
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vmap);
> > +
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + if (dma_buf_map_is_null(map))
> > + return;
> > +
> > + if (map->is_iomem)
> > + iounmap(map->vaddr_iomem);
> > + else
> > + vunmap(map->vaddr);
> > + dma_buf_map_clear(map);
> > +
> > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > +
> > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > bool dst_use_tt)
> > {
> > diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> > index 118cef76f84f..7c6d874910b8 100644
> > --- a/include/drm/drm_gem_ttm_helper.h
> > +++ b/include/drm/drm_gem_ttm_helper.h
> > @@ -10,11 +10,17 @@
> > #include <drm/ttm/ttm_bo_api.h>
> > #include <drm/ttm/ttm_bo_driver.h>
> > +struct dma_buf_map;
> > +
> > #define drm_gem_ttm_of_gem(gem_obj) \
> > container_of(gem_obj, struct ttm_buffer_object, base)
> > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > const struct drm_gem_object *gem);
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > struct vm_area_struct *vma);
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 37102e45e496..2c59a785374c 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > struct ttm_bo_device;
> > +struct dma_buf_map;
> > +
> > struct drm_mm_node;
> > struct ttm_placement;
> > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
> > */
> > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > +/**
> > + * ttm_bo_vmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: pointer to a struct dma_buf_map representing the map.
> > + *
> > + * Sets up a kernel virtual mapping, using ioremap or vmap, of the
> > + * data in the buffer object. The parameter @map returns the virtual
> > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > + *
> > + * Returns
> > + * -ENOMEM: Out of memory.
> > + * -EINVAL: Invalid range.
> > + */
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > +/**
> > + * ttm_bo_vunmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: Object describing the map to unmap.
> > + *
> > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > + */
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > /**
> > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> > *
> > * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> > map->is_iomem = false;
> > }
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> > + * @map: The dma-buf mapping structure
> > + * @vaddr_iomem: An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > + void __iomem *vaddr_iomem)
> > +{
> > + map->vaddr_iomem = vaddr_iomem;
> > + map->is_iomem = true;
> > +}
> > +
> > /**
> > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> > * @lhs: The dma-buf mapping structure
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 16:49 ` Daniel Vetter
0 siblings, 0 replies; 195+ messages in thread
From: Daniel Vetter @ 2020-10-15 16:49 UTC (permalink / raw)
To: Christian König
Cc: luben.tuikov, airlied, nouveau, dri-devel, chris, melissa.srw,
ray.huang, kraxel, sam, emil.velikov, linux-samsung-soc,
jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, kgene, bskeggs, linux+etnaviv, spice-devel,
alyssa.rosenzweig, etnaviv, hdegoede, xen-devel, virtualization,
sean, apaneers, linux-arm-kernel, linaro-mm-sig, amd-gfx,
tomeu.vizoso, sw0312.kim, hjc, kyungmin.park, miaoqinglang,
yuq825, Thomas Zimmermann, alexander.deucher, linux-media
On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> > address space. The mapping's address is returned as struct dma_buf_map.
> > Each function is a simplified version of TTM's existing kmap code. Both
> > functions respect the memory's location ani/or writecombine flags.
> >
> > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > two helpers that convert a GEM object into the TTM BO and forward the call
> > to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object
> > callbacks.
> >
> > v4:
> > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> > Christian)
>
> Bunch of minor comments below, but over all look very solid to me.
Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
cleanest. And then we can maybe push the combinatorial monster into
vmwgfx, which I think is the only user after this series. Or perhaps a
dedicated set of helpers to map an invidual page (again using the
dma_buf_map stuff).
I'll let Christian with the details, but at a high level this is
definitely
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Thanks a lot for doing all this.
-Daniel
>
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > include/drm/drm_gem_ttm_helper.h | 6 +++
> > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > include/linux/dma-buf-map.h | 20 ++++++++
> > 5 files changed, 164 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > index 0e4fb9ba43ad..db4c14d78a30 100644
> > --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > }
> > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> > +/**
> > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: [out] returns the dma-buf mapping.
> > + *
> > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + *
> > + * Returns:
> > + * 0 on success, or a negative errno code otherwise.
> > + */
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + return ttm_bo_vmap(bo, map);
> > +
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > +
> > +/**
> > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: dma-buf mapping.
> > + *
> > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + */
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + ttm_bo_vunmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > +
> > /**
> > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > * @gem: GEM object.
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > index bdee4df1f3f2..80c42c774c7d 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -32,6 +32,7 @@
> > #include <drm/ttm/ttm_bo_driver.h>
> > #include <drm/ttm/ttm_placement.h>
> > #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> > #include <linux/io.h>
> > #include <linux/highmem.h>
> > #include <linux/wait.h>
> > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > }
> > EXPORT_SYMBOL(ttm_bo_kunmap);
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + struct ttm_resource *mem = &bo->mem;
> > + int ret;
> > +
> > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > + if (ret)
> > + return ret;
> > +
> > + if (mem->bus.is_iomem) {
> > + void __iomem *vaddr_iomem;
> > + unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
>
> We have an unit tests of allocating a 8GB BO and that should work on a 32bit
> machine as well :)
>
> > +
> > + if (mem->bus.addr)
> > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
>
> > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > + else
> > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > +
> > + if (!vaddr_iomem)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > +
> > + } else {
> > + struct ttm_operation_ctx ctx = {
> > + .interruptible = false,
> > + .no_wait_gpu = false
> > + };
> > + struct ttm_tt *ttm = bo->ttm;
> > + pgprot_t prot;
> > + void *vaddr;
> > +
> > + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
> > +
> > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * We need to use vmap to get the desired page protection
> > + * or to make the buffer object look contiguous.
> > + */
> > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should be
> trivial to adapt.
>
> Regards,
> Christian.
>
> > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > + if (!vaddr)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr(map, vaddr);
> > + }
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vmap);
> > +
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + if (dma_buf_map_is_null(map))
> > + return;
> > +
> > + if (map->is_iomem)
> > + iounmap(map->vaddr_iomem);
> > + else
> > + vunmap(map->vaddr);
> > + dma_buf_map_clear(map);
> > +
> > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > +
> > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > bool dst_use_tt)
> > {
> > diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> > index 118cef76f84f..7c6d874910b8 100644
> > --- a/include/drm/drm_gem_ttm_helper.h
> > +++ b/include/drm/drm_gem_ttm_helper.h
> > @@ -10,11 +10,17 @@
> > #include <drm/ttm/ttm_bo_api.h>
> > #include <drm/ttm/ttm_bo_driver.h>
> > +struct dma_buf_map;
> > +
> > #define drm_gem_ttm_of_gem(gem_obj) \
> > container_of(gem_obj, struct ttm_buffer_object, base)
> > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > const struct drm_gem_object *gem);
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > struct vm_area_struct *vma);
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 37102e45e496..2c59a785374c 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > struct ttm_bo_device;
> > +struct dma_buf_map;
> > +
> > struct drm_mm_node;
> > struct ttm_placement;
> > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
> > */
> > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > +/**
> > + * ttm_bo_vmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: pointer to a struct dma_buf_map representing the map.
> > + *
> > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > + * data in the buffer object. The parameter @map returns the virtual
> > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > + *
> > + * Returns
> > + * -ENOMEM: Out of memory.
> > + * -EINVAL: Invalid range.
> > + */
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > +/**
> > + * ttm_bo_vunmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: Object describing the map to unmap.
> > + *
> > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > + */
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > /**
> > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> > *
> > * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> > map->is_iomem = false;
> > }
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> > + * @map: The dma-buf mapping structure
> > + * @vaddr_iomem: An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > + void __iomem *vaddr_iomem)
> > +{
> > + map->vaddr_iomem = vaddr_iomem;
> > + map->is_iomem = true;
> > +}
> > +
> > /**
> > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> > * @lhs: The dma-buf mapping structure
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
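For illustration, the tagged-pointer idea underlying struct dma_buf_map can be sketched in plain userspace C. This is a simplified model, not the kernel header; the names (map_model, map_set_vaddr, ...) are abbreviated stand-ins:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified userspace model of the kernel's struct dma_buf_map:
 * one pointer plus a flag saying whether it refers to I/O memory. */
struct map_model {
	union {
		void *vaddr;        /* system memory */
		void *vaddr_iomem;  /* I/O memory (void __iomem * in the kernel) */
	};
	bool is_iomem;
};

static inline void map_set_vaddr(struct map_model *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

static inline void map_set_vaddr_iomem(struct map_model *map, void *vaddr_iomem)
{
	map->vaddr_iomem = vaddr_iomem;
	map->is_iomem = true;
}

static inline bool map_is_null(const struct map_model *map)
{
	return map->vaddr == NULL;
}
```

Callers branch on is_iomem once, at access time, which is what would let a per-page helper reuse the same return type as vmap.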
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
2020-10-15 16:49 ` Daniel Vetter
@ 2020-10-15 17:52 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 17:52 UTC (permalink / raw)
To: Daniel Vetter
Cc: Christian König, luben.tuikov, airlied, nouveau, dri-devel,
chris, melissa.srw, ray.huang, kraxel, sam, emil.velikov,
linux-samsung-soc, jy0922.shim, lima, oleksandr_andrushchenko,
krzk, steven.price, linux-rockchip, kgene, bskeggs,
linux+etnaviv, spice-devel, alyssa.rosenzweig, etnaviv, hdegoede,
xen-devel, virtualization, sean, apaneers, linux-arm-kernel,
linaro-mm-sig, amd-gfx, tomeu.vizoso, sw0312.kim, hjc,
kyungmin.park, miaoqinglang, yuq825, alexander.deucher,
linux-media
Hi
On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter <daniel@ffwll.ch> wrote:
> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
> > Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in
> > > kernel address space. The mapping's address is returned as struct
> > > dma_buf_map. Each function is a simplified version of TTM's existing
> > > kmap code. Both functions respect the memory's location and/or
> > > writecombine flags.
> > >
> > > On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > > two helpers that convert a GEM object into the TTM BO and forward the
> > > call to TTM's vmap/vunmap. These helpers can be dropped into the
> > > respective GEM object callbacks.
> > >
> > > v4:
> > > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers
> > > (Daniel, Christian)
> >
> > Bunch of minor comments below, but over all look very solid to me.
>
> Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
> cleanest. And then we can maybe push the combinatorial monster into
> vmwgfx, which I think is the only user after this series. Or perhaps a
> dedicated set of helpers to map an individual page (again using the
> dma_buf_map stuff).
From a quick look, I'd say it should be possible to have the same interface
for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map).
All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be
killed off entirely.
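As a toy userspace sketch of that idea (hypothetical names, not the kernel API): because the map records is_iomem itself, a single unmap entry point can dispatch correctly with no separate kmap-object state. The toy_* functions below merely count which path was taken:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy stand-ins for the two kernel unmap primitives; counters let a
 * caller observe which path was taken. */
static int iounmap_calls, vunmap_calls;
static void toy_iounmap(void *p) { free(p); iounmap_calls++; }
static void toy_vunmap(void *p)  { free(p); vunmap_calls++; }

struct toy_map {
	void *vaddr;
	bool is_iomem;
};

/* One unmap path for both memory types, mirroring ttm_bo_vunmap():
 * everything needed is deduced from the map itself. */
static void toy_unmap(struct toy_map *map)
{
	if (!map->vaddr)
		return;
	if (map->is_iomem)
		toy_iounmap(map->vaddr);
	else
		toy_vunmap(map->vaddr);
	map->vaddr = NULL;
	map->is_iomem = false;
}
```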
Best regards
Thomas
>
> I'll leave the details to Christian, but at a high level this is
> definitely
>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> Thanks a lot for doing all this.
> -Daniel
>
> >
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > ---
> > > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > > include/drm/drm_gem_ttm_helper.h | 6 +++
> > > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > > include/linux/dma-buf-map.h | 20 ++++++++
> > > 5 files changed, 164 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30
> > > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
> > > unsigned int indent, }
> > > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> > > +/**
> > > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > > + * @gem: GEM object.
> > > + * @map: [out] returns the dma-buf mapping.
> > > + *
> > > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > > + * &drm_gem_object_funcs.vmap callback.
> > > + *
> > > + * Returns:
> > > + * 0 on success, or a negative errno code otherwise.
> > > + */
> > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map)
> > > +{
> > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > > +
> > > + return ttm_bo_vmap(bo, map);
> > > +
> > > +}
> > > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > > +
> > > +/**
> > > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > > + * @gem: GEM object.
> > > + * @map: dma-buf mapping.
> > > + *
> > > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used
> > > as
> > > + * &drm_gem_object_funcs.vunmap callback.
> > > + */
> > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map)
> > > +{
> > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > > +
> > > + ttm_bo_vunmap(bo, map);
> > > +}
> > > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > > +
> > > /**
> > > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > > * @gem: GEM object.
> > > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d
> > > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > @@ -32,6 +32,7 @@
> > > #include <drm/ttm/ttm_bo_driver.h>
> > > #include <drm/ttm/ttm_placement.h>
> > > #include <drm/drm_vma_manager.h>
> > > +#include <linux/dma-buf-map.h>
> > > #include <linux/io.h>
> > > #include <linux/highmem.h>
> > > #include <linux/wait.h>
> > > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > > }
> > > EXPORT_SYMBOL(ttm_bo_kunmap);
> > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > +{
> > > + struct ttm_resource *mem = &bo->mem;
> > > + int ret;
> > > +
> > > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + if (mem->bus.is_iomem) {
> > > + void __iomem *vaddr_iomem;
> > > + unsigned long size = bo->num_pages << PAGE_SHIFT;
> >
> > Please use uint64_t here and make sure to cast bo->num_pages before
> > shifting.
> >
> > We have a unit test that allocates an 8GB BO, and that should work on a
> > 32-bit machine as well :)
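To make the overflow concrete, here is a small userspace illustration (TOY_PAGE_SHIFT of 12 is an assumption, i.e. 4 KiB pages): an 8 GiB BO has 2097152 pages, and shifting that count in 32-bit arithmetic wraps, while widening before the shift gives the right size.

```c
#include <stdint.h>

#define TOY_PAGE_SHIFT 12 /* 4 KiB pages, as on most architectures */

/* What 'unsigned long' arithmetic does on a 32-bit machine: the shift
 * is performed in 32 bits and can wrap for large BOs. */
static uint32_t bo_size_32(uint32_t num_pages)
{
	return num_pages << TOY_PAGE_SHIFT;
}

/* The requested fix: widen to 64 bits before shifting. */
static uint64_t bo_size_64(uint32_t num_pages)
{
	return (uint64_t)num_pages << TOY_PAGE_SHIFT;
}
```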
> >
> > > +
> > > + if (mem->bus.addr)
> > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > > + else if (mem->placement & TTM_PL_FLAG_WC)
> >
> > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> > mem->bus.caching enum as replacement.
> >
> > > + vaddr_iomem = ioremap_wc(mem->bus.offset,
> > > size);
> > > + else
> > > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > > +
> > > + if (!vaddr_iomem)
> > > + return -ENOMEM;
> > > +
> > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > > +
> > > + } else {
> > > + struct ttm_operation_ctx ctx = {
> > > + .interruptible = false,
> > > + .no_wait_gpu = false
> > > + };
> > > + struct ttm_tt *ttm = bo->ttm;
> > > + pgprot_t prot;
> > > + void *vaddr;
> > > +
> > > + BUG_ON(!ttm);
> >
> > I think we can drop this, populate will just crash badly anyway.
> >
> > > +
> > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + /*
> > > + * We need to use vmap to get the desired page
> > > protection
> > > + * or to make the buffer object look contiguous.
> > > + */
> > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> >
> > The calling convention has changed on drm-misc-next as well, but should be
> > trivial to adapt.
> >
> > Regards,
> > Christian.
> >
> > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > > + if (!vaddr)
> > > + return -ENOMEM;
> > > +
> > > + dma_buf_map_set_vaddr(map, vaddr);
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +EXPORT_SYMBOL(ttm_bo_vmap);
> > > +
> > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > *map) +{
> > > + if (dma_buf_map_is_null(map))
> > > + return;
> > > +
> > > + if (map->is_iomem)
> > > + iounmap(map->vaddr_iomem);
> > > + else
> > > + vunmap(map->vaddr);
> > > + dma_buf_map_clear(map);
> > > +
> > > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > > +}
> > > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > > +
> > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > > bool dst_use_tt)
> > > {
> > > diff --git a/include/drm/drm_gem_ttm_helper.h
> > > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8
> > > 100644 --- a/include/drm/drm_gem_ttm_helper.h
> > > +++ b/include/drm/drm_gem_ttm_helper.h
> > > @@ -10,11 +10,17 @@
> > > #include <drm/ttm/ttm_bo_api.h>
> > > #include <drm/ttm/ttm_bo_driver.h>
> > > +struct dma_buf_map;
> > > +
> > > #define drm_gem_ttm_of_gem(gem_obj) \
> > > container_of(gem_obj, struct ttm_buffer_object, base)
> > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
> > > indent, const struct drm_gem_object *gem);
> > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map);
> > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map);
> > > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > > struct vm_area_struct *vma);
> > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > > index 37102e45e496..2c59a785374c 100644
> > > --- a/include/drm/ttm/ttm_bo_api.h
> > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > > struct ttm_bo_device;
> > > +struct dma_buf_map;
> > > +
> > > struct drm_mm_node;
> > > struct ttm_placement;
> > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > > unsigned long start_page, */
> > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > +/**
> > > + * ttm_bo_vmap
> > > + *
> > > + * @bo: The buffer object.
> > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > + *
> > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > > + * data in the buffer object. The parameter @map returns the virtual
> > > + * address as struct dma_buf_map. Unmap the buffer with
> > > ttm_bo_vunmap().
> > > + *
> > > + * Returns
> > > + * -ENOMEM: Out of memory.
> > > + * -EINVAL: Invalid range.
> > > + */
> > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > +
> > > +/**
> > > + * ttm_bo_vunmap
> > > + *
> > > + * @bo: The buffer object.
> > > + * @map: Object describing the map to unmap.
> > > + *
> > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > + */
> > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > *map); +
> > > /**
> > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > > *
> > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > index fd1aba545fdf..2e8bbecb5091 100644
> > > --- a/include/linux/dma-buf-map.h
> > > +++ b/include/linux/dma-buf-map.h
> > > @@ -45,6 +45,12 @@
> > > *
> > > * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > > *
> > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > + *
> > > + * .. code-block:: c
> > > + *
> > > + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > > + *
> > > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > * dma_buf_map_is_null().
> > > *
> > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > dma_buf_map *map, void *vaddr) map->is_iomem = false;
> > > }
> > > +/**
> > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > an address in I/O memory
> > > + * @map: The dma-buf mapping structure
> > > + * @vaddr_iomem: An I/O-memory address
> > > + *
> > > + * Sets the address and the I/O-memory flag.
> > > + */
> > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > + void __iomem
> > > *vaddr_iomem) +{
> > > + map->vaddr_iomem = vaddr_iomem;
> > > + map->is_iomem = true;
> > > +}
> > > +
> > > /**
> > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
> > > equality
> > > * @lhs: The dma-buf mapping structure
> >
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
> > > unsigned long start_page, */
> > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > +/**
> > > + * ttm_bo_vmap
> > > + *
> > > + * @bo: The buffer object.
> > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > + *
> > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > > + * data in the buffer object. The parameter @map returns the virtual
> > > + * address as struct dma_buf_map. Unmap the buffer with
> > > ttm_bo_vunmap().
> > > + *
> > > + * Returns
> > > + * -ENOMEM: Out of memory.
> > > + * -EINVAL: Invalid range.
> > > + */
> > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > +
> > > +/**
> > > + * ttm_bo_vunmap
> > > + *
> > > + * @bo: The buffer object.
> > > + * @map: Object describing the map to unmap.
> > > + *
> > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > + */
> > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > *map); +
> > > /**
> > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > > *
> > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > index fd1aba545fdf..2e8bbecb5091 100644
> > > --- a/include/linux/dma-buf-map.h
> > > +++ b/include/linux/dma-buf-map.h
> > > @@ -45,6 +45,12 @@
> > > *
> > > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
> > > *
> > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > + *
> > > + * .. code-block:: c
> > > + *
> > > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > > + *
> > > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > * dma_buf_map_is_null().
> > > *
> > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > dma_buf_map *map, void *vaddr) map->is_iomem = false;
> > > }
> > > +/**
> > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > an address in I/O memory
> > > + * @map: The dma-buf mapping structure
> > > + * @vaddr_iomem: An I/O-memory address
> > > + *
> > > + * Sets the address and the I/O-memory flag.
> > > + */
> > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > + void __iomem
> > > *vaddr_iomem) +{
> > > + map->vaddr_iomem = vaddr_iomem;
> > > + map->is_iomem = true;
> > > +}
> > > +
> > > /**
> > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
> > > equality
> > > * @lhs: The dma-buf mapping structure
> >
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 17:52 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 17:52 UTC (permalink / raw)
To: Daniel Vetter
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
kraxel, yuq825, sam, emil.velikov, linux-samsung-soc,
jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, luben.tuikov, bskeggs, linux+etnaviv,
spice-devel, alyssa.rosenzweig, etnaviv, linaro-mm-sig, hdegoede,
xen-devel, virtualization, sean, apaneers, linux-arm-kernel,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, kgene, alexander.deucher, linux-media,
Christian König
Hi
On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter <daniel@ffwll.ch> wrote:
> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
> > Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in
> > > kernel address space. The mapping's address is returned as struct
> > > dma_buf_map. Each function is a simplified version of TTM's existing
> > > kmap code. Both functions respect the memory's location and/or
> > > writecombine flags.
> > >
> > > On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > > two helpers that convert a GEM object into the TTM BO and forward the
> > > call to TTM's vmap/vunmap. These helpers can be dropped into the respective
> > > GEM object callbacks.
> > >
> > > v4:
> > > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers
> > > (Daniel, Christian)
> >
> > Bunch of minor comments below, but overall this looks very solid to me.
>
> Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
> cleanest. And then we can maybe push the combinatorial monster into
> vmwgfx, which I think is the only user after this series. Or perhaps a
> dedicated set of helpers to map an individual page (again using the
> dma_buf_map stuff).
From a quick look, I'd say it should be possible to have the same interface
for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map).
All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be
killed off entirely.
Best regards
Thomas
>
> I'll let Christian handle the details, but at a high level this is
> definitely
>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> Thanks a lot for doing all this.
> -Daniel
>
> >
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > ---
> > > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > > include/drm/drm_gem_ttm_helper.h | 6 +++
> > > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > > include/linux/dma-buf-map.h | 20 ++++++++
> > > 5 files changed, 164 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30
> > > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
> > > unsigned int indent, }
> > > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> > > +/**
> > > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > > + * @gem: GEM object.
> > > + * @map: [out] returns the dma-buf mapping.
> > > + *
> > > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > > + * &drm_gem_object_funcs.vmap callback.
> > > + *
> > > + * Returns:
> > > + * 0 on success, or a negative errno code otherwise.
> > > + */
> > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map)
> > > +{
> > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > > +
> > > + return ttm_bo_vmap(bo, map);
> > > +
> > > +}
> > > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > > +
> > > +/**
> > > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > > + * @gem: GEM object.
> > > + * @map: dma-buf mapping.
> > > + *
> > > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used
> > > as
> > > + * &drm_gem_object_funcs.vmap callback.
> > > + */
> > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map)
> > > +{
> > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > > +
> > > + ttm_bo_vunmap(bo, map);
> > > +}
> > > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > > +
> > > /**
> > > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > > * @gem: GEM object.
> > > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d
> > > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > @@ -32,6 +32,7 @@
> > > #include <drm/ttm/ttm_bo_driver.h>
> > > #include <drm/ttm/ttm_placement.h>
> > > #include <drm/drm_vma_manager.h>
> > > +#include <linux/dma-buf-map.h>
> > > #include <linux/io.h>
> > > #include <linux/highmem.h>
> > > #include <linux/wait.h>
> > > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > > }
> > > EXPORT_SYMBOL(ttm_bo_kunmap);
> > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > +{
> > > + struct ttm_resource *mem = &bo->mem;
> > > + int ret;
> > > +
> > > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + if (mem->bus.is_iomem) {
> > > + void __iomem *vaddr_iomem;
> > > + unsigned long size = bo->num_pages << PAGE_SHIFT;
> >
> > Please use uint64_t here and make sure to cast bo->num_pages before
> > shifting.
> >
> > We have a unit test that allocates an 8GB BO, and that should work on a
> > 32-bit machine as well :)
> >
> > > +
> > > + if (mem->bus.addr)
> > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > > + else if (mem->placement & TTM_PL_FLAG_WC)
> >
> > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> > mem->bus.caching enum as replacement.
> >
> > > + vaddr_iomem = ioremap_wc(mem->bus.offset,
> > > size);
> > > + else
> > > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > > +
> > > + if (!vaddr_iomem)
> > > + return -ENOMEM;
> > > +
> > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > > +
> > > + } else {
> > > + struct ttm_operation_ctx ctx = {
> > > + .interruptible = false,
> > > + .no_wait_gpu = false
> > > + };
> > > + struct ttm_tt *ttm = bo->ttm;
> > > + pgprot_t prot;
> > > + void *vaddr;
> > > +
> > > + BUG_ON(!ttm);
> >
> > I think we can drop this, populate will just crash badly anyway.
> >
> > > +
> > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + /*
> > > + * We need to use vmap to get the desired page
> > > protection
> > > + * or to make the buffer object look contiguous.
> > > + */
> > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> >
> > The calling convention has changed on drm-misc-next as well, but should be
> > trivial to adapt.
> >
> > Regards,
> > Christian.
> >
> > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > > + if (!vaddr)
> > > + return -ENOMEM;
> > > +
> > > + dma_buf_map_set_vaddr(map, vaddr);
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +EXPORT_SYMBOL(ttm_bo_vmap);
> > > +
> > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > *map) +{
> > > + if (dma_buf_map_is_null(map))
> > > + return;
> > > +
> > > + if (map->is_iomem)
> > > + iounmap(map->vaddr_iomem);
> > > + else
> > > + vunmap(map->vaddr);
> > > + dma_buf_map_clear(map);
> > > +
> > > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > > +}
> > > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > > +
> > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > > bool dst_use_tt)
> > > {
> > > diff --git a/include/drm/drm_gem_ttm_helper.h
> > > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8
> > > 100644 --- a/include/drm/drm_gem_ttm_helper.h
> > > +++ b/include/drm/drm_gem_ttm_helper.h
> > > @@ -10,11 +10,17 @@
> > > #include <drm/ttm/ttm_bo_api.h>
> > > #include <drm/ttm/ttm_bo_driver.h>
> > > +struct dma_buf_map;
> > > +
> > > #define drm_gem_ttm_of_gem(gem_obj) \
> > > container_of(gem_obj, struct ttm_buffer_object, base)
> > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
> > > indent, const struct drm_gem_object *gem);
> > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map);
> > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map);
> > > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > > struct vm_area_struct *vma);
> > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > > index 37102e45e496..2c59a785374c 100644
> > > --- a/include/drm/ttm/ttm_bo_api.h
> > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > > struct ttm_bo_device;
> > > +struct dma_buf_map;
> > > +
> > > struct drm_mm_node;
> > > struct ttm_placement;
> > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > > unsigned long start_page, */
> > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > +/**
> > > + * ttm_bo_vmap
> > > + *
> > > + * @bo: The buffer object.
> > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > + *
> > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > > + * data in the buffer object. The parameter @map returns the virtual
> > > + * address as struct dma_buf_map. Unmap the buffer with
> > > ttm_bo_vunmap().
> > > + *
> > > + * Returns
> > > + * -ENOMEM: Out of memory.
> > > + * -EINVAL: Invalid range.
> > > + */
> > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > +
> > > +/**
> > > + * ttm_bo_vunmap
> > > + *
> > > + * @bo: The buffer object.
> > > + * @map: Object describing the map to unmap.
> > > + *
> > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > + */
> > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > *map); +
> > > /**
> > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > > *
> > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > index fd1aba545fdf..2e8bbecb5091 100644
> > > --- a/include/linux/dma-buf-map.h
> > > +++ b/include/linux/dma-buf-map.h
> > > @@ -45,6 +45,12 @@
> > > *
> > > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
> > > *
> > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > + *
> > > + * .. code-block:: c
> > > + *
> > > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > > + *
> > > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > * dma_buf_map_is_null().
> > > *
> > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > dma_buf_map *map, void *vaddr) map->is_iomem = false;
> > > }
> > > +/**
> > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > an address in I/O memory
> > > + * @map: The dma-buf mapping structure
> > > + * @vaddr_iomem: An I/O-memory address
> > > + *
> > > + * Sets the address and the I/O-memory flag.
> > > + */
> > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > + void __iomem
> > > *vaddr_iomem) +{
> > > + map->vaddr_iomem = vaddr_iomem;
> > > + map->is_iomem = true;
> > > +}
> > > +
> > > /**
> > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
> > > equality
> > > * @lhs: The dma-buf mapping structure
> >
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 17:52 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 17:52 UTC (permalink / raw)
To: Daniel Vetter
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
kraxel, yuq825, sam, emil.velikov, linux-samsung-soc,
jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, luben.tuikov, bskeggs, linux+etnaviv,
spice-devel, alyssa.rosenzweig, etnaviv, linaro-mm-sig, hdegoede,
xen-devel, virtualization, sean, apaneers, linux-arm-kernel,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, kgene, alexander.deucher, linux-media,
Christian König
Hi
On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter <daniel@ffwll.ch> wrote:
> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
> > Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in
> > > kernel address space. The mapping's address is returned as struct
> > > dma_buf_map. Each function is a simplified version of TTM's existing
> > > kmap code. Both functions respect the memory's location ani/or
> > > writecombine flags.
> > >
> > > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > > two helpers that convert a GEM object into the TTM BO and forward the
> > > call to TTM's vmap/vunmap. These helpers can be dropped into the rsp
> > > GEM object callbacks.
> > >
> > > v4:
> > > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers
> > > (Daniel, Christian)
> >
> > Bunch of minor comments below, but over all look very solid to me.
>
> Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
> cleanest. And then we can maybe push the combinatorial monster into
> vmwgfx, which I think is the only user after this series. Or perhaps a
> dedicated set of helpers to map an invidual page (again using the
> dma_buf_map stuff).
From a quick look, I'd say it should be possible to have the same interface
for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map).
All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be
killed off entirely.
Best regards
Thomas
>
> I'll let Christian with the details, but at a high level this is
> definitely
>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> Thanks a lot for doing all this.
> -Daniel
>
> >
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > ---
> > > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > > include/drm/drm_gem_ttm_helper.h | 6 +++
> > > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > > include/linux/dma-buf-map.h | 20 ++++++++
> > > 5 files changed, 164 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30
> > > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
> > > unsigned int indent, }
> > > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> > > +/**
> > > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > > + * @gem: GEM object.
> > > + * @map: [out] returns the dma-buf mapping.
> > > + *
> > > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > > + * &drm_gem_object_funcs.vmap callback.
> > > + *
> > > + * Returns:
> > > + * 0 on success, or a negative errno code otherwise.
> > > + */
> > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map)
> > > +{
> > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > > +
> > > + return ttm_bo_vmap(bo, map);
> > > +
> > > +}
> > > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > > +
> > > +/**
> > > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > > + * @gem: GEM object.
> > > + * @map: dma-buf mapping.
> > > + *
> > > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used
> > > as
> > > + * &drm_gem_object_funcs.vunmap callback.
> > > + */
> > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map)
> > > +{
> > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > > +
> > > + ttm_bo_vunmap(bo, map);
> > > +}
> > > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > > +
> > > /**
> > > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > > * @gem: GEM object.
> > > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d
> > > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > @@ -32,6 +32,7 @@
> > > #include <drm/ttm/ttm_bo_driver.h>
> > > #include <drm/ttm/ttm_placement.h>
> > > #include <drm/drm_vma_manager.h>
> > > +#include <linux/dma-buf-map.h>
> > > #include <linux/io.h>
> > > #include <linux/highmem.h>
> > > #include <linux/wait.h>
> > > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > > }
> > > EXPORT_SYMBOL(ttm_bo_kunmap);
> > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > +{
> > > + struct ttm_resource *mem = &bo->mem;
> > > + int ret;
> > > +
> > > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + if (mem->bus.is_iomem) {
> > > + void __iomem *vaddr_iomem;
> > > + unsigned long size = bo->num_pages << PAGE_SHIFT;
> >
> > Please use uint64_t here and make sure to cast bo->num_pages before
> > shifting.
> >
> > We have a unit test that allocates an 8GB BO, and that should work on a
> > 32-bit machine as well :)
> >
> > > +
> > > + if (mem->bus.addr)
> > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > > + else if (mem->placement & TTM_PL_FLAG_WC)
> >
> > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> > mem->bus.caching enum as replacement.
> >
> > > + vaddr_iomem = ioremap_wc(mem->bus.offset,
> > > size);
> > > + else
> > > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > > +
> > > + if (!vaddr_iomem)
> > > + return -ENOMEM;
> > > +
> > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > > +
> > > + } else {
> > > + struct ttm_operation_ctx ctx = {
> > > + .interruptible = false,
> > > + .no_wait_gpu = false
> > > + };
> > > + struct ttm_tt *ttm = bo->ttm;
> > > + pgprot_t prot;
> > > + void *vaddr;
> > > +
> > > + BUG_ON(!ttm);
> >
> > I think we can drop this, populate will just crash badly anyway.
> >
> > > +
> > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + /*
> > > + * We need to use vmap to get the desired page
> > > protection
> > > + * or to make the buffer object look contiguous.
> > > + */
> > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> >
> > The calling convention has changed on drm-misc-next as well, but should be
> > trivial to adapt.
> >
> > Regards,
> > Christian.
> >
> > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > > + if (!vaddr)
> > > + return -ENOMEM;
> > > +
> > > + dma_buf_map_set_vaddr(map, vaddr);
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +EXPORT_SYMBOL(ttm_bo_vmap);
> > > +
> > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > *map) +{
> > > + if (dma_buf_map_is_null(map))
> > > + return;
> > > +
> > > + if (map->is_iomem)
> > > + iounmap(map->vaddr_iomem);
> > > + else
> > > + vunmap(map->vaddr);
> > > + dma_buf_map_clear(map);
> > > +
> > > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > > +}
> > > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > > +
> > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > > bool dst_use_tt)
> > > {
> > > diff --git a/include/drm/drm_gem_ttm_helper.h
> > > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8
> > > 100644 --- a/include/drm/drm_gem_ttm_helper.h
> > > +++ b/include/drm/drm_gem_ttm_helper.h
> > > @@ -10,11 +10,17 @@
> > > #include <drm/ttm/ttm_bo_api.h>
> > > #include <drm/ttm/ttm_bo_driver.h>
> > > +struct dma_buf_map;
> > > +
> > > #define drm_gem_ttm_of_gem(gem_obj) \
> > > container_of(gem_obj, struct ttm_buffer_object, base)
> > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
> > > indent, const struct drm_gem_object *gem);
> > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map);
> > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > + struct dma_buf_map *map);
> > > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > > struct vm_area_struct *vma);
> > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > > index 37102e45e496..2c59a785374c 100644
> > > --- a/include/drm/ttm/ttm_bo_api.h
> > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > > struct ttm_bo_device;
> > > +struct dma_buf_map;
> > > +
> > > struct drm_mm_node;
> > > struct ttm_placement;
> > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > > unsigned long start_page, */
> > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > +/**
> > > + * ttm_bo_vmap
> > > + *
> > > + * @bo: The buffer object.
> > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > + *
> > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > > + * data in the buffer object. The parameter @map returns the virtual
> > > + * address as struct dma_buf_map. Unmap the buffer with
> > > ttm_bo_vunmap().
> > > + *
> > > + * Returns
> > > + * -ENOMEM: Out of memory.
> > > + * -EINVAL: Invalid range.
> > > + */
> > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > +
> > > +/**
> > > + * ttm_bo_vunmap
> > > + *
> > > + * @bo: The buffer object.
> > > + * @map: Object describing the map to unmap.
> > > + *
> > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > + */
> > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > *map); +
> > > /**
> > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > > *
> > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > index fd1aba545fdf..2e8bbecb5091 100644
> > > --- a/include/linux/dma-buf-map.h
> > > +++ b/include/linux/dma-buf-map.h
> > > @@ -45,6 +45,12 @@
> > > *
> > > * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > > *
> > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > + *
> > > + * .. code-block:: c
> > > + *
> > > + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > > + *
> > > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > * dma_buf_map_is_null().
> > > *
> > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > dma_buf_map *map, void *vaddr) map->is_iomem = false;
> > > }
> > > +/**
> > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > an address in I/O memory
> > > + * @map: The dma-buf mapping structure
> > > + * @vaddr_iomem: An I/O-memory address
> > > + *
> > > + * Sets the address and the I/O-memory flag.
> > > + */
> > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > + void __iomem
> > > *vaddr_iomem) +{
> > > + map->vaddr_iomem = vaddr_iomem;
> > > + map->is_iomem = true;
> > > +}
> > > +
> > > /**
> > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
> > > equality
> > > * @lhs: The dma-buf mapping structure
> >
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
2020-10-15 14:08 ` Christian König
` (3 preceding siblings ...)
(?)
@ 2020-10-15 17:56 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 17:56 UTC (permalink / raw)
To: Christian König
Cc: maarten.lankhorst, mripard, airlied, daniel, sam,
alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang, linux-samsung-soc, lima, nouveau, etnaviv,
amd-gfx, virtualization, linaro-mm-sig, linux-rockchip,
dri-devel, xen-devel, spice-devel, linux-arm-kernel, linux-media
Hi
On Thu, 15 Oct 2020 16:08:13 +0200 Christian König <christian.koenig@amd.com>
wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> > address space. The mapping's address is returned as struct dma_buf_map.
> > Each function is a simplified version of TTM's existing kmap code. Both
> > functions respect the memory's location and/or writecombine flags.
> >
> > On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > two helpers that convert a GEM object into the TTM BO and forward the call
> > to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
> > object callbacks.
> >
> > v4:
> > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> > Christian)
>
> A bunch of minor comments below, but overall this looks very solid to me.
>
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > include/drm/drm_gem_ttm_helper.h | 6 +++
> > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > include/linux/dma-buf-map.h | 20 ++++++++
> > 5 files changed, 164 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30
> > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
> > unsigned int indent, }
> > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> >
> > +/**
> > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: [out] returns the dma-buf mapping.
> > + *
> > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > + * the &drm_gem_object_funcs.vmap callback.
> > + *
> > + * Returns:
> > + * 0 on success, or a negative errno code otherwise.
> > + */
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > +	return ttm_bo_vmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > +
> > +/**
> > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: dma-buf mapping.
> > + *
> > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> > + * the &drm_gem_object_funcs.vunmap callback.
> > + */
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + ttm_bo_vunmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > +
> > /**
> > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > * @gem: GEM object.
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d
> > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -32,6 +32,7 @@
> > #include <drm/ttm/ttm_bo_driver.h>
> > #include <drm/ttm/ttm_placement.h>
> > #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> > #include <linux/io.h>
> > #include <linux/highmem.h>
> > #include <linux/wait.h>
> > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > }
> > EXPORT_SYMBOL(ttm_bo_kunmap);
> >
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + struct ttm_resource *mem = &bo->mem;
> > + int ret;
> > +
> > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > + if (ret)
> > + return ret;
> > +
> > + if (mem->bus.is_iomem) {
> > + void __iomem *vaddr_iomem;
> > + unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
>
> We have a unit test that allocates an 8GB BO, and that should work on a
> 32-bit machine as well :)
>
> > +
> > + if (mem->bus.addr)
> > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
>
> > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > + else
> > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > +
> > + if (!vaddr_iomem)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > +
> > + } else {
> > + struct ttm_operation_ctx ctx = {
> > + .interruptible = false,
> > + .no_wait_gpu = false
> > + };
> > + struct ttm_tt *ttm = bo->ttm;
> > + pgprot_t prot;
> > + void *vaddr;
> > +
> > + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
> > +
> > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * We need to use vmap to get the desired page protection
> > + * or to make the buffer object look contiguous.
> > + */
> > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should
> be trivial to adapt.
Thanks for quickly reviewing these patches. My drm-tip seems out of date
(last Sunday). TTM is moving fast these days and I still have to get used to
that. :)
Best regards
Thomas
>
> Regards,
> Christian.
>
> > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > + if (!vaddr)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr(map, vaddr);
> > + }
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vmap);
> > +
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + if (dma_buf_map_is_null(map))
> > + return;
> > +
> > + if (map->is_iomem)
> > + iounmap(map->vaddr_iomem);
> > + else
> > + vunmap(map->vaddr);
> > + dma_buf_map_clear(map);
> > +
> > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > +
> > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > bool dst_use_tt)
> > {
> > diff --git a/include/drm/drm_gem_ttm_helper.h
> > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 100644
> > --- a/include/drm/drm_gem_ttm_helper.h
> > +++ b/include/drm/drm_gem_ttm_helper.h
> > @@ -10,11 +10,17 @@
> > #include <drm/ttm/ttm_bo_api.h>
> > #include <drm/ttm/ttm_bo_driver.h>
> >
> > +struct dma_buf_map;
> > +
> > #define drm_gem_ttm_of_gem(gem_obj) \
> > container_of(gem_obj, struct ttm_buffer_object, base)
> >
> > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > const struct drm_gem_object *gem);
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > struct vm_area_struct *vma);
> >
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 37102e45e496..2c59a785374c 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> >
> > struct ttm_bo_device;
> >
> > +struct dma_buf_map;
> > +
> > struct drm_mm_node;
> >
> > struct ttm_placement;
> > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > unsigned long start_page, */
> > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> >
> > +/**
> > + * ttm_bo_vmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: pointer to a struct dma_buf_map representing the map.
> > + *
> > + * Sets up a kernel virtual mapping of the data in the buffer object,
> > + * using ioremap or vmap. The parameter @map returns the virtual
> > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > + *
> > + * Returns
> > + * -ENOMEM: Out of memory.
> > + * -EINVAL: Invalid range.
> > + */
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > +/**
> > + * ttm_bo_vunmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: Object describing the map to unmap.
> > + *
> > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > + */
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > *map); +
> > /**
> > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> > *
> >   *	dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > + *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > dma_buf_map *map, void *vaddr) map->is_iomem = false;
> > }
> >
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an
> > address in I/O memory
> > + * @map: The dma-buf mapping structure
> > + * @vaddr_iomem: An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > + void __iomem *vaddr_iomem)
> > +{
> > + map->vaddr_iomem = vaddr_iomem;
> > + map->is_iomem = true;
> > +}
> > +
> > /**
> > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
> > equality
> > * @lhs: The dma-buf mapping structure
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 17:56 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 17:56 UTC (permalink / raw)
To: Christian König
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, sam, sumit.semwal,
emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, xen-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, spice-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, l.stach
Hi
On Thu, 15 Oct 2020 16:08:13 +0200 Christian König <christian.koenig@amd.com>
wrote:
> On 15.10.20 at 14:38, Thomas Zimmermann wrote:
> > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> > address space. The mapping's address is returned as struct dma_buf_map.
> > Each function is a simplified version of TTM's existing kmap code. Both
> > functions respect the memory's location and/or writecombine flags.
> >
> > On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > two helpers that convert a GEM object into the TTM BO and forward the call
> > to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM object
> > callbacks.
> >
> > v4:
> > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> > Christian)
>
> Bunch of minor comments below, but overall this looks very solid to me.
>
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > include/drm/drm_gem_ttm_helper.h | 6 +++
> > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > include/linux/dma-buf-map.h | 20 ++++++++
> > 5 files changed, 164 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > index 0e4fb9ba43ad..db4c14d78a30 100644
> > --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> >  }
> > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> >
> > +/**
> > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: [out] returns the dma-buf mapping.
> > + *
> > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + *
> > + * Returns:
> > + * 0 on success, or a negative errno code otherwise.
> > + */
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + return ttm_bo_vmap(bo, map);
> > +
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > +
> > +/**
> > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: dma-buf mapping.
> > + *
> > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> > + * &drm_gem_object_funcs.vunmap callback.
> > + */
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + ttm_bo_vunmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > +
> > /**
> > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > * @gem: GEM object.
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > index bdee4df1f3f2..80c42c774c7d 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -32,6 +32,7 @@
> > #include <drm/ttm/ttm_bo_driver.h>
> > #include <drm/ttm/ttm_placement.h>
> > #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> > #include <linux/io.h>
> > #include <linux/highmem.h>
> > #include <linux/wait.h>
> > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > }
> > EXPORT_SYMBOL(ttm_bo_kunmap);
> >
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + struct ttm_resource *mem = &bo->mem;
> > + int ret;
> > +
> > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > + if (ret)
> > + return ret;
> > +
> > + if (mem->bus.is_iomem) {
> > + void __iomem *vaddr_iomem;
> > + unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
>
> We have a unit test that allocates an 8GB BO, and that should work on a
> 32-bit machine as well :)
>
> > +
> > + if (mem->bus.addr)
> > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
>
> > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > + else
> > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > +
> > + if (!vaddr_iomem)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > +
> > + } else {
> > + struct ttm_operation_ctx ctx = {
> > + .interruptible = false,
> > + .no_wait_gpu = false
> > + };
> > + struct ttm_tt *ttm = bo->ttm;
> > + pgprot_t prot;
> > + void *vaddr;
> > +
> > + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
> > +
> > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * We need to use vmap to get the desired page protection
> > + * or to make the buffer object look contiguous.
> > + */
> > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should
> be trivial to adapt.
Thanks for quickly reviewing these patches. My drm-tip seems out of date
(last Sunday). TTM is moving fast these days and I still have to get used to
that. :)
Best regards
Thomas
>
> Regards,
> Christian.
>
> > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > + if (!vaddr)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr(map, vaddr);
> > + }
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vmap);
> > +
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + if (dma_buf_map_is_null(map))
> > + return;
> > +
> > + if (map->is_iomem)
> > + iounmap(map->vaddr_iomem);
> > + else
> > + vunmap(map->vaddr);
> > + dma_buf_map_clear(map);
> > +
> > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > +
> > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > bool dst_use_tt)
> > {
> > diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> > index 118cef76f84f..7c6d874910b8 100644
> > --- a/include/drm/drm_gem_ttm_helper.h
> > +++ b/include/drm/drm_gem_ttm_helper.h
> > @@ -10,11 +10,17 @@
> > #include <drm/ttm/ttm_bo_api.h>
> > #include <drm/ttm/ttm_bo_driver.h>
> >
> > +struct dma_buf_map;
> > +
> > #define drm_gem_ttm_of_gem(gem_obj) \
> > container_of(gem_obj, struct ttm_buffer_object, base)
> >
> > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > const struct drm_gem_object *gem);
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > struct vm_area_struct *vma);
> >
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 37102e45e496..2c59a785374c 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> >
> > struct ttm_bo_device;
> >
> > +struct dma_buf_map;
> > +
> > struct drm_mm_node;
> >
> > struct ttm_placement;
> > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
> >   */
> > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> >
> > +/**
> > + * ttm_bo_vmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: pointer to a struct dma_buf_map representing the map.
> > + *
> > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > + * data in the buffer object. The parameter @map returns the virtual
> > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > + *
> > + * Returns
> > + * -ENOMEM: Out of memory.
> > + * -EINVAL: Invalid range.
> > + */
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > +/**
> > + * ttm_bo_vunmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: Object describing the map to unmap.
> > + *
> > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > + */
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > /**
> > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> > *
> > * dma_buf_map_set_vaddr(&map, 0xdeadbeef);
> > *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeef);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> > 	map->is_iomem = false;
> > }
> >
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an
> > address in I/O memory
> > + * @map: The dma-buf mapping structure
> > + * @vaddr_iomem: An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > + void __iomem *vaddr_iomem)
> > +{
> > + map->vaddr_iomem = vaddr_iomem;
> > + map->is_iomem = true;
> > +}
> > +
> > /**
> > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
> > equality
> > * @lhs: The dma-buf mapping structure
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 17:56 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 17:56 UTC (permalink / raw)
To: Christian König
Cc: luben.tuikov, airlied, nouveau, dri-devel, chris, melissa.srw,
ray.huang, kraxel, sam, emil.velikov, linux-samsung-soc,
jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, kgene, alyssa.rosenzweig, linux+etnaviv,
xen-devel, bskeggs, etnaviv, hdegoede, spice-devel,
virtualization, sean, apaneers, linux-arm-kernel, linaro-mm-sig,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, yuq825, alexander.deucher, linux-media
Hi
On Thu, 15 Oct 2020 16:08:13 +0200 Christian König <christian.koenig@amd.com>
wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> > address space. The mapping's address is returned as struct dma_buf_map.
> > Each function is a simplified version of TTM's existing kmap code. Both
> > functions respect the memory's location ani/or writecombine flags.
> >
> > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > two helpers that convert a GEM object into the TTM BO and forward the call
> > to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object
> > callbacks.
> >
> > v4:
> > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> > Christian)
>
> Bunch of minor comments below, but over all look very solid to me.
>
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > include/drm/drm_gem_ttm_helper.h | 6 +++
> > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > include/linux/dma-buf-map.h | 20 ++++++++
> > 5 files changed, 164 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30
> > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
> > unsigned int indent, }
> > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> >
> > +/**
> > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: [out] returns the dma-buf mapping.
> > + *
> > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + *
> > + * Returns:
> > + * 0 on success, or a negative errno code otherwise.
> > + */
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + return ttm_bo_vmap(bo, map);
> > +
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > +
> > +/**
> > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: dma-buf mapping.
> > + *
> > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + */
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + ttm_bo_vunmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > +
> > /**
> > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > * @gem: GEM object.
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d
> > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -32,6 +32,7 @@
> > #include <drm/ttm/ttm_bo_driver.h>
> > #include <drm/ttm/ttm_placement.h>
> > #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> > #include <linux/io.h>
> > #include <linux/highmem.h>
> > #include <linux/wait.h>
> > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > }
> > EXPORT_SYMBOL(ttm_bo_kunmap);
> >
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + struct ttm_resource *mem = &bo->mem;
> > + int ret;
> > +
> > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > + if (ret)
> > + return ret;
> > +
> > + if (mem->bus.is_iomem) {
> > + void __iomem *vaddr_iomem;
> > + unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
>
> We have an unit tests of allocating a 8GB BO and that should work on a
> 32bit machine as well :)
>
> > +
> > + if (mem->bus.addr)
> > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
>
> > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > + else
> > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > +
> > + if (!vaddr_iomem)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > +
> > + } else {
> > + struct ttm_operation_ctx ctx = {
> > + .interruptible = false,
> > + .no_wait_gpu = false
> > + };
> > + struct ttm_tt *ttm = bo->ttm;
> > + pgprot_t prot;
> > + void *vaddr;
> > +
> > + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
> > +
> > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * We need to use vmap to get the desired page protection
> > + * or to make the buffer object look contiguous.
> > + */
> > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should
> be trivial to adapt.
Thanks for quickly reviewing these patches. My drm-tip seems out of date
(last Sunday). TTM is moving fast these days and I still have to get used to
that. :)
Best regards
Thomas
>
> Regards,
> Christian.
>
> > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > + if (!vaddr)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr(map, vaddr);
> > + }
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vmap);
> > +
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + if (dma_buf_map_is_null(map))
> > + return;
> > +
> > + if (map->is_iomem)
> > + iounmap(map->vaddr_iomem);
> > + else
> > + vunmap(map->vaddr);
> > + dma_buf_map_clear(map);
> > +
> > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > +
> > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > bool dst_use_tt)
> > {
> > diff --git a/include/drm/drm_gem_ttm_helper.h
> > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 100644
> > --- a/include/drm/drm_gem_ttm_helper.h
> > +++ b/include/drm/drm_gem_ttm_helper.h
> > @@ -10,11 +10,17 @@
> > #include <drm/ttm/ttm_bo_api.h>
> > #include <drm/ttm/ttm_bo_driver.h>
> >
> > +struct dma_buf_map;
> > +
> > #define drm_gem_ttm_of_gem(gem_obj) \
> > container_of(gem_obj, struct ttm_buffer_object, base)
> >
> > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > const struct drm_gem_object *gem);
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > struct vm_area_struct *vma);
> >
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 37102e45e496..2c59a785374c 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> >
> > struct ttm_bo_device;
> >
> > +struct dma_buf_map;
> > +
> > struct drm_mm_node;
> >
> > struct ttm_placement;
> > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > unsigned long start_page, */
> > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> >
> > +/**
> > + * ttm_bo_vmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: pointer to a struct dma_buf_map representing the map.
> > + *
> > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > + * data in the buffer object. The parameter @map returns the virtual
> > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > + *
> > + * Returns
> > + * -ENOMEM: Out of memory.
> > + * -EINVAL: Invalid range.
> > + */
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > +/**
> > + * ttm_bo_vunmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: Object describing the map to unmap.
> > + *
> > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > + */
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > *map); +
> > /**
> > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> > *
> > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
> > *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > dma_buf_map *map, void *vaddr) map->is_iomem = false;
> > }
> >
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an
> > address in I/O memory
> > + * @map: The dma-buf mapping structure
> > + * @vaddr_iomem: An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > + void __iomem *vaddr_iomem)
> > +{
> > + map->vaddr_iomem = vaddr_iomem;
> > + map->is_iomem = true;
> > +}
> > +
> > /**
> > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
> > equality
> > * @lhs: The dma-buf mapping structure
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-15 17:56 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-15 17:56 UTC (permalink / raw)
To: Christian König
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, xen-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, spice-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, l.stach
Hi
On Thu, 15 Oct 2020 16:08:13 +0200 Christian König <christian.koenig@amd.com>
wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> > address space. The mapping's address is returned as struct dma_buf_map.
> > Each function is a simplified version of TTM's existing kmap code. Both
> > functions respect the memory's location ani/or writecombine flags.
> >
> > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > two helpers that convert a GEM object into the TTM BO and forward the call
> > to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object
> > callbacks.
> >
> > v4:
> > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> > Christian)
>
> Bunch of minor comments below, but over all look very solid to me.
>
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
> > include/drm/drm_gem_ttm_helper.h | 6 +++
> > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
> > include/linux/dma-buf-map.h | 20 ++++++++
> > 5 files changed, 164 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30
> > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
> > unsigned int indent, }
> > EXPORT_SYMBOL(drm_gem_ttm_print_info);
> >
> > +/**
> > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: [out] returns the dma-buf mapping.
> > + *
> > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + *
> > + * Returns:
> > + * 0 on success, or a negative errno code otherwise.
> > + */
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + return ttm_bo_vmap(bo, map);
> > +
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > +
> > +/**
> > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: dma-buf mapping.
> > + *
> > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + */
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map)
> > +{
> > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > + ttm_bo_vunmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > +
> > /**
> > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > * @gem: GEM object.
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d
> > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -32,6 +32,7 @@
> > #include <drm/ttm/ttm_bo_driver.h>
> > #include <drm/ttm/ttm_placement.h>
> > #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> > #include <linux/io.h>
> > #include <linux/highmem.h>
> > #include <linux/wait.h>
> > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > }
> > EXPORT_SYMBOL(ttm_bo_kunmap);
> >
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + struct ttm_resource *mem = &bo->mem;
> > + int ret;
> > +
> > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > + if (ret)
> > + return ret;
> > +
> > + if (mem->bus.is_iomem) {
> > + void __iomem *vaddr_iomem;
> > + unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
>
> We have an unit tests of allocating a 8GB BO and that should work on a
> 32bit machine as well :)
>
> > +
> > + if (mem->bus.addr)
> > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
>
> > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > + else
> > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > +
> > + if (!vaddr_iomem)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > +
> > + } else {
> > + struct ttm_operation_ctx ctx = {
> > + .interruptible = false,
> > + .no_wait_gpu = false
> > + };
> > + struct ttm_tt *ttm = bo->ttm;
> > + pgprot_t prot;
> > + void *vaddr;
> > +
> > + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
> > +
> > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * We need to use vmap to get the desired page protection
> > + * or to make the buffer object look contiguous.
> > + */
> > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should
> be trivial to adapt.
Thanks for quickly reviewing these patches. My drm-tip seems out of date
(last Sunday). TTM is moving fast these days and I still have to get used to
that. :)
Best regards
Thomas
>
> Regards,
> Christian.
>
> > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > + if (!vaddr)
> > + return -ENOMEM;
> > +
> > + dma_buf_map_set_vaddr(map, vaddr);
> > + }
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vmap);
> > +
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > + if (dma_buf_map_is_null(map))
> > + return;
> > +
> > + if (map->is_iomem)
> > + iounmap(map->vaddr_iomem);
> > + else
> > + vunmap(map->vaddr);
> > + dma_buf_map_clear(map);
> > +
> > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > +
> > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > bool dst_use_tt)
> > {
> > diff --git a/include/drm/drm_gem_ttm_helper.h
> > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 100644
> > --- a/include/drm/drm_gem_ttm_helper.h
> > +++ b/include/drm/drm_gem_ttm_helper.h
> > @@ -10,11 +10,17 @@
> > #include <drm/ttm/ttm_bo_api.h>
> > #include <drm/ttm/ttm_bo_driver.h>
> >
> > +struct dma_buf_map;
> > +
> > #define drm_gem_ttm_of_gem(gem_obj) \
> > container_of(gem_obj, struct ttm_buffer_object, base)
> >
> > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> > const struct drm_gem_object *gem);
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > + struct dma_buf_map *map);
> > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > struct vm_area_struct *vma);
> >
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 37102e45e496..2c59a785374c 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> >
> > struct ttm_bo_device;
> >
> > +struct dma_buf_map;
> > +
> > struct drm_mm_node;
> >
> > struct ttm_placement;
> > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > unsigned long start_page, */
> > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> >
> > +/**
> > + * ttm_bo_vmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: pointer to a struct dma_buf_map representing the map.
> > + *
> > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > + * data in the buffer object. The parameter @map returns the virtual
> > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > + *
> > + * Returns
> > + * -ENOMEM: Out of memory.
> > + * -EINVAL: Invalid range.
> > + */
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > +/**
> > + * ttm_bo_vunmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: Object describing the map to unmap.
> > + *
> > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > + */
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > *map); +
> > /**
> > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> > *
> > * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > dma_buf_map *map, void *vaddr) map->is_iomem = false;
> > }
> >
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an
> > address in I/O memory
> > + * @map: The dma-buf mapping structure
> > + * @vaddr_iomem: An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > + void __iomem *vaddr_iomem)
> > +{
> > + map->vaddr_iomem = vaddr_iomem;
> > + map->is_iomem = true;
> > +}
> > +
> > /**
> > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
> > equality
> > * @lhs: The dma-buf mapping structure
>
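The struct dma_buf_map pattern quoted above — one pointer plus an `is_iomem` flag, set through type-specific helpers — can be modeled in plain userspace C. This is a simplified stand-in for illustration only, not the kernel's `<linux/dma-buf-map.h>` (which uses a `void __iomem *` member and additional helpers):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified userspace stand-in for the kernel's struct dma_buf_map. */
struct dma_buf_map {
	union {
		void *vaddr;       /* mapping in system memory */
		void *vaddr_iomem; /* mapping in I/O memory */
	};
	bool is_iomem;
};

static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
					       void *vaddr_iomem)
{
	map->vaddr_iomem = vaddr_iomem;
	map->is_iomem = true;
}

static inline bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
	/* Both union members alias the same storage, so one test covers both. */
	return !map->vaddr;
}
```

Callers test `is_iomem` before accessing the mapping, which is exactly what lets the fbdev emulation pick I/O-specific loads and stores when needed.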
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
2020-10-15 17:52 ` Thomas Zimmermann
@ 2020-10-16 9:41 ` Christian König
-1 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-16 9:41 UTC (permalink / raw)
To: Thomas Zimmermann, Daniel Vetter
Cc: luben.tuikov, airlied, nouveau, dri-devel, chris, melissa.srw,
ray.huang, kraxel, sam, emil.velikov, linux-samsung-soc,
jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, kgene, bskeggs, linux+etnaviv, spice-devel,
alyssa.rosenzweig, etnaviv, hdegoede, xen-devel, virtualization,
sean, apaneers, linux-arm-kernel, linaro-mm-sig, amd-gfx,
tomeu.vizoso, sw0312.kim, hjc, kyungmin.park, miaoqinglang,
yuq825, alexander.deucher, linux-media
Am 15.10.20 um 19:52 schrieb Thomas Zimmermann:
> Hi
>
> On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter <daniel@ffwll.ch> wrote:
>
>> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
>>> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
>>>> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in
>>>> kernel address space. The mapping's address is returned as struct
>>>> dma_buf_map. Each function is a simplified version of TTM's existing
>>>> kmap code. Both functions respect the memory's location and/or
>>>> writecombine flags.
>>>>
>>>> On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
>>>> two helpers that convert a GEM object into the TTM BO and forward the
>>>> call to TTM's vmap/vunmap. These helpers can be dropped into the respective
>>>> GEM object callbacks.
>>>>
>>>> v4:
>>>> * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers
>>>> (Daniel, Christian)
>>> Bunch of minor comments below, but overall this looks very solid to me.
>> Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
>> cleanest. And then we can maybe push the combinatorial monster into
>> vmwgfx, which I think is the only user after this series. Or perhaps a
>> dedicated set of helpers to map an individual page (again using the
>> dma_buf_map stuff).
> From a quick look, I'd say it should be possible to have the same interface
> for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map).
> All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be
> killed off entirely.
Yes, that would be rather nice to have.
Thanks,
Christian.
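Thomas's proposal — give kmap/kunmap the same shape as vmap/vunmap so all unmap state lives in the dma-buf-map and struct ttm_bo_kmap_obj can go away — can be sketched as follows. Everything here is a hypothetical userspace model with stand-in types and names (`ttm_bo_kmap_sketch`, `PAGE_SIZE` as 4096), not the real TTM API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Userspace stand-ins for the kernel structures. */
struct dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

struct ttm_buffer_object {
	char *backing;  /* stand-in for the BO's pages */
	bool is_iomem;  /* stand-in for mem->bus.is_iomem */
};

/* Hypothetical kmap with the vmap-style calling convention: the
 * result and its memory type land in the dma-buf-map. In the kernel
 * this would kmap() a system-memory page or ioremap() an I/O page. */
static int ttm_bo_kmap_sketch(struct ttm_buffer_object *bo,
			      unsigned long start_page,
			      struct dma_buf_map *map)
{
	map->vaddr = bo->backing + start_page * PAGE_SIZE;
	map->is_iomem = bo->is_iomem;
	return 0;
}

static void ttm_bo_kunmap_sketch(struct dma_buf_map *map)
{
	if (!map->vaddr)
		return;
	/* map->is_iomem alone selects kunmap() vs. iounmap(); no
	 * separate struct ttm_bo_kmap_obj bookkeeping is needed. */
	map->vaddr = NULL;
	map->is_iomem = false;
}
```

The point of the sketch is the symmetry: map and unmap exchange only `(bo, map)`, so the map object itself carries everything the teardown path must know.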
>
> Best regards
> Thomas
>
>> I'll let Christian handle the details, but at a high level this is
>> definitely
>>
>> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>
>> Thanks a lot for doing all this.
>> -Daniel
>>
>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>> ---
>>>> drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
>>>> drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
>>>> include/drm/drm_gem_ttm_helper.h | 6 +++
>>>> include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
>>>> include/linux/dma-buf-map.h | 20 ++++++++
>>>> 5 files changed, 164 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30
>>>> 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
>>>> unsigned int indent, }
>>>> EXPORT_SYMBOL(drm_gem_ttm_print_info);
>>>> +/**
>>>> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
>>>> + * @gem: GEM object.
>>>> + * @map: [out] returns the dma-buf mapping.
>>>> + *
>>>> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
>>>> + * &drm_gem_object_funcs.vmap callback.
>>>> + *
>>>> + * Returns:
>>>> + * 0 on success, or a negative errno code otherwise.
>>>> + */
>>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map)
>>>> +{
>>>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>>>> +
>>>> + return ttm_bo_vmap(bo, map);
>>>> +}
>>>> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
>>>> +
>>>> +/**
>>>> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
>>>> + * @gem: GEM object.
>>>> + * @map: dma-buf mapping.
>>>> + *
>>>> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
>>>> + * &drm_gem_object_funcs.vunmap callback.
>>>> + */
>>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map)
>>>> +{
>>>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>>>> +
>>>> + ttm_bo_vunmap(bo, map);
>>>> +}
>>>> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
>>>> +
>>>> /**
>>>> * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
>>>> * @gem: GEM object.
>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d
>>>> 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> @@ -32,6 +32,7 @@
>>>> #include <drm/ttm/ttm_bo_driver.h>
>>>> #include <drm/ttm/ttm_placement.h>
>>>> #include <drm/drm_vma_manager.h>
>>>> +#include <linux/dma-buf-map.h>
>>>> #include <linux/io.h>
>>>> #include <linux/highmem.h>
>>>> #include <linux/wait.h>
>>>> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>>>> }
>>>> EXPORT_SYMBOL(ttm_bo_kunmap);
>>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>>> +{
>>>> + struct ttm_resource *mem = &bo->mem;
>>>> + int ret;
>>>> +
>>>> + ret = ttm_mem_io_reserve(bo->bdev, mem);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + if (mem->bus.is_iomem) {
>>>> + void __iomem *vaddr_iomem;
>>>> + unsigned long size = bo->num_pages << PAGE_SHIFT;
>>> Please use uint64_t here and make sure to cast bo->num_pages before
>>> shifting.
>>>
>>> We have a unit test that allocates an 8GB BO, and that should work on a
>>> 32-bit machine as well :)
>>>
>>>> +
>>>> + if (mem->bus.addr)
>>>> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
>>>> + else if (mem->placement & TTM_PL_FLAG_WC)
>>> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
>>> mem->bus.caching enum as replacement.
>>>
>>>> + vaddr_iomem = ioremap_wc(mem->bus.offset,
>>>> size);
>>>> + else
>>>> + vaddr_iomem = ioremap(mem->bus.offset, size);
>>>> +
>>>> + if (!vaddr_iomem)
>>>> + return -ENOMEM;
>>>> +
>>>> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>>>> +
>>>> + } else {
>>>> + struct ttm_operation_ctx ctx = {
>>>> + .interruptible = false,
>>>> + .no_wait_gpu = false
>>>> + };
>>>> + struct ttm_tt *ttm = bo->ttm;
>>>> + pgprot_t prot;
>>>> + void *vaddr;
>>>> +
>>>> + BUG_ON(!ttm);
>>> I think we can drop this, populate will just crash badly anyway.
>>>
>>>> +
>>>> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + /*
>>>> + * We need to use vmap to get the desired page
>>>> protection
>>>> + * or to make the buffer object look contiguous.
>>>> + */
>>>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>>> The calling convention has changed on drm-misc-next as well, but should be
>>> trivial to adapt.
>>>
>>> Regards,
>>> Christian.
>>>
>>>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>>>> + if (!vaddr)
>>>> + return -ENOMEM;
>>>> +
>>>> + dma_buf_map_set_vaddr(map, vaddr);
>>>> + }
>>>> +
>>>> + return 0;
>>>> +}
>>>> +EXPORT_SYMBOL(ttm_bo_vmap);
>>>> +
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>>> +{
>>>> + if (dma_buf_map_is_null(map))
>>>> + return;
>>>> +
>>>> + if (map->is_iomem)
>>>> + iounmap(map->vaddr_iomem);
>>>> + else
>>>> + vunmap(map->vaddr);
>>>> + dma_buf_map_clear(map);
>>>> +
>>>> + ttm_mem_io_free(bo->bdev, &bo->mem);
>>>> +}
>>>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>>>> +
>>>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>>>> bool dst_use_tt)
>>>> {
>>>> diff --git a/include/drm/drm_gem_ttm_helper.h
>>>> b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8
>>>> 100644 --- a/include/drm/drm_gem_ttm_helper.h
>>>> +++ b/include/drm/drm_gem_ttm_helper.h
>>>> @@ -10,11 +10,17 @@
>>>> #include <drm/ttm/ttm_bo_api.h>
>>>> #include <drm/ttm/ttm_bo_driver.h>
>>>> +struct dma_buf_map;
>>>> +
>>>> #define drm_gem_ttm_of_gem(gem_obj) \
>>>> container_of(gem_obj, struct ttm_buffer_object, base)
>>>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
>>>> indent, const struct drm_gem_object *gem);
>>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map);
>>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map);
>>>> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>>>> struct vm_area_struct *vma);
>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>> index 37102e45e496..2c59a785374c 100644
>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>>>> struct ttm_bo_device;
>>>> +struct dma_buf_map;
>>>> +
>>>> struct drm_mm_node;
>>>> struct ttm_placement;
>>>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>>> unsigned long start_page, */
>>>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>>>> +/**
>>>> + * ttm_bo_vmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: pointer to a struct dma_buf_map representing the map.
>>>> + *
>>>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
>>>> + * data in the buffer object. The parameter @map returns the virtual
>>>> + * address as struct dma_buf_map. Unmap the buffer with
>>>> ttm_bo_vunmap().
>>>> + *
>>>> + * Returns
>>>> + * -ENOMEM: Out of memory.
>>>> + * -EINVAL: Invalid range.
>>>> + */
>>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>> +
>>>> +/**
>>>> + * ttm_bo_vunmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: Object describing the map to unmap.
>>>> + *
>>>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>>>> + */
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>> +
>>>> /**
>>>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>>>> *
>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>> --- a/include/linux/dma-buf-map.h
>>>> +++ b/include/linux/dma-buf-map.h
>>>> @@ -45,6 +45,12 @@
>>>> *
>>>> * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>> *
>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>> + *
>>>> + * .. code-block:: c
>>>> + *
>>>> * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>> + *
>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>> * dma_buf_map_is_null().
>>>> *
>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>> dma_buf_map *map, void *vaddr) map->is_iomem = false;
>>>> }
>>>> +/**
>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>> an address in I/O memory
>>>> + * @map: The dma-buf mapping structure
>>>> + * @vaddr_iomem: An I/O-memory address
>>>> + *
>>>> + * Sets the address and the I/O-memory flag.
>>>> + */
>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>> + void __iomem *vaddr_iomem)
>>>> +{
>>>> + map->vaddr_iomem = vaddr_iomem;
>>>> + map->is_iomem = true;
>>>> +}
>>>> +
>>>> /**
>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
>>>> equality
>>>> * @lhs: The dma-buf mapping structure
>
>
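Christian's 32-bit overflow point in the review above can be demonstrated in isolation. This userspace model uses `uint32_t` as a stand-in for a 32-bit `unsigned long`; the function names are illustrative, not kernel APIs:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* On a 32-bit machine, `unsigned long size = num_pages << PAGE_SHIFT`
 * wraps for BOs at or above 4 GiB. uint32_t models that unsigned long. */
static uint32_t bo_size_32bit(uint32_t num_pages)
{
	return num_pages << PAGE_SHIFT; /* wraps: high bits are lost */
}

/* The fix Christian asks for: widen num_pages *before* shifting. */
static uint64_t bo_size_64bit(uint32_t num_pages)
{
	return (uint64_t)num_pages << PAGE_SHIFT;
}
```

With 4 KiB pages, an 8 GiB BO has 2^21 pages; shifting that left by 12 in 32-bit arithmetic wraps to 0, while the widened computation yields the correct 2^33 bytes.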
>>>> + *
>>>> + * Returns
>>>> + * -ENOMEM: Out of memory.
>>>> + * -EINVAL: Invalid range.
>>>> + */
>>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>> +
>>>> +/**
>>>> + * ttm_bo_vunmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: Object describing the map to unmap.
>>>> + *
>>>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>>>> + */
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>>>> *map); +
>>>> /**
>>>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>>>> *
>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>> --- a/include/linux/dma-buf-map.h
>>>> +++ b/include/linux/dma-buf-map.h
>>>> @@ -45,6 +45,12 @@
>>>> *
>>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
>>>> *
>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>> + *
>>>> + * .. code-block:: c
>>>> + *
>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
>>>> + *
>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>> * dma_buf_map_is_null().
>>>> *
>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>> dma_buf_map *map, void *vaddr) map->is_iomem = false;
>>>> }
>>>> +/**
>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>> an address in I/O memory
>>>> + * @map: The dma-buf mapping structure
>>>> + * @vaddr_iomem: An I/O-memory address
>>>> + *
>>>> + * Sets the address and the I/O-memory flag.
>>>> + */
>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>> + void __iomem
>>>> *vaddr_iomem) +{
>>>> + map->vaddr_iomem = vaddr_iomem;
>>>> + map->is_iomem = true;
>>>> +}
>>>> +
>>>> /**
>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
>>>> equality
>>>> * @lhs: The dma-buf mapping structure
>
>
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-16 9:41 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-16 9:41 UTC (permalink / raw)
To: Thomas Zimmermann, Daniel Vetter
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
yuq825, sam, emil.velikov, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
luben.tuikov, alyssa.rosenzweig, linux+etnaviv, spice-devel,
bskeggs, etnaviv, linaro-mm-sig, hdegoede, xen-devel,
virtualization, sean, apaneers, linux-arm-kernel, amd-gfx,
tomeu.vizoso, sw0312.kim, hjc, kyungmin.park, miaoqinglang,
kgene, alexander.deucher, linux-media
Am 15.10.20 um 19:52 schrieb Thomas Zimmermann:
> Hi
>
> On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter <daniel@ffwll.ch> wrote:
>
>> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
>>> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
>>>> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in
>>>> kernel address space. The mapping's address is returned as struct
>>>> dma_buf_map. Each function is a simplified version of TTM's existing
>>>> kmap code. Both functions respect the memory's location and/or
>>>> writecombine flags.
>>>>
>>>> On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
>>>> two helpers that convert a GEM object into the TTM BO and forward the
>>>> call to TTM's vmap/vunmap. These helpers can be dropped into the
>>>> respective GEM object callbacks.
>>>>
>>>> v4:
>>>> * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers
>>>> (Daniel, Christian)
>>> Bunch of minor comments below, but over all look very solid to me.
>> Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
>> cleanest. And then we can maybe push the combinatorial monster into
>> vmwgfx, which I think is the only user after this series. Or perhaps a
>> dedicated set of helpers to map an individual page (again using the
>> dma_buf_map stuff).
> From a quick look, I'd say it should be possible to have the same interface
> for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map).
> All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be
> killed off entirely.
Yes, that would be rather nice to have.
Thanks,
Christian.
>
> Best regards
> Thomas
>
>> I'll leave the details to Christian, but at a high level this is
>> definitely
>>
>> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>
>> Thanks a lot for doing all this.
>> -Daniel
>>
>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>> ---
>>>> drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
>>>> drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
>>>> include/drm/drm_gem_ttm_helper.h | 6 +++
>>>> include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
>>>> include/linux/dma-buf-map.h | 20 ++++++++
>>>> 5 files changed, 164 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30
>>>> 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
>>>> unsigned int indent, }
>>>> EXPORT_SYMBOL(drm_gem_ttm_print_info);
>>>> +/**
>>>> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
>>>> + * @gem: GEM object.
>>>> + * @map: [out] returns the dma-buf mapping.
>>>> + *
>>>> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
>>>> + * &drm_gem_object_funcs.vmap callback.
>>>> + *
>>>> + * Returns:
>>>> + * 0 on success, or a negative errno code otherwise.
>>>> + */
>>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map)
>>>> +{
>>>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>>>> +
>>>> + return ttm_bo_vmap(bo, map);
>>>> +}
>>>> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
>>>> +
>>>> +/**
>>>> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
>>>> + * @gem: GEM object.
>>>> + * @map: dma-buf mapping.
>>>> + *
>>>> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be
>>>> + * used as the &drm_gem_object_funcs.vunmap callback.
>>>> + */
>>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map)
>>>> +{
>>>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>>>> +
>>>> + ttm_bo_vunmap(bo, map);
>>>> +}
>>>> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
>>>> +
>>>> /**
>>>> * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
>>>> * @gem: GEM object.
>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d
>>>> 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> @@ -32,6 +32,7 @@
>>>> #include <drm/ttm/ttm_bo_driver.h>
>>>> #include <drm/ttm/ttm_placement.h>
>>>> #include <drm/drm_vma_manager.h>
>>>> +#include <linux/dma-buf-map.h>
>>>> #include <linux/io.h>
>>>> #include <linux/highmem.h>
>>>> #include <linux/wait.h>
>>>> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>>>> }
>>>> EXPORT_SYMBOL(ttm_bo_kunmap);
>>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>>> +{
>>>> + struct ttm_resource *mem = &bo->mem;
>>>> + int ret;
>>>> +
>>>> + ret = ttm_mem_io_reserve(bo->bdev, mem);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + if (mem->bus.is_iomem) {
>>>> + void __iomem *vaddr_iomem;
>>>> + unsigned long size = bo->num_pages << PAGE_SHIFT;
>>> Please use uint64_t here and make sure to cast bo->num_pages before
>>> shifting.
>>>
>>> We have a unit test that allocates an 8GB BO, and that should work on a
>>> 32-bit machine as well :)
>>>
>>>> +
>>>> + if (mem->bus.addr)
>>>> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
>>>> + else if (mem->placement & TTM_PL_FLAG_WC)
>>> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
>>> mem->bus.caching enum as replacement.
>>>
>>>> + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
>>>> + else
>>>> + vaddr_iomem = ioremap(mem->bus.offset, size);
>>>> +
>>>> + if (!vaddr_iomem)
>>>> + return -ENOMEM;
>>>> +
>>>> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>>>> +
>>>> + } else {
>>>> + struct ttm_operation_ctx ctx = {
>>>> + .interruptible = false,
>>>> + .no_wait_gpu = false
>>>> + };
>>>> + struct ttm_tt *ttm = bo->ttm;
>>>> + pgprot_t prot;
>>>> + void *vaddr;
>>>> +
>>>> + BUG_ON(!ttm);
>>> I think we can drop this, populate will just crash badly anyway.
>>>
>>>> +
>>>> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + /*
>>>> + * We need to use vmap to get the desired page
>>>> protection
>>>> + * or to make the buffer object look contiguous.
>>>> + */
>>>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>>> The calling convention has changed on drm-misc-next as well, but should be
>>> trivial to adapt.
>>>
>>> Regards,
>>> Christian.
>>>
>>>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>>>> + if (!vaddr)
>>>> + return -ENOMEM;
>>>> +
>>>> + dma_buf_map_set_vaddr(map, vaddr);
>>>> + }
>>>> +
>>>> + return 0;
>>>> +}
>>>> +EXPORT_SYMBOL(ttm_bo_vmap);
>>>> +
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>>> +{
>>>> + if (dma_buf_map_is_null(map))
>>>> + return;
>>>> +
>>>> + if (map->is_iomem)
>>>> + iounmap(map->vaddr_iomem);
>>>> + else
>>>> + vunmap(map->vaddr);
>>>> + dma_buf_map_clear(map);
>>>> +
>>>> + ttm_mem_io_free(bo->bdev, &bo->mem);
>>>> +}
>>>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>>>> +
>>>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>>>> bool dst_use_tt)
>>>> {
>>>> diff --git a/include/drm/drm_gem_ttm_helper.h
>>>> b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8
>>>> 100644 --- a/include/drm/drm_gem_ttm_helper.h
>>>> +++ b/include/drm/drm_gem_ttm_helper.h
>>>> @@ -10,11 +10,17 @@
>>>> #include <drm/ttm/ttm_bo_api.h>
>>>> #include <drm/ttm/ttm_bo_driver.h>
>>>> +struct dma_buf_map;
>>>> +
>>>> #define drm_gem_ttm_of_gem(gem_obj) \
>>>> container_of(gem_obj, struct ttm_buffer_object, base)
>>>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
>>>> indent, const struct drm_gem_object *gem);
>>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map);
>>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map);
>>>> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>>>> struct vm_area_struct *vma);
>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>> index 37102e45e496..2c59a785374c 100644
>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>>>> struct ttm_bo_device;
>>>> +struct dma_buf_map;
>>>> +
>>>> struct drm_mm_node;
>>>> struct ttm_placement;
>>>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>>> unsigned long start_page, */
>>>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>>>> +/**
>>>> + * ttm_bo_vmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: pointer to a struct dma_buf_map representing the map.
>>>> + *
>>>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
>>>> + * data in the buffer object. The parameter @map returns the virtual
>>>> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
>>>> + *
>>>> + * Returns
>>>> + * -ENOMEM: Out of memory.
>>>> + * -EINVAL: Invalid range.
>>>> + */
>>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>> +
>>>> +/**
>>>> + * ttm_bo_vunmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: Object describing the map to unmap.
>>>> + *
>>>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>>>> + */
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>> +
>>>> /**
>>>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>>>> *
>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>> --- a/include/linux/dma-buf-map.h
>>>> +++ b/include/linux/dma-buf-map.h
>>>> @@ -45,6 +45,12 @@
>>>> *
>>>> * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>> *
>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>> + *
>>>> + * .. code-block:: c
>>>> + *
>>>> + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>> + *
>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>> * dma_buf_map_is_null().
>>>> *
>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>> dma_buf_map *map, void *vaddr) map->is_iomem = false;
>>>> }
>>>> +/**
>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>> an address in I/O memory
>>>> + * @map: The dma-buf mapping structure
>>>> + * @vaddr_iomem: An I/O-memory address
>>>> + *
>>>> + * Sets the address and the I/O-memory flag.
>>>> + */
>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>> + void __iomem *vaddr_iomem)
>>>> +{
>>>> + map->vaddr_iomem = vaddr_iomem;
>>>> + map->is_iomem = true;
>>>> +}
>>>> +
>>>> /**
>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
>>>> equality
>>>> * @lhs: The dma-buf mapping structure
>
>
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 195+ messages in thread
>>>> protection
>>>> + * or to make the buffer object look contiguous.
>>>> + */
>>>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>>> The calling convention has changed on drm-misc-next as well, but should be
>>> trivial to adapt.
>>>
>>> Regards,
>>> Christian.
>>>
>>>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>>>> + if (!vaddr)
>>>> + return -ENOMEM;
>>>> +
>>>> + dma_buf_map_set_vaddr(map, vaddr);
>>>> + }
>>>> +
>>>> + return 0;
>>>> +}
>>>> +EXPORT_SYMBOL(ttm_bo_vmap);
>>>> +
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>>>> *map) +{
>>>> + if (dma_buf_map_is_null(map))
>>>> + return;
>>>> +
>>>> + if (map->is_iomem)
>>>> + iounmap(map->vaddr_iomem);
>>>> + else
>>>> + vunmap(map->vaddr);
>>>> + dma_buf_map_clear(map);
>>>> +
>>>> + ttm_mem_io_free(bo->bdev, &bo->mem);
>>>> +}
>>>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>>>> +
>>>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>>>> bool dst_use_tt)
>>>> {
>>>> diff --git a/include/drm/drm_gem_ttm_helper.h
>>>> b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8
>>>> 100644 --- a/include/drm/drm_gem_ttm_helper.h
>>>> +++ b/include/drm/drm_gem_ttm_helper.h
>>>> @@ -10,11 +10,17 @@
>>>> #include <drm/ttm/ttm_bo_api.h>
>>>> #include <drm/ttm/ttm_bo_driver.h>
>>>> +struct dma_buf_map;
>>>> +
>>>> #define drm_gem_ttm_of_gem(gem_obj) \
>>>> container_of(gem_obj, struct ttm_buffer_object, base)
>>>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
>>>> indent, const struct drm_gem_object *gem);
>>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map);
>>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map);
>>>> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>>>> struct vm_area_struct *vma);
>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>> index 37102e45e496..2c59a785374c 100644
>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>>>> struct ttm_bo_device;
>>>> +struct dma_buf_map;
>>>> +
>>>> struct drm_mm_node;
>>>> struct ttm_placement;
>>>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>>> unsigned long start_page, */
>>>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>>>> +/**
>>>> + * ttm_bo_vmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: pointer to a struct dma_buf_map representing the map.
>>>> + *
>>>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
>>>> + * data in the buffer object. The parameter @map returns the virtual
>>>> + * address as struct dma_buf_map. Unmap the buffer with
>>>> ttm_bo_vunmap().
>>>> + *
>>>> + * Returns
>>>> + * -ENOMEM: Out of memory.
>>>> + * -EINVAL: Invalid range.
>>>> + */
>>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>> +
>>>> +/**
>>>> + * ttm_bo_vunmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: Object describing the map to unmap.
>>>> + *
>>>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>>>> + */
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>>>> *map); +
>>>> /**
>>>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>>>> *
>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>> --- a/include/linux/dma-buf-map.h
>>>> +++ b/include/linux/dma-buf-map.h
>>>> @@ -45,6 +45,12 @@
>>>> *
>>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
>>>> *
>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>> + *
>>>> + * .. code-block:: c
>>>> + *
>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
>>>> + *
>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>> * dma_buf_map_is_null().
>>>> *
>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>> dma_buf_map *map, void *vaddr) map->is_iomem = false;
>>>> }
>>>> +/**
>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>> an address in I/O memory
>>>> + * @map: The dma-buf mapping structure
>>>> + * @vaddr_iomem: An I/O-memory address
>>>> + *
>>>> + * Sets the address and the I/O-memory flag.
>>>> + */
>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>> + void __iomem
>>>> *vaddr_iomem) +{
>>>> + map->vaddr_iomem = vaddr_iomem;
>>>> + map->is_iomem = true;
>>>> +}
>>>> +
>>>> /**
>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
>>>> equality
>>>> * @lhs: The dma-buf mapping structure
>
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-16 9:41 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-16 9:41 UTC (permalink / raw)
To: Thomas Zimmermann, Daniel Vetter
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
kraxel, yuq825, sam, emil.velikov, linux-samsung-soc,
jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, luben.tuikov, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, linaro-mm-sig, hdegoede,
xen-devel, virtualization, sean, apaneers, linux-arm-kernel,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, kgene, alexander.deucher, linux-media
Am 15.10.20 um 19:52 schrieb Thomas Zimmermann:
> Hi
>
> On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter <daniel@ffwll.ch> wrote:
>
>> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
>>> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
>>>> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in
>>>> kernel address space. The mapping's address is returned as struct
>>>> dma_buf_map. Each function is a simplified version of TTM's existing
>>>> kmap code. Both functions respect the memory's location and/or
>>>> writecombine flags.
>>>>
>>>> On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
>>>> two helpers that convert a GEM object into the TTM BO and forward the
>>>> call to TTM's vmap/vunmap. These helpers can be dropped into the
>>>> respective GEM object callbacks.
>>>>
>>>> v4:
>>>> * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers
>>>> (Daniel, Christian)
>>> Bunch of minor comments below, but overall this looks very solid to me.
>> Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
>> cleanest. And then we can maybe push the combinatorial monster into
>> vmwgfx, which I think is the only user after this series. Or perhaps a
>> dedicated set of helpers to map an individual page (again using the
>> dma_buf_map stuff).
> From a quick look, I'd say it should be possible to have the same interface
> for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map).
> All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be
> killed off entirely.
Yes, that would be rather nice to have.
Thanks,
Christian.
>
> Best regards
> Thomas
>
>> I'll let Christian with the details, but at a high level this is
>> definitely
>>
>> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>
>> Thanks a lot for doing all this.
>> -Daniel
>>
>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>> ---
>>>> drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
>>>> drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
>>>> include/drm/drm_gem_ttm_helper.h | 6 +++
>>>> include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
>>>> include/linux/dma-buf-map.h | 20 ++++++++
>>>> 5 files changed, 164 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> index 0e4fb9ba43ad..db4c14d78a30 100644
>>>> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
>>>> }
>>>> EXPORT_SYMBOL(drm_gem_ttm_print_info);
>>>> +/**
>>>> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
>>>> + * @gem: GEM object.
>>>> + * @map: [out] returns the dma-buf mapping.
>>>> + *
>>>> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
>>>> + * &drm_gem_object_funcs.vmap callback.
>>>> + *
>>>> + * Returns:
>>>> + * 0 on success, or a negative errno code otherwise.
>>>> + */
>>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map)
>>>> +{
>>>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>>>> +
>>>> + return ttm_bo_vmap(bo, map);
>>>> +
>>>> +}
>>>> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
>>>> +
>>>> +/**
>>>> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
>>>> + * @gem: GEM object.
>>>> + * @map: dma-buf mapping.
>>>> + *
>>>> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
>>>> + * &drm_gem_object_funcs.vunmap callback.
>>>> + */
>>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map)
>>>> +{
>>>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>>>> +
>>>> + ttm_bo_vunmap(bo, map);
>>>> +}
>>>> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
>>>> +
>>>> /**
>>>> * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
>>>> * @gem: GEM object.
>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> index bdee4df1f3f2..80c42c774c7d 100644
>>>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> @@ -32,6 +32,7 @@
>>>> #include <drm/ttm/ttm_bo_driver.h>
>>>> #include <drm/ttm/ttm_placement.h>
>>>> #include <drm/drm_vma_manager.h>
>>>> +#include <linux/dma-buf-map.h>
>>>> #include <linux/io.h>
>>>> #include <linux/highmem.h>
>>>> #include <linux/wait.h>
>>>> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>>>> }
>>>> EXPORT_SYMBOL(ttm_bo_kunmap);
>>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>>> +{
>>>> + struct ttm_resource *mem = &bo->mem;
>>>> + int ret;
>>>> +
>>>> + ret = ttm_mem_io_reserve(bo->bdev, mem);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + if (mem->bus.is_iomem) {
>>>> + void __iomem *vaddr_iomem;
>>>> + unsigned long size = bo->num_pages << PAGE_SHIFT;
>>> Please use uint64_t here and make sure to cast bo->num_pages before
>>> shifting.
>>>
>>> We have a unit test that allocates an 8GB BO, and that should work on a
>>> 32bit machine as well :)
>>>
>>>> +
>>>> + if (mem->bus.addr)
>>>> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
>>>> + else if (mem->placement & TTM_PL_FLAG_WC)
>>> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
>>> mem->bus.caching enum as replacement.
>>>
>>>> + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
>>>> + else
>>>> + vaddr_iomem = ioremap(mem->bus.offset, size);
>>>> +
>>>> + if (!vaddr_iomem)
>>>> + return -ENOMEM;
>>>> +
>>>> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>>>> +
>>>> + } else {
>>>> + struct ttm_operation_ctx ctx = {
>>>> + .interruptible = false,
>>>> + .no_wait_gpu = false
>>>> + };
>>>> + struct ttm_tt *ttm = bo->ttm;
>>>> + pgprot_t prot;
>>>> + void *vaddr;
>>>> +
>>>> + BUG_ON(!ttm);
>>> I think we can drop this, populate will just crash badly anyway.
>>>
>>>> +
>>>> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + /*
>>>> + * We need to use vmap to get the desired page protection
>>>> + * or to make the buffer object look contiguous.
>>>> + */
>>>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>>> The calling convention has changed on drm-misc-next as well, but should be
>>> trivial to adapt.
>>>
>>> Regards,
>>> Christian.
>>>
>>>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>>>> + if (!vaddr)
>>>> + return -ENOMEM;
>>>> +
>>>> + dma_buf_map_set_vaddr(map, vaddr);
>>>> + }
>>>> +
>>>> + return 0;
>>>> +}
>>>> +EXPORT_SYMBOL(ttm_bo_vmap);
>>>> +
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>>> +{
>>>> + if (dma_buf_map_is_null(map))
>>>> + return;
>>>> +
>>>> + if (map->is_iomem)
>>>> + iounmap(map->vaddr_iomem);
>>>> + else
>>>> + vunmap(map->vaddr);
>>>> + dma_buf_map_clear(map);
>>>> +
>>>> + ttm_mem_io_free(bo->bdev, &bo->mem);
>>>> +}
>>>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>>>> +
>>>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>>>> bool dst_use_tt)
>>>> {
>>>> diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
>>>> index 118cef76f84f..7c6d874910b8 100644
>>>> --- a/include/drm/drm_gem_ttm_helper.h
>>>> +++ b/include/drm/drm_gem_ttm_helper.h
>>>> @@ -10,11 +10,17 @@
>>>> #include <drm/ttm/ttm_bo_api.h>
>>>> #include <drm/ttm/ttm_bo_driver.h>
>>>> +struct dma_buf_map;
>>>> +
>>>> #define drm_gem_ttm_of_gem(gem_obj) \
>>>> container_of(gem_obj, struct ttm_buffer_object, base)
>>>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
>>>> const struct drm_gem_object *gem);
>>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map);
>>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>>> + struct dma_buf_map *map);
>>>> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>>>> struct vm_area_struct *vma);
>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>> index 37102e45e496..2c59a785374c 100644
>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>>>> struct ttm_bo_device;
>>>> +struct dma_buf_map;
>>>> +
>>>> struct drm_mm_node;
>>>> struct ttm_placement;
>>>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
>>>> */
>>>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>>>> +/**
>>>> + * ttm_bo_vmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: pointer to a struct dma_buf_map representing the map.
>>>> + *
>>>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
>>>> + * data in the buffer object. The parameter @map returns the virtual
>>>> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
>>>> + *
>>>> + * Returns
>>>> + * -ENOMEM: Out of memory.
>>>> + * -EINVAL: Invalid range.
>>>> + */
>>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>> +
>>>> +/**
>>>> + * ttm_bo_vunmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: Object describing the map to unmap.
>>>> + *
>>>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>>>> + */
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>> +
>>>> /**
>>>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>>>> *
>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>> --- a/include/linux/dma-buf-map.h
>>>> +++ b/include/linux/dma-buf-map.h
>>>> @@ -45,6 +45,12 @@
>>>> *
>>>> * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>> *
>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>> + *
>>>> + * .. code-block:: c
>>>> + *
>>>> + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>> + *
>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>> * dma_buf_map_is_null().
>>>> *
>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
>>>> map->is_iomem = false;
>>>> }
>>>> +/**
>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
>>>> + * @map: The dma-buf mapping structure
>>>> + * @vaddr_iomem: An I/O-memory address
>>>> + *
>>>> + * Sets the address and the I/O-memory flag.
>>>> + */
>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>> + void __iomem *vaddr_iomem)
>>>> +{
>>>> + map->vaddr_iomem = vaddr_iomem;
>>>> + map->is_iomem = true;
>>>> +}
>>>> +
>>>> /**
>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
>>>> equality
>>>> * @lhs: The dma-buf mapping structure
>
>
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
2020-10-15 12:38 ` Thomas Zimmermann
@ 2020-10-16 10:08 ` Sam Ravnborg
-1 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 10:08 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: maarten.lankhorst, mripard, airlied, daniel, alexander.deucher,
christian.koenig, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang, dri-devel, amd-gfx, virtualization, etnaviv,
linux-arm-kernel, linux-samsung-soc, lima, nouveau, spice-devel,
linux-rockchip, xen-devel, linux-media, linaro-mm-sig
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> To do framebuffer updates, one needs memcpy from system memory and a
> pointer-increment function. Add both interfaces with documentation.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Looks good.
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
> ---
> include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> 1 file changed, 62 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index 2e8bbecb5091..6ca0f304dda2 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -32,6 +32,14 @@
> * accessing the buffer. Use the returned instance and the helper functions
> * to access the buffer's memory in the correct way.
> *
> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> + * actually independent from the dma-buf infrastructure. When sharing buffers
> + * among devices, drivers have to know the location of the memory to access
> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> + * solves this problem for dma-buf and its users. If other drivers or
> + * sub-systems require similar functionality, the type could be generalized
> + * and moved to a more prominent header file.
> + *
> * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> * considered bad style. Rather than accessing its fields directly, use one
> * of the provided helper functions, or implement your own. For example,
> @@ -51,6 +59,14 @@
> *
> * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> *
> + * Instances of struct dma_buf_map do not have to be cleaned up, but
> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> + * always refer to system memory.
> + *
> + * .. code-block:: c
> + *
> + * dma_buf_map_clear(&map);
> + *
> * Test if a mapping is valid with either dma_buf_map_is_set() or
> * dma_buf_map_is_null().
> *
> @@ -73,17 +89,19 @@
> * if (dma_buf_map_is_equal(&sys_map, &io_map))
> * // always false
> *
> - * Instances of struct dma_buf_map do not have to be cleaned up, but
> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> - * always refer to system memory.
> + * A set up instance of struct dma_buf_map can be used to access or manipulate
> + * the buffer memory. Depending on the location of the memory, the provided
> + * helpers will pick the correct operations. Data can be copied into the memory
> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> + * dma_buf_map_incr().
> *
> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> - * actually independent from the dma-buf infrastructure. When sharing buffers
> - * among devices, drivers have to know the location of the memory to access
> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> - * solves this problem for dma-buf and its users. If other drivers or
> - * sub-systems require similar functionality, the type could be generalized
> - * and moved to a more prominent header file.
> + * .. code-block:: c
> + *
> + * const void *src = ...; // source buffer
> + * size_t len = ...; // length of src
> + *
> + * dma_buf_map_memcpy_to(&map, src, len);
> + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy
> */
>
> /**
> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
> }
> }
>
> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst: The dma-buf mapping structure
> + * @src: The source buffer
> + * @len: The number of bytes in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> + if (dst->is_iomem)
> + memcpy_toio(dst->vaddr_iomem, src, len);
> + else
> + memcpy(dst->vaddr, src, len);
> +}
> +
> +/**
> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> + * @map: The dma-buf mapping structure
> + * @incr: The number of bytes to increment
> + *
> + * Increments the address stored in a dma-buf mapping. Depending on the
> + * buffer's location, the correct value will be updated.
> + */
> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> +{
> + if (map->is_iomem)
> + map->vaddr_iomem += incr;
> + else
> + map->vaddr += incr;
> +}
> +
> #endif /* __DMA_BUF_MAP_H__ */
> --
> 2.28.0
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
@ 2020-10-16 10:08 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 10:08 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: maarten.lankhorst, mripard, airlied, daniel, alexander.deucher,
christian.koenig, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, mel
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> To do framebuffer updates, one needs memcpy from system memory and a
> pointer-increment function. Add both interfaces with documentation.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Looks good.
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
> ---
> include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> 1 file changed, 62 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index 2e8bbecb5091..6ca0f304dda2 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -32,6 +32,14 @@
> * accessing the buffer. Use the returned instance and the helper functions
> * to access the buffer's memory in the correct way.
> *
> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> + * actually independent from the dma-buf infrastructure. When sharing buffers
> + * among devices, drivers have to know the location of the memory to access
> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> + * solves this problem for dma-buf and its users. If other drivers or
> + * sub-systems require similar functionality, the type could be generalized
> + * and moved to a more prominent header file.
> + *
> * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> * considered bad style. Rather then accessing its fields directly, use one
> * of the provided helper functions, or implement your own. For example,
> @@ -51,6 +59,14 @@
> *
> * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> *
> + * Instances of struct dma_buf_map do not have to be cleaned up, but
> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> + * always refer to system memory.
> + *
> + * .. code-block:: c
> + *
> + * dma_buf_map_clear(&map);
> + *
> * Test if a mapping is valid with either dma_buf_map_is_set() or
> * dma_buf_map_is_null().
> *
> @@ -73,17 +89,19 @@
> * if (dma_buf_map_is_equal(&sys_map, &io_map))
> * // always false
> *
> - * Instances of struct dma_buf_map do not have to be cleaned up, but
> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> - * always refer to system memory.
> + * A set up instance of struct dma_buf_map can be used to access or manipulate
> + * the buffer memory. Depending on the location of the memory, the provided
> + * helpers will pick the correct operations. Data can be copied into the memory
> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> + * dma_buf_map_incr().
> *
> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> - * actually independent from the dma-buf infrastructure. When sharing buffers
> - * among devices, drivers have to know the location of the memory to access
> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> - * solves this problem for dma-buf and its users. If other drivers or
> - * sub-systems require similar functionality, the type could be generalized
> - * and moved to a more prominent header file.
> + * .. code-block:: c
> + *
> + * const void *src = ...; // source buffer
> + * size_t len = ...; // length of src
> + *
> + * dma_buf_map_memcpy_to(&map, src, len);
> + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy
> */
>
> /**
> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
> }
> }
>
> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst: The dma-buf mapping structure
> + * @src: The source buffer
> + * @len: The number of byte in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> + if (dst->is_iomem)
> + memcpy_toio(dst->vaddr_iomem, src, len);
> + else
> + memcpy(dst->vaddr, src, len);
> +}
> +
> +/**
> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> + * @map: The dma-buf mapping structure
> + * @incr: The number of bytes to increment
> + *
> + * Increments the address stored in a dma-buf mapping. Depending on the
> + * buffer's location, the correct value will be updated.
> + */
> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> +{
> + if (map->is_iomem)
> + map->vaddr_iomem += incr;
> + else
> + map->vaddr += incr;
> +}
> +
> #endif /* __DMA_BUF_MAP_H__ */
> --
> 2.28.0
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
@ 2020-10-16 10:08 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 10:08 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, kraxel, sumit.semwal,
emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, christian.koenig, l.stach
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> To do framebuffer updates, one needs memcpy from system memory and a
> pointer-increment function. Add both interfaces with documentation.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Looks good.
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
> ---
> include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> 1 file changed, 62 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index 2e8bbecb5091..6ca0f304dda2 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -32,6 +32,14 @@
> * accessing the buffer. Use the returned instance and the helper functions
> * to access the buffer's memory in the correct way.
> *
> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> + * actually independent from the dma-buf infrastructure. When sharing buffers
> + * among devices, drivers have to know the location of the memory to access
> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> + * solves this problem for dma-buf and its users. If other drivers or
> + * sub-systems require similar functionality, the type could be generalized
> + * and moved to a more prominent header file.
> + *
> * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> * considered bad style. Rather then accessing its fields directly, use one
> * of the provided helper functions, or implement your own. For example,
> @@ -51,6 +59,14 @@
> *
> * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> *
> + * Instances of struct dma_buf_map do not have to be cleaned up, but
> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> + * always refer to system memory.
> + *
> + * .. code-block:: c
> + *
> + * dma_buf_map_clear(&map);
> + *
> * Test if a mapping is valid with either dma_buf_map_is_set() or
> * dma_buf_map_is_null().
> *
> @@ -73,17 +89,19 @@
> * if (dma_buf_map_is_equal(&sys_map, &io_map))
> * // always false
> *
> - * Instances of struct dma_buf_map do not have to be cleaned up, but
> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> - * always refer to system memory.
> + * A set-up instance of struct dma_buf_map can be used to access or manipulate
> + * the buffer memory. Depending on the location of the memory, the provided
> + * helpers will pick the correct operations. Data can be copied into the memory
> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> + * dma_buf_map_incr().
> *
> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> - * actually independent from the dma-buf infrastructure. When sharing buffers
> - * among devices, drivers have to know the location of the memory to access
> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> - * solves this problem for dma-buf and its users. If other drivers or
> - * sub-systems require similar functionality, the type could be generalized
> - * and moved to a more prominent header file.
> + * .. code-block:: c
> + *
> + * const void *src = ...; // source buffer
> + * size_t len = ...; // length of src
> + *
> + * dma_buf_map_memcpy_to(&map, src, len);
> + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy
> */
>
> /**
> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
> }
> }
>
> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst: The dma-buf mapping structure
> + * @src: The source buffer
> + * @len: The number of bytes in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> + if (dst->is_iomem)
> + memcpy_toio(dst->vaddr_iomem, src, len);
> + else
> + memcpy(dst->vaddr, src, len);
> +}
> +
> +/**
> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> + * @map: The dma-buf mapping structure
> + * @incr: The number of bytes to increment
> + *
> + * Increments the address stored in a dma-buf mapping. Depending on the
> + * buffer's location, the correct value will be updated.
> + */
> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> +{
> + if (map->is_iomem)
> + map->vaddr_iomem += incr;
> + else
> + map->vaddr += incr;
> +}
> +
> #endif /* __DMA_BUF_MAP_H__ */
> --
> 2.28.0
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
2020-10-16 10:08 ` Sam Ravnborg
` (4 preceding siblings ...)
(?)
@ 2020-10-16 10:39 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 10:39 UTC (permalink / raw)
To: Sam Ravnborg
Cc: luben.tuikov, airlied, nouveau, dri-devel, chris, melissa.srw,
ray.huang, kraxel, emil.velikov, linux-samsung-soc, jy0922.shim,
lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, kgene, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, hdegoede, xen-devel,
virtualization, sean, apaneers, linux-arm-kernel, linaro-mm-sig,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, yuq825, alexander.deucher, linux-media,
christian.koenig
Hi Sam
On Fri, 16 Oct 2020 12:08:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> > To do framebuffer updates, one needs memcpy from system memory and a
> > pointer-increment function. Add both interfaces with documentation.
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> Looks good.
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Thanks. If you have the time, may I ask you to test this patchset on the
bochs/sparc64 system that failed with the original code?
Best regards
Thomas
>
> > ---
> > include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> > 1 file changed, 62 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index 2e8bbecb5091..6ca0f304dda2 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -32,6 +32,14 @@
> > * accessing the buffer. Use the returned instance and the helper
> > functions
> > * to access the buffer's memory in the correct way.
> > *
> > + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > + * actually independent of the dma-buf infrastructure. When sharing
> > buffers
> > + * among devices, drivers have to know the location of the memory to
> > access
> > + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > + * solves this problem for dma-buf and its users. If other drivers or
> > + * sub-systems require similar functionality, the type could be
> > generalized
> > + * and moved to a more prominent header file.
> > + *
> > * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> > * considered bad style. Rather than accessing its fields directly, use
> > one
> > * of the provided helper functions, or implement your own. For example,
> > @@ -51,6 +59,14 @@
> > *
> > * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > *
> > + * Instances of struct dma_buf_map do not have to be cleaned up, but
> > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > + * always refer to system memory.
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_clear(&map);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -73,17 +89,19 @@
> > * if (dma_buf_map_is_equal(&sys_map, &io_map))
> > * // always false
> > *
> > - * Instances of struct dma_buf_map do not have to be cleaned up, but
> > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > - * always refer to system memory.
> > + * A set-up instance of struct dma_buf_map can be used to access or
> > manipulate
> > + * the buffer memory. Depending on the location of the memory, the
> > provided
> > + * helpers will pick the correct operations. Data can be copied into the
> > memory
> > + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> > + * dma_buf_map_incr().
> > *
> > - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > - * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > - * among devices, drivers have to know the location of the memory to
> > access
> > - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > - * solves this problem for dma-buf and its users. If other drivers or
> > - * sub-systems require similar functionality, the type could be
> > generalized
> > - * and moved to a more prominent header file.
> > + * .. code-block:: c
> > + *
> > + * const void *src = ...; // source buffer
> > + * size_t len = ...; // length of src
> > + *
> > + * dma_buf_map_memcpy_to(&map, src, len);
> > + * dma_buf_map_incr(&map, len); // go to first byte after the
> > memcpy */
> >
> > /**
> > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct
> > dma_buf_map *map) }
> > }
> >
> > +/**
> > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> > + * @dst: The dma-buf mapping structure
> > + * @src: The source buffer
> > + * @len: The number of bytes in src
> > + *
> > + * Copies data into a dma-buf mapping. The source buffer is in system
> > + * memory. Depending on the buffer's location, the helper picks the
> > correct
> > + * method of accessing the memory.
> > + */
> > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const
> > void *src, size_t len) +{
> > + if (dst->is_iomem)
> > + memcpy_toio(dst->vaddr_iomem, src, len);
> > + else
> > + memcpy(dst->vaddr, src, len);
> > +}
> > +
> > +/**
> > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> > + * @map: The dma-buf mapping structure
> > + * @incr: The number of bytes to increment
> > + *
> > + * Increments the address stored in a dma-buf mapping. Depending on the
> > + * buffer's location, the correct value will be updated.
> > + */
> > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> > +{
> > + if (map->is_iomem)
> > + map->vaddr_iomem += incr;
> > + else
> > + map->vaddr += incr;
> > +}
> > +
> > #endif /* __DMA_BUF_MAP_H__ */
> > --
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
^ permalink raw reply [flat|nested] 195+ messages in thread
> > index 2e8bbecb5091..6ca0f304dda2 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -32,6 +32,14 @@
> > * accessing the buffer. Use the returned instance and the helper
> > functions
> > * to access the buffer's memory in the correct way.
> > *
> > + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > + * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > + * among devices, drivers have to know the location of the memory to
> > access
> > + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > + * solves this problem for dma-buf and its users. If other drivers or
> > + * sub-systems require similar functionality, the type could be
> > generalized
> > + * and moved to a more prominent header file.
> > + *
> > * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> > * considered bad style. Rather than accessing its fields directly, use
> > one
> > * of the provided helper functions, or implement your own. For example,
> > @@ -51,6 +59,14 @@
> > *
> > * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > *
> > + * Instances of struct dma_buf_map do not have to be cleaned up, but
> > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > + * always refer to system memory.
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_clear(&map);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -73,17 +89,19 @@
> > * if (dma_buf_map_is_equal(&sys_map, &io_map))
> > * // always false
> > *
> > - * Instances of struct dma_buf_map do not have to be cleaned up, but
> > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > - * always refer to system memory.
> > + * A set up instance of struct dma_buf_map can be used to access or
> > manipulate
> > + * the buffer memory. Depending on the location of the memory, the
> > provided
> > + * helpers will pick the correct operations. Data can be copied into the
> > memory
> > + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> > + * dma_buf_map_incr().
> > *
> > - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > - * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > - * among devices, drivers have to know the location of the memory to
> > access
> > - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > - * solves this problem for dma-buf and its users. If other drivers or
> > - * sub-systems require similar functionality, the type could be
> > generalized
> > - * and moved to a more prominent header file.
> > + * .. code-block:: c
> > + *
> > + * const void *src = ...; // source buffer
> > + * size_t len = ...; // length of src
> > + *
> > + * dma_buf_map_memcpy_to(&map, src, len);
> > + * dma_buf_map_incr(&map, len); // go to first byte after the
> > memcpy */
> >
> > /**
> > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct
> > dma_buf_map *map) }
> > }
> >
> > +/**
> > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> > + * @dst: The dma-buf mapping structure
> > + * @src: The source buffer
> > + * @len: The number of bytes in src
> > + *
> > + * Copies data into a dma-buf mapping. The source buffer is in system
> > + * memory. Depending on the buffer's location, the helper picks the
> > correct
> > + * method of accessing the memory.
> > + */
> > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const
> > void *src, size_t len) +{
> > + if (dst->is_iomem)
> > + memcpy_toio(dst->vaddr_iomem, src, len);
> > + else
> > + memcpy(dst->vaddr, src, len);
> > +}
> > +
> > +/**
> > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> > + * @map: The dma-buf mapping structure
> > + * @incr: The number of bytes to increment
> > + *
> > + * Increments the address stored in a dma-buf mapping. Depending on the
> > + * buffer's location, the correct value will be updated.
> > + */
> > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> > +{
> > + if (map->is_iomem)
> > + map->vaddr_iomem += incr;
> > + else
> > + map->vaddr += incr;
> > +}
> > +
> > #endif /* __DMA_BUF_MAP_H__ */
> > --
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
@ 2020-10-16 10:39 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 10:39 UTC (permalink / raw)
To: Sam Ravnborg
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
yuq825, emil.velikov, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
luben.tuikov, alyssa.rosenzweig, linux+etnaviv, spice-devel,
bskeggs, etnaviv, linaro-mm-sig, hdegoede, xen-devel,
virtualization, sean, apaneers, linux-arm-kernel, amd-gfx,
tomeu.vizoso, sw0312.kim, hjc, kyungmin.park, miaoqinglang,
kgene, alexander.deucher, linux-media, christian.koenig
Hi Sam
On Fri, 16 Oct 2020 12:08:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> > To do framebuffer updates, one needs memcpy from system memory and a
> > pointer-increment function. Add both interfaces with documentation.
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> Looks good.
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Thanks. If you have the time, may I ask you to test this patchset on the
bochs/sparc64 system that failed with the original code?
Best regards
Thomas
>
> > ---
> > include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> > 1 file changed, 62 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index 2e8bbecb5091..6ca0f304dda2 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -32,6 +32,14 @@
> > * accessing the buffer. Use the returned instance and the helper
> > functions
> > * to access the buffer's memory in the correct way.
> > *
> > + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > + * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > + * among devices, drivers have to know the location of the memory to
> > access
> > + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > + * solves this problem for dma-buf and its users. If other drivers or
> > + * sub-systems require similar functionality, the type could be
> > generalized
> > + * and moved to a more prominent header file.
> > + *
> > * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> > * considered bad style. Rather then accessing its fields directly, use
> > one
> > * of the provided helper functions, or implement your own. For example,
> > @@ -51,6 +59,14 @@
> > *
> > * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > *
> > + * Instances of struct dma_buf_map do not have to be cleaned up, but
> > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > + * always refer to system memory.
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_clear(&map);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -73,17 +89,19 @@
> > * if (dma_buf_map_is_equal(&sys_map, &io_map))
> > * // always false
> > *
> > - * Instances of struct dma_buf_map do not have to be cleaned up, but
> > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > - * always refer to system memory.
> > + * A set up instance of struct dma_buf_map can be used to access or
> > manipulate
> > + * the buffer memory. Depending on the location of the memory, the
> > provided
> > + * helpers will pick the correct operations. Data can be copied into the
> > memory
> > + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> > + * dma_buf_map_incr().
> > *
> > - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > - * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > - * among devices, drivers have to know the location of the memory to
> > access
> > - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > - * solves this problem for dma-buf and its users. If other drivers or
> > - * sub-systems require similar functionality, the type could be
> > generalized
> > - * and moved to a more prominent header file.
> > + * .. code-block:: c
> > + *
> > + * const void *src = ...; // source buffer
> > + * size_t len = ...; // length of src
> > + *
> > + * dma_buf_map_memcpy_to(&map, src, len);
> > + * dma_buf_map_incr(&map, len); // go to first byte after the
> > memcpy */
> >
> > /**
> > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct
> > dma_buf_map *map) }
> > }
> >
> > +/**
> > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> > + * @dst: The dma-buf mapping structure
> > + * @src: The source buffer
> > + * @len: The number of byte in src
> > + *
> > + * Copies data into a dma-buf mapping. The source buffer is in system
> > + * memory. Depending on the buffer's location, the helper picks the
> > correct
> > + * method of accessing the memory.
> > + */
> > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const
> > void *src, size_t len) +{
> > + if (dst->is_iomem)
> > + memcpy_toio(dst->vaddr_iomem, src, len);
> > + else
> > + memcpy(dst->vaddr, src, len);
> > +}
> > +
> > +/**
> > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> > + * @map: The dma-buf mapping structure
> > + * @incr: The number of bytes to increment
> > + *
> > + * Increments the address stored in a dma-buf mapping. Depending on the
> > + * buffer's location, the correct value will be updated.
> > + */
> > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> > +{
> > + if (map->is_iomem)
> > + map->vaddr_iomem += incr;
> > + else
> > + map->vaddr += incr;
> > +}
> > +
> > #endif /* __DMA_BUF_MAP_H__ */
> > --
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
@ 2020-10-16 10:39 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 10:39 UTC (permalink / raw)
To: Sam Ravnborg
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
kraxel, yuq825, emil.velikov, linux-samsung-soc, jy0922.shim,
lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, luben.tuikov, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, linaro-mm-sig, hdegoede,
xen-devel, virtualization, sean, apaneers, linux-arm-kernel,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, kgene, alexander.deucher, linux-media,
christian.koenig
Hi Sam
On Fri, 16 Oct 2020 12:08:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> > To do framebuffer updates, one needs memcpy from system memory and a
> > pointer-increment function. Add both interfaces with documentation.
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> Looks good.
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Thanks. If you have the time, may I ask you to test this patchset on the
bochs/sparc64 system that failed with the original code?
Best regards
Thomas
>
> > ---
> > include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> > 1 file changed, 62 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index 2e8bbecb5091..6ca0f304dda2 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -32,6 +32,14 @@
> > * accessing the buffer. Use the returned instance and the helper
> > functions
> > * to access the buffer's memory in the correct way.
> > *
> > + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > + * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > + * among devices, drivers have to know the location of the memory to
> > access
> > + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > + * solves this problem for dma-buf and its users. If other drivers or
> > + * sub-systems require similar functionality, the type could be
> > generalized
> > + * and moved to a more prominent header file.
> > + *
> > * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> > * considered bad style. Rather then accessing its fields directly, use
> > one
> > * of the provided helper functions, or implement your own. For example,
> > @@ -51,6 +59,14 @@
> > *
> > * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > *
> > + * Instances of struct dma_buf_map do not have to be cleaned up, but
> > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > + * always refer to system memory.
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_clear(&map);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -73,17 +89,19 @@
> > * if (dma_buf_map_is_equal(&sys_map, &io_map))
> > * // always false
> > *
> > - * Instances of struct dma_buf_map do not have to be cleaned up, but
> > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > - * always refer to system memory.
> > + * A set up instance of struct dma_buf_map can be used to access or
> > manipulate
> > + * the buffer memory. Depending on the location of the memory, the
> > provided
> > + * helpers will pick the correct operations. Data can be copied into the
> > memory
> > + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> > + * dma_buf_map_incr().
> > *
> > - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > - * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > - * among devices, drivers have to know the location of the memory to
> > access
> > - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > - * solves this problem for dma-buf and its users. If other drivers or
> > - * sub-systems require similar functionality, the type could be
> > generalized
> > - * and moved to a more prominent header file.
> > + * .. code-block:: c
> > + *
> > + * const void *src = ...; // source buffer
> > + * size_t len = ...; // length of src
> > + *
> > + * dma_buf_map_memcpy_to(&map, src, len);
> > + * dma_buf_map_incr(&map, len); // go to first byte after the
> > memcpy */
> >
> > /**
> > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct
> > dma_buf_map *map) }
> > }
> >
> > +/**
> > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> > + * @dst: The dma-buf mapping structure
> > + * @src: The source buffer
> > + * @len: The number of byte in src
> > + *
> > + * Copies data into a dma-buf mapping. The source buffer is in system
> > + * memory. Depending on the buffer's location, the helper picks the
> > correct
> > + * method of accessing the memory.
> > + */
> > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const
> > void *src, size_t len) +{
> > + if (dst->is_iomem)
> > + memcpy_toio(dst->vaddr_iomem, src, len);
> > + else
> > + memcpy(dst->vaddr, src, len);
> > +}
> > +
> > +/**
> > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> > + * @map: The dma-buf mapping structure
> > + * @incr: The number of bytes to increment
> > + *
> > + * Increments the address stored in a dma-buf mapping. Depending on the
> > + * buffer's location, the correct value will be updated.
> > + */
> > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> > +{
> > + if (map->is_iomem)
> > + map->vaddr_iomem += incr;
> > + else
> > + map->vaddr += incr;
> > +}
> > +
> > #endif /* __DMA_BUF_MAP_H__ */
> > --
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
@ 2020-10-16 10:39 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 10:39 UTC (permalink / raw)
To: Sam Ravnborg
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
kraxel, yuq825, emil.velikov, linux-samsung-soc, jy0922.shim,
lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, luben.tuikov, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, linaro-mm-sig, hdegoede,
xen-devel, virtualization, sean, apaneers, linux-arm-kernel,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, kgene, alexander.deucher, linux-media,
christian.koenig
Hi Sam
On Fri, 16 Oct 2020 12:08:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> > To do framebuffer updates, one needs memcpy from system memory and a
> > pointer-increment function. Add both interfaces with documentation.
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> Looks good.
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Thanks. If you have the time, may I ask you to test this patchset on the
bochs/sparc64 system that failed with the original code?
Best regards
Thomas
>
> > ---
> > include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> > 1 file changed, 62 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index 2e8bbecb5091..6ca0f304dda2 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -32,6 +32,14 @@
> > * accessing the buffer. Use the returned instance and the helper
> > functions
> > * to access the buffer's memory in the correct way.
> > *
> > + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > + * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > + * among devices, drivers have to know the location of the memory to
> > access
> > + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > + * solves this problem for dma-buf and its users. If other drivers or
> > + * sub-systems require similar functionality, the type could be
> > generalized
> > + * and moved to a more prominent header file.
> > + *
> > * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> > * considered bad style. Rather then accessing its fields directly, use
> > one
> > * of the provided helper functions, or implement your own. For example,
> > @@ -51,6 +59,14 @@
> > *
> > * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > *
> > + * Instances of struct dma_buf_map do not have to be cleaned up, but
> > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > + * always refer to system memory.
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_clear(&map);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -73,17 +89,19 @@
> > * if (dma_buf_map_is_equal(&sys_map, &io_map))
> > * // always false
> > *
> > - * Instances of struct dma_buf_map do not have to be cleaned up, but
> > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > - * always refer to system memory.
> > + * A set up instance of struct dma_buf_map can be used to access or
> > manipulate
> > + * the buffer memory. Depending on the location of the memory, the
> > provided
> > + * helpers will pick the correct operations. Data can be copied into the
> > memory
> > + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> > + * dma_buf_map_incr().
> > *
> > - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > - * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > - * among devices, drivers have to know the location of the memory to
> > access
> > - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > - * solves this problem for dma-buf and its users. If other drivers or
> > - * sub-systems require similar functionality, the type could be
> > generalized
> > - * and moved to a more prominent header file.
> > + * .. code-block:: c
> > + *
> > + * const void *src = ...; // source buffer
> > + * size_t len = ...; // length of src
> > + *
> > + * dma_buf_map_memcpy_to(&map, src, len);
> > + * dma_buf_map_incr(&map, len); // go to first byte after the
> > memcpy */
> >
> > /**
> > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct
> > dma_buf_map *map) }
> > }
> >
> > +/**
> > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> > + * @dst: The dma-buf mapping structure
> > + * @src: The source buffer
> > + * @len: The number of byte in src
> > + *
> > + * Copies data into a dma-buf mapping. The source buffer is in system
> > + * memory. Depending on the buffer's location, the helper picks the
> > correct
> > + * method of accessing the memory.
> > + */
> > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const
> > void *src, size_t len) +{
> > + if (dst->is_iomem)
> > + memcpy_toio(dst->vaddr_iomem, src, len);
> > + else
> > + memcpy(dst->vaddr, src, len);
> > +}
> > +
> > +/**
> > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> > + * @map: The dma-buf mapping structure
> > + * @incr: The number of bytes to increment
> > + *
> > + * Increments the address stored in a dma-buf mapping. Depending on the
> > + * buffer's location, the correct value will be updated.
> > + */
> > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> > +{
> > + if (map->is_iomem)
> > + map->vaddr_iomem += incr;
> > + else
> > + map->vaddr += incr;
> > +}
> > +
> > #endif /* __DMA_BUF_MAP_H__ */
> > --
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
@ 2020-10-16 10:39 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 10:39 UTC (permalink / raw)
To: Sam Ravnborg
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
kraxel, yuq825, emil.velikov, linux-samsung-soc, jy0922.shim,
lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, luben.tuikov, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, linaro-mm-sig, hdegoede,
xen-devel, virtualization, sean, apaneers, linux-arm-kernel,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, kgene, alexander.deucher, linux-media,
christian.koenig
Hi Sam
On Fri, 16 Oct 2020 12:08:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> > To do framebuffer updates, one needs memcpy from system memory and a
> > pointer-increment function. Add both interfaces with documentation.
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> Looks good.
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Thanks. If you have the time, may I ask you to test this patchset on the
bochs/sparc64 system that failed with the original code?
Best regards
Thomas
>
> > ---
> > include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> > 1 file changed, 62 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index 2e8bbecb5091..6ca0f304dda2 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -32,6 +32,14 @@
> > * accessing the buffer. Use the returned instance and the helper
> > functions
> > * to access the buffer's memory in the correct way.
> > *
> > + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > + * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > + * among devices, drivers have to know the location of the memory to
> > access
> > + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > + * solves this problem for dma-buf and its users. If other drivers or
> > + * sub-systems require similar functionality, the type could be
> > generalized
> > + * and moved to a more prominent header file.
> > + *
> > * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> > * considered bad style. Rather then accessing its fields directly, use
> > one
> > * of the provided helper functions, or implement your own. For example,
> > @@ -51,6 +59,14 @@
> > *
> > * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > *
> > + * Instances of struct dma_buf_map do not have to be cleaned up, but
> > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > + * always refer to system memory.
> > + *
> > + * .. code-block:: c
> > + *
> > + * dma_buf_map_clear(&map);
> > + *
> > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > * dma_buf_map_is_null().
> > *
> > @@ -73,17 +89,19 @@
> > * if (dma_buf_map_is_equal(&sys_map, &io_map))
> > * // always false
> > *
> > - * Instances of struct dma_buf_map do not have to be cleaned up, but
> > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > - * always refer to system memory.
> > + * A set up instance of struct dma_buf_map can be used to access or
> > manipulate
> > + * the buffer memory. Depending on the location of the memory, the
> > provided
> > + * helpers will pick the correct operations. Data can be copied into the
> > memory
> > + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> > + * dma_buf_map_incr().
> > *
> > - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers
> > are
> > - * actually independent from the dma-buf infrastructure. When sharing
> > buffers
> > - * among devices, drivers have to know the location of the memory to
> > access
> > - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > - * solves this problem for dma-buf and its users. If other drivers or
> > - * sub-systems require similar functionality, the type could be
> > generalized
> > - * and moved to a more prominent header file.
> > + * .. code-block:: c
> > + *
> > + * const void *src = ...; // source buffer
> > + * size_t len = ...; // length of src
> > + *
> > + * dma_buf_map_memcpy_to(&map, src, len);
> > + * dma_buf_map_incr(&map, len); // go to first byte after the
> > memcpy */
> >
> > /**
> > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct
> > dma_buf_map *map) }
> > }
> >
> > +/**
> > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> > + * @dst: The dma-buf mapping structure
> > + * @src: The source buffer
> > + * @len: The number of byte in src
> > + *
> > + * Copies data into a dma-buf mapping. The source buffer is in system
> > + * memory. Depending on the buffer's location, the helper picks the
> > correct
> > + * method of accessing the memory.
> > + */
> > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const
> > void *src, size_t len) +{
> > + if (dst->is_iomem)
> > + memcpy_toio(dst->vaddr_iomem, src, len);
> > + else
> > + memcpy(dst->vaddr, src, len);
> > +}
> > +
> > +/**
> > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> > + * @map: The dma-buf mapping structure
> > + * @incr: The number of bytes to increment
> > + *
> > + * Increments the address stored in a dma-buf mapping. Depending on the
> > + * buffer's location, the correct value will be updated.
> > + */
> > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> > +{
> > + if (map->is_iomem)
> > + map->vaddr_iomem += incr;
> > + else
> > + map->vaddr += incr;
> > +}
> > +
> > #endif /* __DMA_BUF_MAP_H__ */
> > --
> > 2.28.0
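To see how the two helpers quoted above compose (the doc-comment example copies into the mapping and then advances past the copied bytes), here is a minimal userspace sketch of the same pattern. The struct and both functions are simplified stand-ins for the kernel's dma-buf-map.h, not the real interfaces: plain memcpy() replaces memcpy_toio() and the __iomem annotation is dropped, so this only illustrates the control flow.

```c
#include <stddef.h>
#include <string.h>

/*
 * Simplified userspace model of struct dma_buf_map: one address that is
 * either a system-memory pointer or an I/O-memory pointer, plus a flag
 * saying which one is valid. In the kernel, vaddr_iomem is void __iomem *.
 */
struct dma_buf_map {
	union {
		void *vaddr;        /* system memory */
		void *vaddr_iomem;  /* I/O memory */
	};
	int is_iomem;
};

/* Copy from system memory into the mapping, picking the access method. */
void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr_iomem, src, len); /* kernel: memcpy_toio() */
	else
		memcpy(dst->vaddr, src, len);
}

/* Advance whichever address is active by incr bytes. */
void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
{
	if (map->is_iomem)
		map->vaddr_iomem = (char *)map->vaddr_iomem + incr;
	else
		map->vaddr = (char *)map->vaddr + incr;
}
```

Callers never branch on the memory type themselves; they hand the mapping to the helpers, which is exactly what the fbdev blit loop later in this thread relies on.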
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
2020-10-15 12:38 ` Thomas Zimmermann
` (3 preceding siblings ...)
(?)
@ 2020-10-16 10:58 ` Sam Ravnborg
-1 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 10:58 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: maarten.lankhorst, mripard, airlied, daniel, alexander.deucher,
christian.koenig, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang, dri-devel, amd-gfx, virtualization, etnaviv,
linux-arm-kernel, linux-samsung-soc, lima, nouveau, spice-devel,
linux-rockchip, xen-devel, linux-media, linaro-mm-sig
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
>
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions.
>
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
>
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is not required any longer. The patch removes the
> respective code from both bochs and fbdev.
>
> v4:
> * move dma_buf_map changes into separate patch (Daniel)
> * TODO list: comment on fbdev updates (Daniel)
I have been offline for a while so have not followed all the threads on
this, so my comments below may well be addressed but I failed to see
it.
If the point about fb_sync is already addressed/considered then:
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> Documentation/gpu/todo.rst | 19 ++-
> drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> include/drm/drm_mode_config.h | 12 --
> 4 files changed, 220 insertions(+), 29 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> ------------------------------------------------
>
> Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>
> Contact: Maintainer of the driver you plan to convert
>
> Level: Intermediate
>
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> -----------------------------------------------------------------
>
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> bochs->dev->mode_config.preferred_depth = 24;
> bochs->dev->mode_config.prefer_shadow = 0;
> bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> - bochs->dev->mode_config.fbdev_use_iomem = true;
> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>
> bochs->dev->mode_config.funcs = &bochs_mode_funcs;
Good to see this workaround gone again!
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..462b0c130ebb 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> }
>
> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> - struct drm_clip_rect *clip)
> + struct drm_clip_rect *clip,
> + struct dma_buf_map *dst)
> {
> struct drm_framebuffer *fb = fb_helper->fb;
> unsigned int cpp = fb->format->cpp[0];
> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> void *src = fb_helper->fbdev->screen_buffer + offset;
> - void *dst = fb_helper->buffer->map.vaddr + offset;
> size_t len = (clip->x2 - clip->x1) * cpp;
> unsigned int y;
>
> - for (y = clip->y1; y < clip->y2; y++) {
> - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> - memcpy(dst, src, len);
> - else
> - memcpy_toio((void __iomem *)dst, src, len);
> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>
> + for (y = clip->y1; y < clip->y2; y++) {
> + dma_buf_map_memcpy_to(dst, src, len);
> + dma_buf_map_incr(dst, fb->pitches[0]);
> src += fb->pitches[0];
> - dst += fb->pitches[0];
> }
> }
>
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> ret = drm_client_buffer_vmap(helper->buffer, &map);
> if (ret)
> return;
> - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> }
> +
> if (helper->fb->funcs->dirty)
> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> &clip_copy, 1);
> @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> }
> EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>
So far everything looks good.
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *dst;
> + u8 __iomem *src;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p >= total_size)
> + return 0;
> +
> + if (count >= total_size)
> + count = total_size;
> +
> + if (count + p > total_size)
> + count = total_size - p;
> +
> + src = (u8 __iomem *)(info->screen_base + p);
screen_base is a char __iomem * - so this cast looks semi-redundant.
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + dst = kmalloc(alloc_size, GFP_KERNEL);
> + if (!dst)
> + return -ENOMEM;
> +
Same comment as below about fb_sync.
> + while (count) {
> + c = min(count, alloc_size);
> +
> + memcpy_fromio(dst, src, c);
> + if (copy_to_user(buf, dst, c)) {
> + err = -EFAULT;
> + break;
> + }
> +
> + src += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(dst);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *src;
> + u8 __iomem *dst;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p > total_size)
> + return -EFBIG;
> +
> + if (count > total_size) {
> + err = -EFBIG;
> + count = total_size;
> + }
> +
> + if (count + p > total_size) {
> + /*
> + * The framebuffer is too small. We do the
> + * copy operation, but return an error code
> + * afterwards. Taken from fbdev.
> + */
> + if (!err)
> + err = -ENOSPC;
> + count = total_size - p;
> + }
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + src = kmalloc(alloc_size, GFP_KERNEL);
> + if (!src)
> + return -ENOMEM;
> +
> + dst = (u8 __iomem *)(info->screen_base + p);
> +
The fbdev variant calls the fb_sync callback here.
nouveau and gma500 implement the fb_sync callback - but no one else.
> + while (count) {
> + c = min(count, alloc_size);
> +
> + if (copy_from_user(src, buf, c)) {
> + err = -EFAULT;
> + break;
> + }
> + memcpy_toio(dst, src, c);
When we rewrite this part to use dma_buf_map_memcpy_to(), can we then
merge the two variants of helper_{sys,cfb}_write()?
Which is part of the todo - so OK
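A rough illustration of the point above: once both paths go through struct dma_buf_map, the sys and cfb write loops could collapse into a single chunked-copy helper. Everything below is a hypothetical userspace sketch — the reduced struct, the fb_write_merged() name, and modeling copy_from_user() as a plain memcpy() are illustrative stand-ins, not the kernel interfaces.

```c
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

/* Simplified stand-in for struct dma_buf_map (system-memory case only). */
struct dma_buf_map {
	void *vaddr;
	int is_iomem;
};

/* Stand-in for the kernel helper; the real one uses memcpy_toio() for I/O. */
void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
{
	memcpy(dst->vaddr, src, len);
}

void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
{
	map->vaddr = (char *)map->vaddr + incr;
}

/*
 * Hypothetical merged write path: one chunked copy loop serves both the
 * sys and cfb variants because dma_buf_map_memcpy_to() hides the access
 * method. The bounce buffer mirrors the kmalloc()'d buffer in the patch;
 * copy_from_user() is modeled as memcpy(). chunk must not exceed
 * sizeof(tmp).
 */
ssize_t fb_write_merged(struct dma_buf_map *dst, const char *ubuf,
			size_t count, size_t chunk)
{
	char tmp[64];
	ssize_t ret = 0;

	while (count) {
		size_t c = count < chunk ? count : chunk;

		memcpy(tmp, ubuf, c);           /* kernel: copy_from_user() */
		dma_buf_map_memcpy_to(dst, tmp, c);
		dma_buf_map_incr(dst, c);       /* advance destination */
		ubuf += c;
		ret += c;
		count -= c;
	}
	return ret;
}
```

The error handling, fb_sync call, and bounds checks of the real helpers are deliberately omitted here; only the shared copy loop is shown.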
> +
> + dst += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(src);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> /**
> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> return -ENODEV;
> }
>
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_read(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_write(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> + const struct fb_fillrect *rect)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_fillrect(info, rect);
> + else
> + drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> + const struct fb_copyarea *area)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_copyarea(info, area);
> + else
> + drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> + const struct fb_image *image)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_imageblit(info, image);
> + else
> + drm_fb_helper_cfb_imageblit(info, image);
> +}
> +
> static const struct fb_ops drm_fbdev_fb_ops = {
> .owner = THIS_MODULE,
> DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> .fb_release = drm_fbdev_fb_release,
> .fb_destroy = drm_fbdev_fb_destroy,
> .fb_mmap = drm_fbdev_fb_mmap,
> - .fb_read = drm_fb_helper_sys_read,
> - .fb_write = drm_fb_helper_sys_write,
> - .fb_fillrect = drm_fb_helper_sys_fillrect,
> - .fb_copyarea = drm_fb_helper_sys_copyarea,
> - .fb_imageblit = drm_fb_helper_sys_imageblit,
> + .fb_read = drm_fbdev_fb_read,
> + .fb_write = drm_fbdev_fb_write,
> + .fb_fillrect = drm_fbdev_fb_fillrect,
> + .fb_copyarea = drm_fbdev_fb_copyarea,
> + .fb_imageblit = drm_fbdev_fb_imageblit,
> };
>
> static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
> */
> bool prefer_shadow_fbdev;
>
> - /**
> - * @fbdev_use_iomem:
> - *
> - * Set to true if framebuffer reside in iomem.
> - * When set to true memcpy_toio() is used when copying the framebuffer in
> - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> - *
> - * FIXME: This should be replaced with a per-mapping is_iomem
> - * flag (like ttm does), and then used everywhere in fbdev code.
> - */
> - bool fbdev_use_iomem;
> -
> /**
> * @quirk_addfb_prefer_xbgr_30bpp:
> *
> --
> 2.28.0
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 10:58 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 10:58 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, kraxel, sumit.semwal,
emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, christian.koenig, l.stach
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
>
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the rsp
> fb_sys_ of fb_cfb_ functions.
>
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
>
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is not required any longer. The patch removes the rsp
> code from both, bochs and fbdev.
>
> v4:
> * move dma_buf_map changes into separate patch (Daniel)
> * TODO list: comment on fbdev updates (Daniel)
I have been offline for a while so have not followed all the threads on
this. So may comments below may well be addressed but I failed to see
it.
If the point about fb_sync is already addressed/considered then:
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> Documentation/gpu/todo.rst | 19 ++-
> drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> include/drm/drm_mode_config.h | 12 --
> 4 files changed, 220 insertions(+), 29 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> ------------------------------------------------
>
> Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported
> +as well.
>
> Contact: Maintainer of the driver you plan to convert
>
> Level: Intermediate
>
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> -----------------------------------------------------------------
>
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> bochs->dev->mode_config.preferred_depth = 24;
> bochs->dev->mode_config.prefer_shadow = 0;
> bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> - bochs->dev->mode_config.fbdev_use_iomem = true;
> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>
> bochs->dev->mode_config.funcs = &bochs_mode_funcs;
Good to see this workaround gone again!
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..462b0c130ebb 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> }
>
> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> - struct drm_clip_rect *clip)
> + struct drm_clip_rect *clip,
> + struct dma_buf_map *dst)
> {
> struct drm_framebuffer *fb = fb_helper->fb;
> unsigned int cpp = fb->format->cpp[0];
> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> void *src = fb_helper->fbdev->screen_buffer + offset;
> - void *dst = fb_helper->buffer->map.vaddr + offset;
> size_t len = (clip->x2 - clip->x1) * cpp;
> unsigned int y;
>
> - for (y = clip->y1; y < clip->y2; y++) {
> - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> - memcpy(dst, src, len);
> - else
> - memcpy_toio((void __iomem *)dst, src, len);
> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>
> + for (y = clip->y1; y < clip->y2; y++) {
> + dma_buf_map_memcpy_to(dst, src, len);
> + dma_buf_map_incr(dst, fb->pitches[0]);
> src += fb->pitches[0];
> - dst += fb->pitches[0];
> }
> }
>
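[Editorial note: for readers unfamiliar with the new interface, the core idea of struct dma_buf_map is to carry an is_iomem flag next to the address, so callers no longer branch on mode_config state. Below is a minimal user-space sketch of that idea; the names sketch_* are hypothetical stand-ins, and memcpy_toio() is modelled with a plain memcpy purely for illustration (the real type and helpers live in include/linux/dma-buf-map.h).]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for struct dma_buf_map: one address, one flag. */
struct sketch_map {
	void *vaddr;   /* system or (in the kernel) __iomem address */
	int is_iomem;  /* selects the access method */
};

/* Models dma_buf_map_incr(): advance the mapping by 'incr' bytes. */
static void sketch_map_incr(struct sketch_map *map, size_t incr)
{
	map->vaddr = (char *)map->vaddr + incr;
}

/* Models dma_buf_map_memcpy_to(): pick the copy routine per flag.
 * In the kernel, the is_iomem branch would call memcpy_toio(). */
static void sketch_map_memcpy_to(struct sketch_map *map, const void *src,
				 size_t len)
{
	if (map->is_iomem)
		memcpy(map->vaddr, src, len); /* stands in for memcpy_toio() */
	else
		memcpy(map->vaddr, src, len);
}

/* Per-line blit over a clip rectangle, shaped like
 * drm_fb_helper_dirty_blit_real() in the patch above. */
static void sketch_blit(struct sketch_map *dst, const char *src,
			size_t offset, size_t len, size_t pitch, int lines)
{
	int y;

	sketch_map_incr(dst, offset); /* go to first pixel within clip rect */
	for (y = 0; y < lines; y++) {
		sketch_map_memcpy_to(dst, src, len);
		sketch_map_incr(dst, pitch);
		src += pitch;
	}
}
```

The point of the shape is that the copy-method decision lives with the mapping, not with the caller, which is exactly what lets the fbdev_use_iomem flag go away.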
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> ret = drm_client_buffer_vmap(helper->buffer, &map);
> if (ret)
> return;
> - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> }
> +
> if (helper->fb->funcs->dirty)
> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> &clip_copy, 1);
> @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> }
> EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>
So far everything looks good.
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *dst;
> + u8 __iomem *src;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p >= total_size)
> + return 0;
> +
> + if (count >= total_size)
> + count = total_size;
> +
> + if (count + p > total_size)
> + count = total_size - p;
> +
> + src = (u8 __iomem *)(info->screen_base + p);
screen_base is already a char __iomem * - so this cast looks largely redundant.
(The cast does change the pointee type from char to u8, but the __iomem qualifier is already there.)
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + dst = kmalloc(alloc_size, GFP_KERNEL);
> + if (!dst)
> + return -ENOMEM;
> +
Same comment as below about fb_sync.
> + while (count) {
> + c = min(count, alloc_size);
> +
> + memcpy_fromio(dst, src, c);
> + if (copy_to_user(buf, dst, c)) {
> + err = -EFAULT;
> + break;
> + }
> +
> + src += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(dst);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
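[Editorial note: the read path above follows the classic fbdev pattern of staging through a page-sized bounce buffer, because copy_to_user() cannot read from I/O memory directly. A hedged user-space model of that loop follows; sketch_fb_read() is a hypothetical name, and memcpy_fromio()/copy_to_user() are replaced by plain memcpy for illustration.]

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define SKETCH_CHUNK 4096 /* stands in for PAGE_SIZE */

/* Model of the cfb_read loop: copy 'count' bytes starting at *ppos from
 * 'screen' (size 'total') into 'buf', through a bounce buffer. Returns
 * the number of bytes copied, or -1 on allocation failure. */
static long sketch_fb_read(const unsigned char *screen, unsigned long total,
			   unsigned char *buf, size_t count,
			   unsigned long *ppos)
{
	unsigned long p = *ppos;
	size_t alloc_size, c;
	unsigned char *bounce;
	long ret = 0;

	if (p >= total)
		return 0;
	if (count >= total)
		count = total;
	if (count + p > total)
		count = total - p;

	alloc_size = count < SKETCH_CHUNK ? count : SKETCH_CHUNK;
	bounce = malloc(alloc_size);
	if (!bounce)
		return -1;

	while (count) {
		c = count < alloc_size ? count : alloc_size;
		memcpy(bounce, screen + p + ret, c); /* models memcpy_fromio() */
		memcpy(buf + ret, bounce, c);        /* models copy_to_user() */
		*ppos += c;
		ret += c;
		count -= c;
	}
	free(bounce);
	return ret;
}
```

Requests larger than the framebuffer are silently clamped; only the clamped byte count is reported back, matching the quoted code.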
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *src;
> + u8 __iomem *dst;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p > total_size)
> + return -EFBIG;
> +
> + if (count > total_size) {
> + err = -EFBIG;
> + count = total_size;
> + }
> +
> + if (count + p > total_size) {
> + /*
> + * The framebuffer is too small. We do the
> + * copy operation, but return an error code
> + * afterwards. Taken from fbdev.
> + */
> + if (!err)
> + err = -ENOSPC;
> + count = total_size - p;
> + }
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + src = kmalloc(alloc_size, GFP_KERNEL);
> + if (!src)
> + return -ENOMEM;
> +
> + dst = (u8 __iomem *)(info->screen_base + p);
> +
The fbdev variant calls the fb_sync callback here.
nouveau and gma500 implement the fb_sync callback - but no one else does.
> + while (count) {
> + c = min(count, alloc_size);
> +
> + if (copy_from_user(src, buf, c)) {
> + err = -EFAULT;
> + break;
> + }
> + memcpy_toio(dst, src, c);
When we rewrite this part to use dma_buf_map_memcpy_to(), can we then
merge the two variants of helper_{sys,cfb}_read()?
That is part of the TODO - so OK.
> +
> + dst += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(src);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
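[Editorial note: the write path's bounds handling is subtle and worth spelling out: an offset past the end fails with -EFBIG before anything is copied, while a write that only partially fits is performed and then reported as -ENOSPC - behaviour inherited from fbdev, as the quoted comment says. A small model of just that decision, with hypothetical sketch_* names and local error constants:]

```c
#include <assert.h>
#include <stddef.h>

#define SK_EFBIG  27 /* local stand-ins for the errno values */
#define SK_ENOSPC 28

/* Model of the cfb_write clamping: given offset 'p' and request 'count'
 * against a framebuffer of 'total' bytes, compute how many bytes to
 * actually copy and which error code (0 if none) to report afterwards. */
static size_t sketch_clamp_write(unsigned long p, size_t count,
				 unsigned long total, int *err)
{
	*err = 0;
	if (p > total) {
		*err = -SK_EFBIG;
		return 0; /* nothing is copied at all */
	}
	if (count > total) {
		*err = -SK_EFBIG;
		count = total;
	}
	if (count + p > total) {
		/* Framebuffer too small: do the copy, report -ENOSPC after. */
		if (!*err)
			*err = -SK_ENOSPC;
		count = total - p;
	}
	return count;
}
```

Note the asymmetry with the read path: reads silently clamp and return the short count, while writes clamp but still surface an error code.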
> +
> /**
> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> return -ENODEV;
> }
>
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_read(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_write(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> + const struct fb_fillrect *rect)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_fillrect(info, rect);
> + else
> + drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> + const struct fb_copyarea *area)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_copyarea(info, area);
> + else
> + drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> + const struct fb_image *image)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_imageblit(info, image);
> + else
> + drm_fb_helper_cfb_imageblit(info, image);
> +}
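[Editorial note: each wrapper above implements the same two-way dispatch - shadow framebuffers and system-memory mappings take the sys_ helpers, real I/O mappings take the cfb_ helpers. The shared shape can be sketched as follows; the sketch_* names are hypothetical, modelling drm_fbdev_use_shadow_fb() and buffer->map.is_iomem.]

```c
#include <assert.h>

/* Model of the per-buffer state the wrappers inspect. */
struct sketch_helper {
	int use_shadow_fb;  /* models drm_fbdev_use_shadow_fb() */
	int map_is_iomem;   /* models buffer->map.is_iomem */
};

/* Returns 1 when the sys_ (system-memory) path must be taken,
 * 0 when the cfb_ (I/O-memory) path must be taken. */
static int sketch_use_sys_path(const struct sketch_helper *h)
{
	return h->use_shadow_fb || !h->map_is_iomem;
}
```

A shadow buffer always lives in system memory regardless of where the real framebuffer is, which is why the shadow check comes first.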
> +
> static const struct fb_ops drm_fbdev_fb_ops = {
> .owner = THIS_MODULE,
> DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> .fb_release = drm_fbdev_fb_release,
> .fb_destroy = drm_fbdev_fb_destroy,
> .fb_mmap = drm_fbdev_fb_mmap,
> - .fb_read = drm_fb_helper_sys_read,
> - .fb_write = drm_fb_helper_sys_write,
> - .fb_fillrect = drm_fb_helper_sys_fillrect,
> - .fb_copyarea = drm_fb_helper_sys_copyarea,
> - .fb_imageblit = drm_fb_helper_sys_imageblit,
> + .fb_read = drm_fbdev_fb_read,
> + .fb_write = drm_fbdev_fb_write,
> + .fb_fillrect = drm_fbdev_fb_fillrect,
> + .fb_copyarea = drm_fbdev_fb_copyarea,
> + .fb_imageblit = drm_fbdev_fb_imageblit,
> };
>
> static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
> */
> bool prefer_shadow_fbdev;
>
> - /**
> - * @fbdev_use_iomem:
> - *
> - * Set to true if framebuffer reside in iomem.
> - * When set to true memcpy_toio() is used when copying the framebuffer in
> - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> - *
> - * FIXME: This should be replaced with a per-mapping is_iomem
> - * flag (like ttm does), and then used everywhere in fbdev code.
> - */
> - bool fbdev_use_iomem;
> -
> /**
> * @quirk_addfb_prefer_xbgr_30bpp:
> *
> --
> 2.28.0
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 10:58 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 10:58 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, sumit.semwal, emil.velikov,
robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, christian.koenig, l.stach
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
>
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions.
>
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
>
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is no longer required. The patch removes the
> respective code from both bochs and fbdev.
>
> v4:
> * move dma_buf_map changes into separate patch (Daniel)
> * TODO list: comment on fbdev updates (Daniel)
I have been offline for a while, so I have not followed all the threads
on this. My comments below may well have been addressed already, but I
failed to see it.
If the point about fb_sync is already addressed/considered then:
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> Documentation/gpu/todo.rst | 19 ++-
> drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> include/drm/drm_mode_config.h | 12 --
> 4 files changed, 220 insertions(+), 29 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> ------------------------------------------------
>
> Most drivers can use drm_fbdev_generic_setup(). Drivers have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>
> Contact: Maintainer of the driver you plan to convert
>
> Level: Intermediate
>
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> -----------------------------------------------------------------
>
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> bochs->dev->mode_config.preferred_depth = 24;
> bochs->dev->mode_config.prefer_shadow = 0;
> bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> - bochs->dev->mode_config.fbdev_use_iomem = true;
> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>
> bochs->dev->mode_config.funcs = &bochs_mode_funcs;
Good to see this workaround gone again!
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..462b0c130ebb 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> }
>
> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> - struct drm_clip_rect *clip)
> + struct drm_clip_rect *clip,
> + struct dma_buf_map *dst)
> {
> struct drm_framebuffer *fb = fb_helper->fb;
> unsigned int cpp = fb->format->cpp[0];
> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> void *src = fb_helper->fbdev->screen_buffer + offset;
> - void *dst = fb_helper->buffer->map.vaddr + offset;
> size_t len = (clip->x2 - clip->x1) * cpp;
> unsigned int y;
>
> - for (y = clip->y1; y < clip->y2; y++) {
> - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> - memcpy(dst, src, len);
> - else
> - memcpy_toio((void __iomem *)dst, src, len);
> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>
> + for (y = clip->y1; y < clip->y2; y++) {
> + dma_buf_map_memcpy_to(dst, src, len);
> + dma_buf_map_incr(dst, fb->pitches[0]);
> src += fb->pitches[0];
> - dst += fb->pitches[0];
> }
> }
>
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> ret = drm_client_buffer_vmap(helper->buffer, &map);
> if (ret)
> return;
> - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> }
> +
> if (helper->fb->funcs->dirty)
> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> &clip_copy, 1);
> @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> }
> EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>
So far everything looks good.
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *dst;
> + u8 __iomem *src;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p >= total_size)
> + return 0;
> +
> + if (count >= total_size)
> + count = total_size;
> +
> + if (count + p > total_size)
> + count = total_size - p;
> +
> + src = (u8 __iomem *)(info->screen_base + p);
screen_base is already a char __iomem * - so this cast looks largely redundant.
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + dst = kmalloc(alloc_size, GFP_KERNEL);
> + if (!dst)
> + return -ENOMEM;
> +
Same comment as below about fb_sync.
> + while (count) {
> + c = min(count, alloc_size);
> +
> + memcpy_fromio(dst, src, c);
> + if (copy_to_user(buf, dst, c)) {
> + err = -EFAULT;
> + break;
> + }
> +
> + src += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(dst);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *src;
> + u8 __iomem *dst;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p > total_size)
> + return -EFBIG;
> +
> + if (count > total_size) {
> + err = -EFBIG;
> + count = total_size;
> + }
> +
> + if (count + p > total_size) {
> + /*
> + * The framebuffer is too small. We do the
> + * copy operation, but return an error code
> + * afterwards. Taken from fbdev.
> + */
> + if (!err)
> + err = -ENOSPC;
> + count = total_size - p;
> + }
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + src = kmalloc(alloc_size, GFP_KERNEL);
> + if (!src)
> + return -ENOMEM;
> +
> + dst = (u8 __iomem *)(info->screen_base + p);
> +
The fbdev variant calls the fb_sync callback here.
nouveau and gma500 implement the fb_sync callback - but no one else does.
> + while (count) {
> + c = min(count, alloc_size);
> +
> + if (copy_from_user(src, buf, c)) {
> + err = -EFAULT;
> + break;
> + }
> + memcpy_toio(dst, src, c);
When we rewrite this part to use dma_buf_map_memcpy_to(), can we then
merge the two variants of helper_{sys,cfb}_read()?
That is part of the TODO - so OK.
> +
> + dst += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(src);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> /**
> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> return -ENODEV;
> }
>
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_read(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_write(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> + const struct fb_fillrect *rect)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_fillrect(info, rect);
> + else
> + drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> + const struct fb_copyarea *area)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_copyarea(info, area);
> + else
> + drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> + const struct fb_image *image)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_imageblit(info, image);
> + else
> + drm_fb_helper_cfb_imageblit(info, image);
> +}
> +
> static const struct fb_ops drm_fbdev_fb_ops = {
> .owner = THIS_MODULE,
> DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> .fb_release = drm_fbdev_fb_release,
> .fb_destroy = drm_fbdev_fb_destroy,
> .fb_mmap = drm_fbdev_fb_mmap,
> - .fb_read = drm_fb_helper_sys_read,
> - .fb_write = drm_fb_helper_sys_write,
> - .fb_fillrect = drm_fb_helper_sys_fillrect,
> - .fb_copyarea = drm_fb_helper_sys_copyarea,
> - .fb_imageblit = drm_fb_helper_sys_imageblit,
> + .fb_read = drm_fbdev_fb_read,
> + .fb_write = drm_fbdev_fb_write,
> + .fb_fillrect = drm_fbdev_fb_fillrect,
> + .fb_copyarea = drm_fbdev_fb_copyarea,
> + .fb_imageblit = drm_fbdev_fb_imageblit,
> };
>
> static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
> */
> bool prefer_shadow_fbdev;
>
> - /**
> - * @fbdev_use_iomem:
> - *
> - * Set to true if framebuffer reside in iomem.
> - * When set to true memcpy_toio() is used when copying the framebuffer in
> - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> - *
> - * FIXME: This should be replaced with a per-mapping is_iomem
> - * flag (like ttm does), and then used everywhere in fbdev code.
> - */
> - bool fbdev_use_iomem;
> -
> /**
> * @quirk_addfb_prefer_xbgr_30bpp:
> *
> --
> 2.28.0
kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, christian.koenig, l.stach
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
>
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions.
>
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
>
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is no longer required. The patch removes the
> respective code from both bochs and fbdev.
>
> v4:
> * move dma_buf_map changes into separate patch (Daniel)
> * TODO list: comment on fbdev updates (Daniel)
I have been offline for a while, so I have not followed all the threads on
this. My comments below may well be addressed already, but I failed to see
it.
If the point about fb_sync is already addressed/considered then:
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> Documentation/gpu/todo.rst | 19 ++-
> drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> include/drm/drm_mode_config.h | 12 --
> 4 files changed, 220 insertions(+), 29 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> ------------------------------------------------
>
> Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported
> +as well.
>
> Contact: Maintainer of the driver you plan to convert
>
> Level: Intermediate
>
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> -----------------------------------------------------------------
>
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> bochs->dev->mode_config.preferred_depth = 24;
> bochs->dev->mode_config.prefer_shadow = 0;
> bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> - bochs->dev->mode_config.fbdev_use_iomem = true;
> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>
> bochs->dev->mode_config.funcs = &bochs_mode_funcs;
Good to see this workaround gone again!
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..462b0c130ebb 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> }
>
> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> - struct drm_clip_rect *clip)
> + struct drm_clip_rect *clip,
> + struct dma_buf_map *dst)
> {
> struct drm_framebuffer *fb = fb_helper->fb;
> unsigned int cpp = fb->format->cpp[0];
> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> void *src = fb_helper->fbdev->screen_buffer + offset;
> - void *dst = fb_helper->buffer->map.vaddr + offset;
> size_t len = (clip->x2 - clip->x1) * cpp;
> unsigned int y;
>
> - for (y = clip->y1; y < clip->y2; y++) {
> - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> - memcpy(dst, src, len);
> - else
> - memcpy_toio((void __iomem *)dst, src, len);
> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>
> + for (y = clip->y1; y < clip->y2; y++) {
> + dma_buf_map_memcpy_to(dst, src, len);
> + dma_buf_map_incr(dst, fb->pitches[0]);
> src += fb->pitches[0];
> - dst += fb->pitches[0];
> }
> }
>
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> ret = drm_client_buffer_vmap(helper->buffer, &map);
> if (ret)
> return;
> - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> }
> +
> if (helper->fb->funcs->dirty)
> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> &clip_copy, 1);
> @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> }
> EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>
So far everything looks good.
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *dst;
> + u8 __iomem *src;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p >= total_size)
> + return 0;
> +
> + if (count >= total_size)
> + count = total_size;
> +
> + if (count + p > total_size)
> + count = total_size - p;
> +
> + src = (u8 __iomem *)(info->screen_base + p);
screen_base is a char __iomem *, so this cast looks semi-redundant.
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + dst = kmalloc(alloc_size, GFP_KERNEL);
> + if (!dst)
> + return -ENOMEM;
> +
Same comment as below about fb_sync.
> + while (count) {
> + c = min(count, alloc_size);
> +
> + memcpy_fromio(dst, src, c);
> + if (copy_to_user(buf, dst, c)) {
> + err = -EFAULT;
> + break;
> + }
> +
> + src += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(dst);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *src;
> + u8 __iomem *dst;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p > total_size)
> + return -EFBIG;
> +
> + if (count > total_size) {
> + err = -EFBIG;
> + count = total_size;
> + }
> +
> + if (count + p > total_size) {
> + /*
> + * The framebuffer is too small. We do the
> + * copy operation, but return an error code
> + * afterwards. Taken from fbdev.
> + */
> + if (!err)
> + err = -ENOSPC;
> + count = total_size - p;
> + }
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + src = kmalloc(alloc_size, GFP_KERNEL);
> + if (!src)
> + return -ENOMEM;
> +
> + dst = (u8 __iomem *)(info->screen_base + p);
> +
The fbdev variant calls the fb_sync callback here.
nouveau and gma500 implement the fb_sync callback - but no one else.
> + while (count) {
> + c = min(count, alloc_size);
> +
> + if (copy_from_user(src, buf, c)) {
> + err = -EFAULT;
> + break;
> + }
> + memcpy_toio(dst, src, c);
When we rewrite this part to use dma_buf_map_memcpy_to(), can we then
merge the two variants of helper_{sys,cfb}_read()?
That is part of the todo - so OK.
> +
> + dst += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(src);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> /**
> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> return -ENODEV;
> }
>
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_read(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_write(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> + const struct fb_fillrect *rect)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_fillrect(info, rect);
> + else
> + drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> + const struct fb_copyarea *area)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_copyarea(info, area);
> + else
> + drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> + const struct fb_image *image)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_imageblit(info, image);
> + else
> + drm_fb_helper_cfb_imageblit(info, image);
> +}
> +
> static const struct fb_ops drm_fbdev_fb_ops = {
> .owner = THIS_MODULE,
> DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> .fb_release = drm_fbdev_fb_release,
> .fb_destroy = drm_fbdev_fb_destroy,
> .fb_mmap = drm_fbdev_fb_mmap,
> - .fb_read = drm_fb_helper_sys_read,
> - .fb_write = drm_fb_helper_sys_write,
> - .fb_fillrect = drm_fb_helper_sys_fillrect,
> - .fb_copyarea = drm_fb_helper_sys_copyarea,
> - .fb_imageblit = drm_fb_helper_sys_imageblit,
> + .fb_read = drm_fbdev_fb_read,
> + .fb_write = drm_fbdev_fb_write,
> + .fb_fillrect = drm_fbdev_fb_fillrect,
> + .fb_copyarea = drm_fbdev_fb_copyarea,
> + .fb_imageblit = drm_fbdev_fb_imageblit,
> };
>
> static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
> */
> bool prefer_shadow_fbdev;
>
> - /**
> - * @fbdev_use_iomem:
> - *
> - * Set to true if framebuffer reside in iomem.
> - * When set to true memcpy_toio() is used when copying the framebuffer in
> - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> - *
> - * FIXME: This should be replaced with a per-mapping is_iomem
> - * flag (like ttm does), and then used everywhere in fbdev code.
> - */
> - bool fbdev_use_iomem;
> -
> /**
> * @quirk_addfb_prefer_xbgr_30bpp:
> *
> --
> 2.28.0
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
2020-10-15 12:38 ` Thomas Zimmermann
@ 2020-10-16 11:31 ` Sam Ravnborg
-1 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 11:31 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: maarten.lankhorst, mripard, airlied, daniel, alexander.deucher,
christian.koenig, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang, dri-devel, amd-gfx, virtualization, etnaviv,
linux-arm-kernel, linux-samsung-soc, lima, nouveau, spice-devel,
linux-rockchip, xen-devel, linux-media, linaro-mm-sig
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> To do framebuffer updates, one needs memcpy from system memory and a
> pointer-increment function. Add both interfaces with documentation.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> 1 file changed, 62 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index 2e8bbecb5091..6ca0f304dda2 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -32,6 +32,14 @@
> * accessing the buffer. Use the returned instance and the helper functions
> * to access the buffer's memory in the correct way.
> *
> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> + * actually independent from the dma-buf infrastructure. When sharing buffers
> + * among devices, drivers have to know the location of the memory to access
> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> + * solves this problem for dma-buf and its users. If other drivers or
> + * sub-systems require similar functionality, the type could be generalized
> + * and moved to a more prominent header file.
> + *
> * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> * considered bad style. Rather then accessing its fields directly, use one
> * of the provided helper functions, or implement your own. For example,
> @@ -51,6 +59,14 @@
> *
> * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> *
> + * Instances of struct dma_buf_map do not have to be cleaned up, but
> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> + * always refer to system memory.
> + *
> + * .. code-block:: c
> + *
> + * dma_buf_map_clear(&map);
> + *
> * Test if a mapping is valid with either dma_buf_map_is_set() or
> * dma_buf_map_is_null().
> *
> @@ -73,17 +89,19 @@
> * if (dma_buf_map_is_equal(&sys_map, &io_map))
> * // always false
> *
> - * Instances of struct dma_buf_map do not have to be cleaned up, but
> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> - * always refer to system memory.
> + * A set up instance of struct dma_buf_map can be used to access or manipulate
> + * the buffer memory. Depending on the location of the memory, the provided
> + * helpers will pick the correct operations. Data can be copied into the memory
> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> + * dma_buf_map_incr().
> *
> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> - * actually independent from the dma-buf infrastructure. When sharing buffers
> - * among devices, drivers have to know the location of the memory to access
> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> - * solves this problem for dma-buf and its users. If other drivers or
> - * sub-systems require similar functionality, the type could be generalized
> - * and moved to a more prominent header file.
> + * .. code-block:: c
> + *
> + * const void *src = ...; // source buffer
> + * size_t len = ...; // length of src
> + *
> + * dma_buf_map_memcpy_to(&map, src, len);
> + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy
> */
>
> /**
> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
> }
> }
>
> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst: The dma-buf mapping structure
> + * @src: The source buffer
> + * @len: The number of byte in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> + if (dst->is_iomem)
> + memcpy_toio(dst->vaddr_iomem, src, len);
> + else
> + memcpy(dst->vaddr, src, len);
sparc64 needs "#include <linux/string.h>" to build, as it does not get
this via io.h.
Sam
> +}
> +
> +/**
> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> + * @map: The dma-buf mapping structure
> + * @incr: The number of bytes to increment
> + *
> + * Increments the address stored in a dma-buf mapping. Depending on the
> + * buffer's location, the correct value will be updated.
> + */
> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> +{
> + if (map->is_iomem)
> + map->vaddr_iomem += incr;
> + else
> + map->vaddr += incr;
> +}
> +
> #endif /* __DMA_BUF_MAP_H__ */
> --
> 2.28.0
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
@ 2020-10-16 11:31 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 11:31 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, sumit.semwal, emil.velikov,
robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, christian.koenig, l.stach
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> To do framebuffer updates, one needs memcpy from system memory and a
> pointer-increment function. Add both interfaces with documentation.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> 1 file changed, 62 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index 2e8bbecb5091..6ca0f304dda2 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -32,6 +32,14 @@
> * accessing the buffer. Use the returned instance and the helper functions
> * to access the buffer's memory in the correct way.
> *
> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> + * actually independent from the dma-buf infrastructure. When sharing buffers
> + * among devices, drivers have to know the location of the memory to access
> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> + * solves this problem for dma-buf and its users. If other drivers or
> + * sub-systems require similar functionality, the type could be generalized
> + * and moved to a more prominent header file.
> + *
> * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> * considered bad style. Rather than accessing its fields directly, use one
> * of the provided helper functions, or implement your own. For example,
> @@ -51,6 +59,14 @@
> *
> * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> *
> + * Instances of struct dma_buf_map do not have to be cleaned up, but
> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> + * always refer to system memory.
> + *
> + * .. code-block:: c
> + *
> + * dma_buf_map_clear(&map);
> + *
> * Test if a mapping is valid with either dma_buf_map_is_set() or
> * dma_buf_map_is_null().
> *
> @@ -73,17 +89,19 @@
> * if (dma_buf_map_is_equal(&sys_map, &io_map))
> * // always false
> *
> - * Instances of struct dma_buf_map do not have to be cleaned up, but
> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> - * always refer to system memory.
> + * A set up instance of struct dma_buf_map can be used to access or manipulate
> + * the buffer memory. Depending on the location of the memory, the provided
> + * helpers will pick the correct operations. Data can be copied into the memory
> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> + * dma_buf_map_incr().
> *
> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> - * actually independent from the dma-buf infrastructure. When sharing buffers
> - * among devices, drivers have to know the location of the memory to access
> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> - * solves this problem for dma-buf and its users. If other drivers or
> - * sub-systems require similar functionality, the type could be generalized
> - * and moved to a more prominent header file.
> + * .. code-block:: c
> + *
> + * const void *src = ...; // source buffer
> + * size_t len = ...; // length of src
> + *
> + * dma_buf_map_memcpy_to(&map, src, len);
> + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy
> */
>
> /**
> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
> }
> }
>
> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst: The dma-buf mapping structure
> + * @src: The source buffer
> + * @len: The number of bytes in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> + if (dst->is_iomem)
> + memcpy_toio(dst->vaddr_iomem, src, len);
> + else
> + memcpy(dst->vaddr, src, len);
sparc64 needs "#include <linux/string.h>" to build, as it does not get
this via io.h.
Sam
> +}
> +
> +/**
> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> + * @map: The dma-buf mapping structure
> + * @incr: The number of bytes to increment
> + *
> + * Increments the address stored in a dma-buf mapping. Depending on the
> + * buffer's location, the correct value will be updated.
> + */
> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> +{
> + if (map->is_iomem)
> + map->vaddr_iomem += incr;
> + else
> + map->vaddr += incr;
> +}
> +
> #endif /* __DMA_BUF_MAP_H__ */
> --
> 2.28.0
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
2020-10-16 10:58 ` Sam Ravnborg
@ 2020-10-16 11:34 ` Thomas Zimmermann
-1 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 11:34 UTC (permalink / raw)
To: Sam Ravnborg
Cc: luben.tuikov, airlied, nouveau, dri-devel, chris, melissa.srw,
ray.huang, kraxel, emil.velikov, linux-samsung-soc, jy0922.shim,
lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, kgene, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, hdegoede, xen-devel,
virtualization, sean, apaneers, linux-arm-kernel, linaro-mm-sig,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, yuq825, alexander.deucher, linux-media,
christian.koenig
Hi
On Fri, 16 Oct 2020 12:58:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the
> > respective fb_sys_ or fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the
> > respective code from both bochs and fbdev.
> >
> > v4:
> > * move dma_buf_map changes into separate patch (Daniel)
> > * TODO list: comment on fbdev updates (Daniel)
>
> I have been offline for a while so have not followed all the threads on
> this. So my comments below may well be addressed, but I failed to see
> it.
>
> If the point about fb_sync is already addressed/considered then:
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
It has not been brought up yet. See below.
>
>
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > Documentation/gpu/todo.rst | 19 ++-
> > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > include/drm/drm_mode_config.h | 12 --
> > 4 files changed, 220 insertions(+), 29 deletions(-)
> >
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index 7e6fc3c04add..638b7f704339 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > ------------------------------------------------
> >
> > Most drivers can use drm_fbdev_generic_setup(). Drivers have to implement
> > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > -expects the framebuffer in system memory (or system-like memory).
> > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> > +expected the framebuffer in system memory or system-like memory. By employing
> > +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> > +as well.
> >
> > Contact: Maintainer of the driver you plan to convert
> >
> > Level: Intermediate
> >
> > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > +-------------------------------------------------------
> > +
> > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > +being rewritten without dependencies on the fbdev module. Some of the
> > +helpers could further benefit from using struct dma_buf_map instead of
> > +raw pointers.
> > +
> > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > +
> > +Level: Advanced
> > +
> > +
> > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > -----------------------------------------------------------------
> >
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > index 13d0d04c4457..853081d186d5 100644
> > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > bochs->dev->mode_config.preferred_depth = 24;
> > bochs->dev->mode_config.prefer_shadow = 0;
> > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > true;
> > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> Good to see this workaround gone again!
>
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > index 6212cd7cde1d..462b0c130ebb 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> > }
> >
> > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper
> > *fb_helper,
> > - struct drm_clip_rect *clip)
> > + struct drm_clip_rect *clip,
> > + struct dma_buf_map *dst)
> > {
> > struct drm_framebuffer *fb = fb_helper->fb;
> > unsigned int cpp = fb->format->cpp[0];
> > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > void *src = fb_helper->fbdev->screen_buffer + offset;
> > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > size_t len = (clip->x2 - clip->x1) * cpp;
> > unsigned int y;
> >
> > - for (y = clip->y1; y < clip->y2; y++) {
> > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > - memcpy(dst, src, len);
> > - else
> > - memcpy_toio((void __iomem *)dst, src, len);
> > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> > + for (y = clip->y1; y < clip->y2; y++) {
> > + dma_buf_map_memcpy_to(dst, src, len);
> > + dma_buf_map_incr(dst, fb->pitches[0]);
> > src += fb->pitches[0];
> > - dst += fb->pitches[0];
> > }
> > }
> >
> > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> > ret = drm_client_buffer_vmap(helper->buffer, &map);
> > if (ret)
> > return;
> > - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> > }
> > +
> > if (helper->fb->funcs->dirty)
> > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
> > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> > }
> > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >
> So far everything looks good.
>
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *dst;
> > + u8 __iomem *src;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p >= total_size)
> > + return 0;
> > +
> > + if (count >= total_size)
> > + count = total_size;
> > +
> > + if (count + p > total_size)
> > + count = total_size - p;
> > +
> > + src = (u8 __iomem *)(info->screen_base + p);
> screen_base is a char __iomem * - so this cast looks semi redundant.
I took the basic code from fbdev. Maybe there's a reason for the cast;
otherwise I'll remove it.
>
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!dst)
> > + return -ENOMEM;
> > +
> Same comment as below about fb_sync.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + memcpy_fromio(dst, src, c);
> > + if (copy_to_user(buf, dst, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > +
> > + src += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(dst);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *src;
> > + u8 __iomem *dst;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p > total_size)
> > + return -EFBIG;
> > +
> > + if (count > total_size) {
> > + err = -EFBIG;
> > + count = total_size;
> > + }
> > +
> > + if (count + p > total_size) {
> > + /*
> > + * The framebuffer is too small. We do the
> > + * copy operation, but return an error code
> > + * afterwards. Taken from fbdev.
> > + */
> > + if (!err)
> > + err = -ENOSPC;
> > + count = total_size - p;
> > + }
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + src = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!src)
> > + return -ENOMEM;
> > +
> > + dst = (u8 __iomem *)(info->screen_base + p);
> > +
>
> The fbdev variant call the fb_sync callback here.
> nouveau and gma500 implement the fb_sync callback - but no one else.
These drivers implement some form of HW acceleration. If they have a HW
blit/draw/etc op queued up, they have to wait for it to complete. Otherwise,
the copied memory would contain an old state. The fb_sync acts as the fence.
Fbdev only uses software copying, so the fb_sync is not required.
From what I heard, the HW acceleration is not useful on modern machines. I
hope to convert more drivers to generic fbdev after these patches for
I/O-memory support have been merged.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + if (copy_from_user(src, buf, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > + memcpy_toio(dst, src, c);
> When we rewrite this part to use dma_buf_map_memcpy_to() then we can
> merge the two variants of helper_{sys,cfb}_read()?
> Which is part of the todo - so OK
I'm not sure if dma_buf_map is a good fit here. The I/O-memory function does
an additional copy between system memory and I/O memory. Of course, the top
and bottom of both functions are similar and could probably be shared.
Best regards
Thomas
> > +
> > + dst += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(src);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> > return -ENODEV;
> > }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > + const struct fb_fillrect *rect)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_fillrect(info, rect);
> > + else
> > + drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > + const struct fb_copyarea *area)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_copyarea(info, area);
> > + else
> > + drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > + const struct fb_image *image)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_imageblit(info, image);
> > + else
> > + drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> > static const struct fb_ops drm_fbdev_fb_ops = {
> > .owner = THIS_MODULE,
> > DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > .fb_release = drm_fbdev_fb_release,
> > .fb_destroy = drm_fbdev_fb_destroy,
> > .fb_mmap = drm_fbdev_fb_mmap,
> > - .fb_read = drm_fb_helper_sys_read,
> > - .fb_write = drm_fb_helper_sys_write,
> > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > + .fb_read = drm_fbdev_fb_read,
> > + .fb_write = drm_fbdev_fb_write,
> > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > };
> >
> > static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > */
> > bool prefer_shadow_fbdev;
> >
> > - /**
> > - * @fbdev_use_iomem:
> > - *
> > - * Set to true if framebuffer reside in iomem.
> > - * When set to true memcpy_toio() is used when copying the framebuffer in
> > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > - *
> > - * FIXME: This should be replaced with a per-mapping is_iomem
> > - * flag (like ttm does), and then used everywhere in fbdev code.
> > - */
> > - bool fbdev_use_iomem;
> > -
> > /**
> > * @quirk_addfb_prefer_xbgr_30bpp:
> > *
> > --
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 11:34 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 11:34 UTC (permalink / raw)
To: Sam Ravnborg
Cc: luben.tuikov, airlied, nouveau, dri-devel, chris, melissa.srw,
ray.huang, kraxel, emil.velikov, linux-samsung-soc, jy0922.shim,
lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, kgene, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, hdegoede, xen-devel,
virtualization, sean, apaneers, linux-arm-kernel, linaro-mm-sig
Hi
On Fri, 16 Oct 2020 12:58:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the
> > respective fb_sys_ or fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the
> > respective code from both bochs and fbdev.
> >
> > v4:
> > * move dma_buf_map changes into separate patch (Daniel)
> > * TODO list: comment on fbdev updates (Daniel)
>
> I have been offline for a while, so I have not followed all the threads on
> this. My comments below may well already be addressed, but I failed to see
> it.
>
> If the point about fb_sync is already addressed/considered then:
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
It has not been brought up yet. See below.
>
>
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > Documentation/gpu/todo.rst | 19 ++-
> > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > include/drm/drm_mode_config.h | 12 --
> > 4 files changed, 220 insertions(+), 29 deletions(-)
> >
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index 7e6fc3c04add..638b7f704339 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > ------------------------------------------------
> >
> > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > -expects the framebuffer in system memory (or system-like memory).
> > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> > +expected the framebuffer in system memory or system-like memory. By employing
> > +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> > +as well.
> >
> > Contact: Maintainer of the driver you plan to convert
> >
> > Level: Intermediate
> >
> > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > +-------------------------------------------------------
> > +
> > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > +being rewritten without dependencies on the fbdev module. Some of the
> > +helpers could further benefit from using struct dma_buf_map instead of
> > +raw pointers.
> > +
> > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > +
> > +Level: Advanced
> > +
> > +
> > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > -----------------------------------------------------------------
> >
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > index 13d0d04c4457..853081d186d5 100644
> > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > bochs->dev->mode_config.preferred_depth = 24;
> > bochs->dev->mode_config.prefer_shadow = 0;
> > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
> > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> Good to see this workaround gone again!
>
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > index 6212cd7cde1d..462b0c130ebb 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> > }
> >
> > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> > - struct drm_clip_rect *clip)
> > + struct drm_clip_rect *clip,
> > + struct dma_buf_map *dst)
> > {
> > struct drm_framebuffer *fb = fb_helper->fb;
> > unsigned int cpp = fb->format->cpp[0];
> > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > void *src = fb_helper->fbdev->screen_buffer + offset;
> > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > size_t len = (clip->x2 - clip->x1) * cpp;
> > unsigned int y;
> >
> > - for (y = clip->y1; y < clip->y2; y++) {
> > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > - memcpy(dst, src, len);
> > - else
> > - memcpy_toio((void __iomem *)dst, src, len);
> > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> > + for (y = clip->y1; y < clip->y2; y++) {
> > + dma_buf_map_memcpy_to(dst, src, len);
> > + dma_buf_map_incr(dst, fb->pitches[0]);
> > src += fb->pitches[0];
> > - dst += fb->pitches[0];
> > }
> > }
> >
> > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> > ret = drm_client_buffer_vmap(helper->buffer, &map);
> > if (ret)
> > return;
> > - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> > }
> > +
> > if (helper->fb->funcs->dirty)
> > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > &clip_copy, 1);
> > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> > }
> > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >
> So far everything looks good.
>
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *dst;
> > + u8 __iomem *src;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p >= total_size)
> > + return 0;
> > +
> > + if (count >= total_size)
> > + count = total_size;
> > +
> > + if (count + p > total_size)
> > + count = total_size - p;
> > +
> > + src = (u8 __iomem *)(info->screen_base + p);
> screen_base is a char __iomem * - so this cast looks semi redundant.
I took the basic code from fbdev. Maybe there's a reason for the cast;
otherwise I'll remove it.
>
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!dst)
> > + return -ENOMEM;
> > +
> Same comment as below about fb_sync.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + memcpy_fromio(dst, src, c);
> > + if (copy_to_user(buf, dst, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > +
> > + src += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(dst);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *src;
> > + u8 __iomem *dst;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p > total_size)
> > + return -EFBIG;
> > +
> > + if (count > total_size) {
> > + err = -EFBIG;
> > + count = total_size;
> > + }
> > +
> > + if (count + p > total_size) {
> > + /*
> > + * The framebuffer is too small. We do the
> > + * copy operation, but return an error code
> > + * afterwards. Taken from fbdev.
> > + */
> > + if (!err)
> > + err = -ENOSPC;
> > + count = total_size - p;
> > + }
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + src = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!src)
> > + return -ENOMEM;
> > +
> > + dst = (u8 __iomem *)(info->screen_base + p);
> > +
>
> The fbdev variant calls the fb_sync callback here.
> nouveau and gma500 implement the fb_sync callback - but no one else.
These drivers implement some form of HW acceleration. If they have a HW
blit/draw/etc op queued up, they have to wait for it to complete. Otherwise,
the copied memory would contain an old state. The fb_sync acts as the fence.
Fbdev only uses software copying, so the fb_sync is not required.
From what I heard, the HW acceleration is not useful on modern machines. I
hope to convert more drivers to generic fbdev after these patches for
I/O-memory support have been merged.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + if (copy_from_user(src, buf, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > + memcpy_toio(dst, src, c);
> When we rewrite this part to use dma_buf_map_memcpy_to(), then we can
> merge the two variants of helper_{sys,cfb}_read()?
> Which is part of the todo - so OK
I'm not sure if dma_buf_map is a good fit here. The I/O-memory function does
an additional copy between system memory and I/O memory. Of course, the top
and bottom of both functions are similar and could probably be shared.
Best regards
Thomas
> > +
> > + dst += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(src);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> > return -ENODEV;
> > }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > + const struct fb_fillrect *rect)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_fillrect(info, rect);
> > + else
> > + drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > + const struct fb_copyarea *area)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_copyarea(info, area);
> > + else
> > + drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > + const struct fb_image *image)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_imageblit(info, image);
> > + else
> > + drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> > static const struct fb_ops drm_fbdev_fb_ops = {
> > .owner = THIS_MODULE,
> > DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > .fb_release = drm_fbdev_fb_release,
> > .fb_destroy = drm_fbdev_fb_destroy,
> > .fb_mmap = drm_fbdev_fb_mmap,
> > - .fb_read = drm_fb_helper_sys_read,
> > - .fb_write = drm_fb_helper_sys_write,
> > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > + .fb_read = drm_fbdev_fb_read,
> > + .fb_write = drm_fbdev_fb_write,
> > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > };
> >
> > static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > */
> > bool prefer_shadow_fbdev;
> >
> > - /**
> > - * @fbdev_use_iomem:
> > - *
> > - * Set to true if framebuffer reside in iomem.
> > - * When set to true memcpy_toio() is used when copying the framebuffer in
> > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > - *
> > - * FIXME: This should be replaced with a per-mapping is_iomem
> > - * flag (like ttm does), and then used everywhere in fbdev code.
> > - */
> > - bool fbdev_use_iomem;
> > -
> > /**
> > * @quirk_addfb_prefer_xbgr_30bpp:
> > *
> > --
> > 2.28.0
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 11:34 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 11:34 UTC (permalink / raw)
To: Sam Ravnborg
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
kraxel, yuq825, emil.velikov, linux-samsung-soc, jy0922.shim,
lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, luben.tuikov, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, linaro-mm-sig, hdegoede,
xen-devel, virtualization, sean, apaneers, linux-arm-kernel,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, kgene, alexander.deucher, linux-media,
christian.koenig
Hi
On Fri, 16 Oct 2020 12:58:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the rsp
> > fb_sys_ of fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the rsp
> > code from both, bochs and fbdev.
> >
> > v4:
> > * move dma_buf_map changes into separate patch (Daniel)
> > * TODO list: comment on fbdev updates (Daniel)
>
> I have been offline for a while so have not followed all the threads on
> this. So may comments below may well be addressed but I failed to see
> it.
>
> If the point about fb_sync is already addressed/considered then:
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
It has not been brought up yet. See below.
>
>
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > Documentation/gpu/todo.rst | 19 ++-
> > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > include/drm/drm_mode_config.h | 12 --
> > 4 files changed, 220 insertions(+), 29 deletions(-)
> >
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index 7e6fc3c04add..638b7f704339 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > ------------------------------------------------
> >
> > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > -expects the framebuffer in system memory (or system-like memory).
> > +atomic modesetting and GEM vmap support. Historically, generic fbdev
> > emulation +expected the framebuffer in system memory or system-like
> > memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O
> > memory can be supported +as well.
> >
> > Contact: Maintainer of the driver you plan to convert
> >
> > Level: Intermediate
> >
> > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > +-------------------------------------------------------
> > +
> > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > +being rewritten without dependencies on the fbdev module. Some of the
> > +helpers could further benefit from using struct dma_buf_map instead of
> > +raw pointers.
> > +
> > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > +
> > +Level: Advanced
> > +
> > +
> > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > -----------------------------------------------------------------
> >
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c
> > b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5
> > 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > bochs->dev->mode_config.preferred_depth = 24;
> > bochs->dev->mode_config.prefer_shadow = 0;
> > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > true;
> > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> Good to see this workaround gone again!
>
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c
> > b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct
> > work_struct *work) }
> >
> > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper
> > *fb_helper,
> > - struct drm_clip_rect *clip)
> > + struct drm_clip_rect *clip,
> > + struct dma_buf_map *dst)
> > {
> > struct drm_framebuffer *fb = fb_helper->fb;
> > unsigned int cpp = fb->format->cpp[0];
> > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > void *src = fb_helper->fbdev->screen_buffer + offset;
> > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > size_t len = (clip->x2 - clip->x1) * cpp;
> > unsigned int y;
> >
> > - for (y = clip->y1; y < clip->y2; y++) {
> > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > - memcpy(dst, src, len);
> > - else
> > - memcpy_toio((void __iomem *)dst, src, len);
> > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip
> > rect */
> > + for (y = clip->y1; y < clip->y2; y++) {
> > + dma_buf_map_memcpy_to(dst, src, len);
> > + dma_buf_map_incr(dst, fb->pitches[0]);
> > src += fb->pitches[0];
> > - dst += fb->pitches[0];
> > }
> > }
> >
> > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct
> > work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map);
> > if (ret)
> > return;
> > - drm_fb_helper_dirty_blit_real(helper,
> > &clip_copy);
> > + drm_fb_helper_dirty_blit_real(helper,
> > &clip_copy, &map); }
> > +
> > if (helper->fb->funcs->dirty)
> > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > &clip_copy, 1);
> > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info
> > *info, }
> > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >
> So far everything looks good.
>
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user
> > *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *dst;
> > + u8 __iomem *src;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p >= total_size)
> > + return 0;
> > +
> > + if (count >= total_size)
> > + count = total_size;
> > +
> > + if (count + p > total_size)
> > + count = total_size - p;
> > +
> > + src = (u8 __iomem *)(info->screen_base + p);
> screen_base is a char __iomem * - so this cast looks semi redundant.
I took the basic code from fbdev. Maybe there's a reason for the case,
otherwise I'll remove it.
>
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!dst)
> > + return -ENOMEM;
> > +
> Same comment as below about fb_sync.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + memcpy_fromio(dst, src, c);
> > + if (copy_to_user(buf, dst, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > +
> > + src += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(dst);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char
> > __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *src;
> > + u8 __iomem *dst;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p > total_size)
> > + return -EFBIG;
> > +
> > + if (count > total_size) {
> > + err = -EFBIG;
> > + count = total_size;
> > + }
> > +
> > + if (count + p > total_size) {
> > + /*
> > + * The framebuffer is too small. We do the
> > + * copy operation, but return an error code
> > + * afterwards. Taken from fbdev.
> > + */
> > + if (!err)
> > + err = -ENOSPC;
> > + count = total_size - p;
> > + }
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + src = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!src)
> > + return -ENOMEM;
> > +
> > + dst = (u8 __iomem *)(info->screen_base + p);
> > +
>
> The fbdev variant call the fb_sync callback here.
> noveau and gma500 implments the fb_sync callback - but no-one else.
These drivers implement some form of HW acceleration. If they have a HW
blit/draw/etc op queued up, they have to wait for it to complete. Otherwise,
the copied memory would contain an old state. The fb_sync acts as the fence.
Fbdev only uses software copying, so the fb_sync is not required.
From what I heard, the HW acceleration is not useful on modern machines. I
hope to convert more drivers to generic fbdev after these patches for
I/O-memory support have been merged.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + if (copy_from_user(src, buf, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > + memcpy_toio(dst, src, c);
> When we rewrite this part to use dma_buf_map_memcpy_to() then we can
> merge the two variants of helper_{sys,cfb}_read()?
> Which is part of the todo - so OK
I'm not sure if dma_buf_map is a good fit here. The I/O-memory function does
an additional copy between system memory and I/O memory. Of course, the top
and bottom of both functions are similar and could probably be shared.
Best regards
Thomas
> > +
> > + dst += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(src);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info,
> > struct vm_area_struct *vma) return -ENODEV;
> > }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char
> > __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > + const struct fb_fillrect *rect)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_fillrect(info, rect);
> > + else
> > + drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > + const struct fb_copyarea *area)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_copyarea(info, area);
> > + else
> > + drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > + const struct fb_image *image)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_imageblit(info, image);
> > + else
> > + drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> > static const struct fb_ops drm_fbdev_fb_ops = {
> > .owner = THIS_MODULE,
> > DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > .fb_release = drm_fbdev_fb_release,
> > .fb_destroy = drm_fbdev_fb_destroy,
> > .fb_mmap = drm_fbdev_fb_mmap,
> > - .fb_read = drm_fb_helper_sys_read,
> > - .fb_write = drm_fb_helper_sys_write,
> > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > + .fb_read = drm_fbdev_fb_read,
> > + .fb_write = drm_fbdev_fb_write,
> > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > };
> >
> > static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > */
> > bool prefer_shadow_fbdev;
> >
> > - /**
> > - * @fbdev_use_iomem:
> > - *
> > - * Set to true if framebuffer reside in iomem.
> > - * When set to true memcpy_toio() is used when copying the
> > framebuffer in
> > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > - *
> > - * FIXME: This should be replaced with a per-mapping is_iomem
> > - * flag (like ttm does), and then used everywhere in fbdev code.
> > - */
> > - bool fbdev_use_iomem;
> > -
> > /**
> > * @quirk_addfb_prefer_xbgr_30bpp:
> > *
> > --
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 11:34 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 11:34 UTC (permalink / raw)
To: Sam Ravnborg
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
yuq825, emil.velikov, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
luben.tuikov, alyssa.rosenzweig, linux+etnaviv, spice-devel,
bskeggs, etnaviv, linaro-mm-sig, hdegoede, xen-devel,
virtualization, sean, apaneers, linux-arm-kernel, amd-gfx,
tomeu.vizoso, sw0312.kim, hjc, kyungmin.park, miaoqinglang,
kgene, alexander.deucher, linux-media, christian.koenig
Hi
On Fri, 16 Oct 2020 12:58:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the rsp
> > fb_sys_ of fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the rsp
> > code from both, bochs and fbdev.
> >
> > v4:
> > * move dma_buf_map changes into separate patch (Daniel)
> > * TODO list: comment on fbdev updates (Daniel)
>
> I have been offline for a while so have not followed all the threads on
> this. So may comments below may well be addressed but I failed to see
> it.
>
> If the point about fb_sync is already addressed/considered then:
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
It has not been brought up yet. See below.
>
>
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > Documentation/gpu/todo.rst | 19 ++-
> > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > include/drm/drm_mode_config.h | 12 --
> > 4 files changed, 220 insertions(+), 29 deletions(-)
> >
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index 7e6fc3c04add..638b7f704339 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > ------------------------------------------------
> >
> > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > -expects the framebuffer in system memory (or system-like memory).
> > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> > +expected the framebuffer in system memory or system-like memory. By employing
> > +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> > +as well.
> >
> > Contact: Maintainer of the driver you plan to convert
> >
> > Level: Intermediate
> >
> > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > +-------------------------------------------------------
> > +
> > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > +being rewritten without dependencies on the fbdev module. Some of the
> > +helpers could further benefit from using struct dma_buf_map instead of
> > +raw pointers.
> > +
> > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > +
> > +Level: Advanced
> > +
> > +
> > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > -----------------------------------------------------------------
> >
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > index 13d0d04c4457..853081d186d5 100644
> > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > bochs->dev->mode_config.preferred_depth = 24;
> > bochs->dev->mode_config.prefer_shadow = 0;
> > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > true;
> > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> Good to see this workaround gone again!
>
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > index 6212cd7cde1d..462b0c130ebb 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> > }
> >
> > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper
> > *fb_helper,
> > - struct drm_clip_rect *clip)
> > + struct drm_clip_rect *clip,
> > + struct dma_buf_map *dst)
> > {
> > struct drm_framebuffer *fb = fb_helper->fb;
> > unsigned int cpp = fb->format->cpp[0];
> > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > void *src = fb_helper->fbdev->screen_buffer + offset;
> > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > size_t len = (clip->x2 - clip->x1) * cpp;
> > unsigned int y;
> >
> > - for (y = clip->y1; y < clip->y2; y++) {
> > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > - memcpy(dst, src, len);
> > - else
> > - memcpy_toio((void __iomem *)dst, src, len);
> > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> > + for (y = clip->y1; y < clip->y2; y++) {
> > + dma_buf_map_memcpy_to(dst, src, len);
> > + dma_buf_map_incr(dst, fb->pitches[0]);
> > src += fb->pitches[0];
> > - dst += fb->pitches[0];
> > }
> > }
> >
> > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> > ret = drm_client_buffer_vmap(helper->buffer, &map);
> > if (ret)
> > return;
> > - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> > }
> > +
> > if (helper->fb->funcs->dirty)
> > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > &clip_copy, 1);
> > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> > }
> > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >
> So far everything looks good.
>
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *dst;
> > + u8 __iomem *src;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p >= total_size)
> > + return 0;
> > +
> > + if (count >= total_size)
> > + count = total_size;
> > +
> > + if (count + p > total_size)
> > + count = total_size - p;
> > +
> > + src = (u8 __iomem *)(info->screen_base + p);
> screen_base is a char __iomem * - so this cast looks semi redundant.
I took the basic code from fbdev. Maybe there's a reason for the cast;
otherwise I'll remove it.
>
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!dst)
> > + return -ENOMEM;
> > +
> Same comment as below about fb_sync.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + memcpy_fromio(dst, src, c);
> > + if (copy_to_user(buf, dst, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > +
> > + src += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(dst);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *src;
> > + u8 __iomem *dst;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p > total_size)
> > + return -EFBIG;
> > +
> > + if (count > total_size) {
> > + err = -EFBIG;
> > + count = total_size;
> > + }
> > +
> > + if (count + p > total_size) {
> > + /*
> > + * The framebuffer is too small. We do the
> > + * copy operation, but return an error code
> > + * afterwards. Taken from fbdev.
> > + */
> > + if (!err)
> > + err = -ENOSPC;
> > + count = total_size - p;
> > + }
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + src = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!src)
> > + return -ENOMEM;
> > +
> > + dst = (u8 __iomem *)(info->screen_base + p);
> > +
>
> The fbdev variant calls the fb_sync callback here.
> nouveau and gma500 implement the fb_sync callback - but no-one else.
These drivers implement some form of HW acceleration. If they have a HW
blit/draw/etc op queued up, they have to wait for it to complete. Otherwise,
the copied memory would contain an old state. The fb_sync acts as the fence.
Fbdev only uses software copying, so the fb_sync is not required.
From what I heard, the HW acceleration is not useful on modern machines. I
hope to convert more drivers to generic fbdev after these patches for
I/O-memory support have been merged.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + if (copy_from_user(src, buf, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > + memcpy_toio(dst, src, c);
> If we rewrite this part to use dma_buf_map_memcpy_to(), then we can
> merge the two variants of helper_{sys,cfb}_read()?
> Which is part of the todo - so OK
I'm not sure if dma_buf_map is a good fit here. The I/O-memory function does
an additional copy between system memory and I/O memory. Of course, the top
and bottom of both functions are similar and could probably be shared.
Best regards
Thomas
> > +
> > + dst += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(src);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> > return -ENODEV;
> > }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > + const struct fb_fillrect *rect)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_fillrect(info, rect);
> > + else
> > + drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > + const struct fb_copyarea *area)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_copyarea(info, area);
> > + else
> > + drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > + const struct fb_image *image)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_imageblit(info, image);
> > + else
> > + drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> > static const struct fb_ops drm_fbdev_fb_ops = {
> > .owner = THIS_MODULE,
> > DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > .fb_release = drm_fbdev_fb_release,
> > .fb_destroy = drm_fbdev_fb_destroy,
> > .fb_mmap = drm_fbdev_fb_mmap,
> > - .fb_read = drm_fb_helper_sys_read,
> > - .fb_write = drm_fb_helper_sys_write,
> > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > + .fb_read = drm_fbdev_fb_read,
> > + .fb_write = drm_fbdev_fb_write,
> > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > };
> >
> > static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > */
> > bool prefer_shadow_fbdev;
> >
> > - /**
> > - * @fbdev_use_iomem:
> > - *
> > - * Set to true if framebuffer reside in iomem.
> > - * When set to true memcpy_toio() is used when copying the framebuffer in
> > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > - *
> > - * FIXME: This should be replaced with a per-mapping is_iomem
> > - * flag (like ttm does), and then used everywhere in fbdev code.
> > - */
> > - bool fbdev_use_iomem;
> > -
> > /**
> > * @quirk_addfb_prefer_xbgr_30bpp:
> > *
> > --
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 195+ messages in thread
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p >= total_size)
> > + return 0;
> > +
> > + if (count >= total_size)
> > + count = total_size;
> > +
> > + if (count + p > total_size)
> > + count = total_size - p;
> > +
> > + src = (u8 __iomem *)(info->screen_base + p);
> screen_base is a char __iomem * - so this cast looks semi redundant.
I took the basic code from fbdev. Maybe there's a reason for the case,
otherwise I'll remove it.
>
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!dst)
> > + return -ENOMEM;
> > +
> Same comment as below about fb_sync.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + memcpy_fromio(dst, src, c);
> > + if (copy_to_user(buf, dst, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > +
> > + src += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(dst);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char
> > __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *src;
> > + u8 __iomem *dst;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p > total_size)
> > + return -EFBIG;
> > +
> > + if (count > total_size) {
> > + err = -EFBIG;
> > + count = total_size;
> > + }
> > +
> > + if (count + p > total_size) {
> > + /*
> > + * The framebuffer is too small. We do the
> > + * copy operation, but return an error code
> > + * afterwards. Taken from fbdev.
> > + */
> > + if (!err)
> > + err = -ENOSPC;
> > + count = total_size - p;
> > + }
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + src = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!src)
> > + return -ENOMEM;
> > +
> > + dst = (u8 __iomem *)(info->screen_base + p);
> > +
>
> The fbdev variant call the fb_sync callback here.
> noveau and gma500 implments the fb_sync callback - but no-one else.
These drivers implement some form of HW acceleration. If they have a HW
blit/draw/etc op queued up, they have to wait for it to complete. Otherwise,
the copied memory would contain an old state. The fb_sync acts as the fence.
Fbdev only uses software copying, so the fb_sync is not required.
From what I heard, the HW acceleration is not useful on modern machines. I
hope to convert more drivers to generic fbdev after these patches for
I/O-memory support have been merged.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + if (copy_from_user(src, buf, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > + memcpy_toio(dst, src, c);
> When we rewrite this part to use dma_buf_map_memcpy_to() then we can
> merge the two variants of helper_{sys,cfb}_read()?
> Which is part of the todo - so OK
I'm not sure if dma_buf_map is a good fit here. The I/O-memory function does
an additional copy between system memory and I/O memory. Of course, the top
and bottom of both functions are similar and could probably be shared.
Best regards
Thomas
> > +
> > + dst += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(src);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info,
> > struct vm_area_struct *vma) return -ENODEV;
> > }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char
> > __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > + const struct fb_fillrect *rect)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_fillrect(info, rect);
> > + else
> > + drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > + const struct fb_copyarea *area)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_copyarea(info, area);
> > + else
> > + drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > + const struct fb_image *image)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_imageblit(info, image);
> > + else
> > + drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> > static const struct fb_ops drm_fbdev_fb_ops = {
> > .owner = THIS_MODULE,
> > DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > .fb_release = drm_fbdev_fb_release,
> > .fb_destroy = drm_fbdev_fb_destroy,
> > .fb_mmap = drm_fbdev_fb_mmap,
> > - .fb_read = drm_fb_helper_sys_read,
> > - .fb_write = drm_fb_helper_sys_write,
> > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > + .fb_read = drm_fbdev_fb_read,
> > + .fb_write = drm_fbdev_fb_write,
> > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > };
> >
> > static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > */
> > bool prefer_shadow_fbdev;
> >
> > - /**
> > - * @fbdev_use_iomem:
> > - *
> > - * Set to true if framebuffer reside in iomem.
> > - * When set to true memcpy_toio() is used when copying the
> > framebuffer in
> > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > - *
> > - * FIXME: This should be replaced with a per-mapping is_iomem
> > - * flag (like ttm does), and then used everywhere in fbdev code.
> > - */
> > - bool fbdev_use_iomem;
> > -
> > /**
> > * @quirk_addfb_prefer_xbgr_30bpp:
> > *
> > --
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 11:34 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 11:34 UTC (permalink / raw)
To: Sam Ravnborg
Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang,
kraxel, yuq825, emil.velikov, linux-samsung-soc, jy0922.shim,
lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, luben.tuikov, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, linaro-mm-sig, hdegoede,
xen-devel, virtualization, sean, apaneers, linux-arm-kernel,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, kgene, alexander.deucher, linux-media,
christian.koenig
Hi
On Fri, 16 Oct 2020 12:58:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the
> > respective fb_sys_ or fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the
> > respective code from both bochs and fbdev.
> >
> > v4:
> > * move dma_buf_map changes into separate patch (Daniel)
> > * TODO list: comment on fbdev updates (Daniel)
>
> I have been offline for a while so have not followed all the threads on
> this. So may comments below may well be addressed but I failed to see
> it.
>
> If the point about fb_sync is already addressed/considered then:
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
It has not been brought up yet. See below.
>
>
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> > Documentation/gpu/todo.rst | 19 ++-
> > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > include/drm/drm_mode_config.h | 12 --
> > 4 files changed, 220 insertions(+), 29 deletions(-)
> >
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index 7e6fc3c04add..638b7f704339 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > ------------------------------------------------
> >
> > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > -expects the framebuffer in system memory (or system-like memory).
> > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> > +expected the framebuffer in system memory or system-like memory. By employing
> > +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> > +as well.
> >
> > Contact: Maintainer of the driver you plan to convert
> >
> > Level: Intermediate
> >
> > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > +-------------------------------------------------------
> > +
> > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > +being rewritten without dependencies on the fbdev module. Some of the
> > +helpers could further benefit from using struct dma_buf_map instead of
> > +raw pointers.
> > +
> > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > +
> > +Level: Advanced
> > +
> > +
> > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > -----------------------------------------------------------------
> >
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > index 13d0d04c4457..853081d186d5 100644
> > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > bochs->dev->mode_config.preferred_depth = 24;
> > bochs->dev->mode_config.prefer_shadow = 0;
> > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > 	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
> > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> Good to see this workaround gone again!
>
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > index 6212cd7cde1d..462b0c130ebb 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> > }
> >
> > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> > -					  struct drm_clip_rect *clip)
> > +					  struct drm_clip_rect *clip,
> > +					  struct dma_buf_map *dst)
> > {
> > struct drm_framebuffer *fb = fb_helper->fb;
> > unsigned int cpp = fb->format->cpp[0];
> > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > void *src = fb_helper->fbdev->screen_buffer + offset;
> > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > size_t len = (clip->x2 - clip->x1) * cpp;
> > unsigned int y;
> >
> > - for (y = clip->y1; y < clip->y2; y++) {
> > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > - memcpy(dst, src, len);
> > - else
> > - memcpy_toio((void __iomem *)dst, src, len);
> > +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> > + for (y = clip->y1; y < clip->y2; y++) {
> > + dma_buf_map_memcpy_to(dst, src, len);
> > + dma_buf_map_incr(dst, fb->pitches[0]);
> > src += fb->pitches[0];
> > - dst += fb->pitches[0];
> > }
> > }
> >
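The row-by-row copy that this hunk performs can be modeled in self-contained, plain C. In the sketch below the 4x4 framebuffer, the helper name, and the struct are made up for illustration; `memcpy()` stands in for `dma_buf_map_memcpy_to()`, and the pointer advances stand in for `dma_buf_map_incr()`:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace model of the blit loop above: copy only the dirty clip
 * rectangle from the shadow buffer (src) to the framebuffer (dst),
 * one row at a time. */

struct clip { unsigned int x1, y1, x2, y2; };  /* x2/y2 are exclusive */

static void blit_clip(unsigned char *dst, const unsigned char *src,
                      size_t pitch, unsigned int cpp, const struct clip *c)
{
    size_t offset = c->y1 * pitch + c->x1 * cpp;  /* first pixel in rect */
    size_t len = (size_t)(c->x2 - c->x1) * cpp;   /* bytes per dirty row */
    unsigned int y;

    src += offset;
    dst += offset;   /* the patch does this via dma_buf_map_incr() */
    for (y = c->y1; y < c->y2; y++) {
        memcpy(dst, src, len);   /* dma_buf_map_memcpy_to() in the patch */
        src += pitch;
        dst += pitch;
    }
}
```

The point of the offset arithmetic is that only `(x2 - x1) * cpp` bytes per row are touched, so pixels outside the damaged rectangle are never written.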
> > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> > 		ret = drm_client_buffer_vmap(helper->buffer, &map);
> > 		if (ret)
> > 			return;
> > -		drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > +		drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> > 	}
> > +
> > if (helper->fb->funcs->dirty)
> > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > &clip_copy, 1);
> > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> > }
> > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >
> So far everything looks good.
>
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > +				      size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *dst;
> > + u8 __iomem *src;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p >= total_size)
> > + return 0;
> > +
> > + if (count >= total_size)
> > + count = total_size;
> > +
> > + if (count + p > total_size)
> > + count = total_size - p;
> > +
> > + src = (u8 __iomem *)(info->screen_base + p);
> screen_base is a char __iomem * - so this cast looks semi redundant.
I took the basic code from fbdev. Maybe there's a reason for the cast;
otherwise I'll remove it.
>
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!dst)
> > + return -ENOMEM;
> > +
> Same comment as below about fb_sync.
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + memcpy_fromio(dst, src, c);
> > + if (copy_to_user(buf, dst, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > +
> > + src += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(dst);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > +				       size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *src;
> > + u8 __iomem *dst;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p > total_size)
> > + return -EFBIG;
> > +
> > + if (count > total_size) {
> > + err = -EFBIG;
> > + count = total_size;
> > + }
> > +
> > + if (count + p > total_size) {
> > + /*
> > + * The framebuffer is too small. We do the
> > + * copy operation, but return an error code
> > + * afterwards. Taken from fbdev.
> > + */
> > + if (!err)
> > + err = -ENOSPC;
> > + count = total_size - p;
> > + }
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + src = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!src)
> > + return -ENOMEM;
> > +
> > + dst = (u8 __iomem *)(info->screen_base + p);
> > +
>
> The fbdev variant calls the fb_sync callback here.
> nouveau and gma500 implement the fb_sync callback - but no-one else.
These drivers implement some form of HW acceleration. If they have a HW
blit/draw/etc op queued up, they have to wait for it to complete. Otherwise,
the copied memory would contain an old state. The fb_sync acts as the fence.
Fbdev only uses software copying, so the fb_sync is not required.
From what I heard, the HW acceleration is not useful on modern machines. I
hope to convert more drivers to generic fbdev after these patches for
I/O-memory support have been merged.
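To make the contract concrete, here is a rough userspace model of the fencing behavior described above. All names and the toy "VRAM" are illustrative, not kernel API; the point is only that a read path must sync before copying:

```c
#include <assert.h>
#include <string.h>

/* Toy model of the fb_sync contract: a driver may have a HW blit queued;
 * syncing acts as a fence that completes it, so a subsequent CPU read
 * sees the final framebuffer contents rather than stale ones. */

struct fake_fb {
    unsigned char vram[8];
    unsigned char pending[8];  /* what the queued HW blit will write */
    int blit_queued;
};

/* The fence: wait for (here: emulate) completion of queued HW ops. */
static void fake_fb_sync(struct fake_fb *fb)
{
    if (fb->blit_queued) {
        memcpy(fb->vram, fb->pending, sizeof(fb->vram));
        fb->blit_queued = 0;
    }
}

/* A read path fences first, as fbdev's fb_read does via fb_sync. */
static void fake_fb_read(struct fake_fb *fb, unsigned char *dst)
{
    fake_fb_sync(fb);
    memcpy(dst, fb->vram, sizeof(fb->vram));
}
```

Without the `fake_fb_sync()` call, the read would return the pre-blit contents, which is exactly the stale-state problem described above.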
>
>
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + if (copy_from_user(src, buf, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > + memcpy_toio(dst, src, c);
> When we rewrite this part to use dma_buf_map_memcpy_to() then we can
> merge the two variants of helper_{sys,cfb}_read()?
> Which is part of the todo - so OK
I'm not sure if dma_buf_map is a good fit here. The I/O-memory function does
an additional copy between system memory and I/O memory. Of course, the top
and bottom of both functions are similar and could probably be shared.
Best regards
Thomas
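The shared top and bottom mentioned above could, hypothetically, be factored out like this. This is a userspace model, not a proposed kernel helper: `CHUNK` stands in for the `PAGE_SIZE` bounce buffer, and plain `memcpy()` stands in for `memcpy_fromio()` and `copy_to_user()`:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Model of the clamp-and-chunk skeleton common to the sys and cfb read
 * paths: clamp the request to the framebuffer bounds, then copy through
 * a small bounce buffer in fixed-size chunks. */

#define CHUNK 4

static size_t bounded_read(unsigned char *buf, size_t count, size_t *ppos,
                           const unsigned char *screen, size_t total_size)
{
    unsigned char bounce[CHUNK];
    size_t p = *ppos, ret = 0;

    if (p >= total_size)
        return 0;                  /* reading past the end yields nothing */
    if (count > total_size - p)
        count = total_size - p;    /* clamp to the framebuffer size */

    while (count) {
        size_t c = count < CHUNK ? count : CHUNK;

        memcpy(bounce, screen + p, c);  /* "memcpy_fromio()" */
        memcpy(buf, bounce, c);         /* "copy_to_user()" */
        p += c;
        buf += c;
        ret += c;
        count -= c;
    }
    *ppos = p;
    return ret;
}
```

Only the two inner copy calls differ between the system-memory and I/O-memory variants; everything else is the shared top and bottom.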
> > +
> > + dst += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(src);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> > 	return -ENODEV;
> > }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > + const struct fb_fillrect *rect)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_fillrect(info, rect);
> > + else
> > + drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > + const struct fb_copyarea *area)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_copyarea(info, area);
> > + else
> > + drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > + const struct fb_image *image)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_imageblit(info, image);
> > + else
> > + drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> > static const struct fb_ops drm_fbdev_fb_ops = {
> > .owner = THIS_MODULE,
> > DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > .fb_release = drm_fbdev_fb_release,
> > .fb_destroy = drm_fbdev_fb_destroy,
> > .fb_mmap = drm_fbdev_fb_mmap,
> > - .fb_read = drm_fb_helper_sys_read,
> > - .fb_write = drm_fb_helper_sys_write,
> > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > + .fb_read = drm_fbdev_fb_read,
> > + .fb_write = drm_fbdev_fb_write,
> > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > };
> >
> > static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > */
> > bool prefer_shadow_fbdev;
> >
> > - /**
> > - * @fbdev_use_iomem:
> > - *
> > - * Set to true if framebuffer reside in iomem.
> > - * When set to true memcpy_toio() is used when copying the framebuffer in
> > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > - *
> > - * FIXME: This should be replaced with a per-mapping is_iomem
> > - * flag (like ttm does), and then used everywhere in fbdev code.
> > - */
> > - bool fbdev_use_iomem;
> > -
> > /**
> > * @quirk_addfb_prefer_xbgr_30bpp:
> > *
> > --
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:03 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 12:03 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: maarten.lankhorst, mripard, airlied, daniel, alexander.deucher,
christian.koenig, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang, dri-devel, amd-gfx, virtualization, etnaviv,
linux-arm-kernel, linux-samsung-soc, lima, nouveau, spice-devel,
linux-rockchip, xen-devel, linux-media, linaro-mm-sig
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
>
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions.
>
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
>
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is not required any longer. The patch removes the
> respective code from both bochs and fbdev.
>
> v4:
> * move dma_buf_map changes into separate patch (Daniel)
> * TODO list: comment on fbdev updates (Daniel)
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
The original workaround fixed it so we could run qemu with the
-nographic option.
So I went ahead and tried to run qemu version
v5.0.0-1970-g0b100c8e72-dirty.
And with the BOCHS driver built-in.
With the following command line:
qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
Behaviour was the same before and after applying this patch.
(panic due to VFS: Unable to mount root fs on unknown-block(0,0))
So I consider it fixed for real now and not just a workaround.
I also tested with:
qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial stdio
and it worked in both cases too.
All the comments above are so future-me has an easier time finding out
how to reproduce this.
Tested-by: Sam Ravnborg <sam@ravnborg.org>
Sam
> ---
> Documentation/gpu/todo.rst | 19 ++-
> drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> include/drm/drm_mode_config.h | 12 --
> 4 files changed, 220 insertions(+), 29 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> ------------------------------------------------
>
> Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>
> Contact: Maintainer of the driver you plan to convert
>
> Level: Intermediate
>
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> -----------------------------------------------------------------
>
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> bochs->dev->mode_config.preferred_depth = 24;
> bochs->dev->mode_config.prefer_shadow = 0;
> bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> - bochs->dev->mode_config.fbdev_use_iomem = true;
> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>
> bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..462b0c130ebb 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> }
>
> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> - struct drm_clip_rect *clip)
> + struct drm_clip_rect *clip,
> + struct dma_buf_map *dst)
> {
> struct drm_framebuffer *fb = fb_helper->fb;
> unsigned int cpp = fb->format->cpp[0];
> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> void *src = fb_helper->fbdev->screen_buffer + offset;
> - void *dst = fb_helper->buffer->map.vaddr + offset;
> size_t len = (clip->x2 - clip->x1) * cpp;
> unsigned int y;
>
> - for (y = clip->y1; y < clip->y2; y++) {
> - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> - memcpy(dst, src, len);
> - else
> - memcpy_toio((void __iomem *)dst, src, len);
> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>
> + for (y = clip->y1; y < clip->y2; y++) {
> + dma_buf_map_memcpy_to(dst, src, len);
> + dma_buf_map_incr(dst, fb->pitches[0]);
> src += fb->pitches[0];
> - dst += fb->pitches[0];
> }
> }
>
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> ret = drm_client_buffer_vmap(helper->buffer, &map);
> if (ret)
> return;
> - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> }
> +
> if (helper->fb->funcs->dirty)
> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> &clip_copy, 1);
> @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> }
> EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *dst;
> + u8 __iomem *src;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p >= total_size)
> + return 0;
> +
> + if (count >= total_size)
> + count = total_size;
> +
> + if (count + p > total_size)
> + count = total_size - p;
> +
> + src = (u8 __iomem *)(info->screen_base + p);
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + dst = kmalloc(alloc_size, GFP_KERNEL);
> + if (!dst)
> + return -ENOMEM;
> +
> + while (count) {
> + c = min(count, alloc_size);
> +
> + memcpy_fromio(dst, src, c);
> + if (copy_to_user(buf, dst, c)) {
> + err = -EFAULT;
> + break;
> + }
> +
> + src += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(dst);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *src;
> + u8 __iomem *dst;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p > total_size)
> + return -EFBIG;
> +
> + if (count > total_size) {
> + err = -EFBIG;
> + count = total_size;
> + }
> +
> + if (count + p > total_size) {
> + /*
> + * The framebuffer is too small. We do the
> + * copy operation, but return an error code
> + * afterwards. Taken from fbdev.
> + */
> + if (!err)
> + err = -ENOSPC;
> + count = total_size - p;
> + }
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + src = kmalloc(alloc_size, GFP_KERNEL);
> + if (!src)
> + return -ENOMEM;
> +
> + dst = (u8 __iomem *)(info->screen_base + p);
> +
> + while (count) {
> + c = min(count, alloc_size);
> +
> + if (copy_from_user(src, buf, c)) {
> + err = -EFAULT;
> + break;
> + }
> + memcpy_toio(dst, src, c);
> +
> + dst += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(src);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> /**
> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> return -ENODEV;
> }
>
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_read(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_write(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> + const struct fb_fillrect *rect)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_fillrect(info, rect);
> + else
> + drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> + const struct fb_copyarea *area)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_copyarea(info, area);
> + else
> + drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> + const struct fb_image *image)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_imageblit(info, image);
> + else
> + drm_fb_helper_cfb_imageblit(info, image);
> +}
> +
> static const struct fb_ops drm_fbdev_fb_ops = {
> .owner = THIS_MODULE,
> DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> .fb_release = drm_fbdev_fb_release,
> .fb_destroy = drm_fbdev_fb_destroy,
> .fb_mmap = drm_fbdev_fb_mmap,
> - .fb_read = drm_fb_helper_sys_read,
> - .fb_write = drm_fb_helper_sys_write,
> - .fb_fillrect = drm_fb_helper_sys_fillrect,
> - .fb_copyarea = drm_fb_helper_sys_copyarea,
> - .fb_imageblit = drm_fb_helper_sys_imageblit,
> + .fb_read = drm_fbdev_fb_read,
> + .fb_write = drm_fbdev_fb_write,
> + .fb_fillrect = drm_fbdev_fb_fillrect,
> + .fb_copyarea = drm_fbdev_fb_copyarea,
> + .fb_imageblit = drm_fbdev_fb_imageblit,
> };
>
> static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
> */
> bool prefer_shadow_fbdev;
>
> - /**
> - * @fbdev_use_iomem:
> - *
> - * Set to true if framebuffer reside in iomem.
> - * When set to true memcpy_toio() is used when copying the framebuffer in
> - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> - *
> - * FIXME: This should be replaced with a per-mapping is_iomem
> - * flag (like ttm does), and then used everywhere in fbdev code.
> - */
> - bool fbdev_use_iomem;
> -
> /**
> * @quirk_addfb_prefer_xbgr_30bpp:
> *
> --
> 2.28.0
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:03 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 12:03 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
	chris, melissa.srw, eric, ray.huang, kraxel, sumit.semwal,
	emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
	oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
	kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
	maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
	christian.gmeiner, xen-devel, virtualization
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
>
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions.
>
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
>
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is not required any longer. The patch removes the
> respective code from both bochs and fbdev.
>
> v4:
> * move dma_buf_map changes into separate patch (Daniel)
> * TODO list: comment on fbdev updates (Daniel)
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
The original workaround fixed it so we could run qemu with the
-nographic option.
So I went ahead and tried to run qemu version
v5.0.0-1970-g0b100c8e72-dirty.
And with the BOCHS driver built-in.
With the following command line:
qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
Behaviour was the same before and after applying this patch.
(panic due to VFS: Unable to mount root fs on unknown-block(0,0))
So I consider it fixed for real now and not just a workaround.
I also tested with:
qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial stdio
and it worked in both cases too.
All the comments above are so that future-me has an easier time finding
out how to reproduce this.
Tested-by: Sam Ravnborg <sam@ravnborg.org>
Sam
> ---
> Documentation/gpu/todo.rst | 19 ++-
> drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> include/drm/drm_mode_config.h | 12 --
> 4 files changed, 220 insertions(+), 29 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> ------------------------------------------------
>
> Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>
> Contact: Maintainer of the driver you plan to convert
>
> Level: Intermediate
>
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> -----------------------------------------------------------------
>
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> bochs->dev->mode_config.preferred_depth = 24;
> bochs->dev->mode_config.prefer_shadow = 0;
> bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> - bochs->dev->mode_config.fbdev_use_iomem = true;
> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>
> bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..462b0c130ebb 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> }
>
> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> - struct drm_clip_rect *clip)
> + struct drm_clip_rect *clip,
> + struct dma_buf_map *dst)
> {
> struct drm_framebuffer *fb = fb_helper->fb;
> unsigned int cpp = fb->format->cpp[0];
> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> void *src = fb_helper->fbdev->screen_buffer + offset;
> - void *dst = fb_helper->buffer->map.vaddr + offset;
> size_t len = (clip->x2 - clip->x1) * cpp;
> unsigned int y;
>
> - for (y = clip->y1; y < clip->y2; y++) {
> - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> - memcpy(dst, src, len);
> - else
> - memcpy_toio((void __iomem *)dst, src, len);
> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>
> + for (y = clip->y1; y < clip->y2; y++) {
> + dma_buf_map_memcpy_to(dst, src, len);
> + dma_buf_map_incr(dst, fb->pitches[0]);
> src += fb->pitches[0];
> - dst += fb->pitches[0];
> }
> }
>
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> ret = drm_client_buffer_vmap(helper->buffer, &map);
> if (ret)
> return;
> - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> }
> +
> if (helper->fb->funcs->dirty)
> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> &clip_copy, 1);
> @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> }
> EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *dst;
> + u8 __iomem *src;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p >= total_size)
> + return 0;
> +
> + if (count >= total_size)
> + count = total_size;
> +
> + if (count + p > total_size)
> + count = total_size - p;
> +
> + src = (u8 __iomem *)(info->screen_base + p);
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + dst = kmalloc(alloc_size, GFP_KERNEL);
> + if (!dst)
> + return -ENOMEM;
> +
> + while (count) {
> + c = min(count, alloc_size);
> +
> + memcpy_fromio(dst, src, c);
> + if (copy_to_user(buf, dst, c)) {
> + err = -EFAULT;
> + break;
> + }
> +
> + src += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(dst);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *src;
> + u8 __iomem *dst;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p > total_size)
> + return -EFBIG;
> +
> + if (count > total_size) {
> + err = -EFBIG;
> + count = total_size;
> + }
> +
> + if (count + p > total_size) {
> + /*
> + * The framebuffer is too small. We do the
> + * copy operation, but return an error code
> + * afterwards. Taken from fbdev.
> + */
> + if (!err)
> + err = -ENOSPC;
> + count = total_size - p;
> + }
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + src = kmalloc(alloc_size, GFP_KERNEL);
> + if (!src)
> + return -ENOMEM;
> +
> + dst = (u8 __iomem *)(info->screen_base + p);
> +
> + while (count) {
> + c = min(count, alloc_size);
> +
> + if (copy_from_user(src, buf, c)) {
> + err = -EFAULT;
> + break;
> + }
> + memcpy_toio(dst, src, c);
> +
> + dst += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(src);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> /**
> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> return -ENODEV;
> }
>
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_read(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_write(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> + const struct fb_fillrect *rect)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_fillrect(info, rect);
> + else
> + drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> + const struct fb_copyarea *area)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_copyarea(info, area);
> + else
> + drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> + const struct fb_image *image)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_imageblit(info, image);
> + else
> + drm_fb_helper_cfb_imageblit(info, image);
> +}
> +
> static const struct fb_ops drm_fbdev_fb_ops = {
> .owner = THIS_MODULE,
> DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> .fb_release = drm_fbdev_fb_release,
> .fb_destroy = drm_fbdev_fb_destroy,
> .fb_mmap = drm_fbdev_fb_mmap,
> - .fb_read = drm_fb_helper_sys_read,
> - .fb_write = drm_fb_helper_sys_write,
> - .fb_fillrect = drm_fb_helper_sys_fillrect,
> - .fb_copyarea = drm_fb_helper_sys_copyarea,
> - .fb_imageblit = drm_fb_helper_sys_imageblit,
> + .fb_read = drm_fbdev_fb_read,
> + .fb_write = drm_fbdev_fb_write,
> + .fb_fillrect = drm_fbdev_fb_fillrect,
> + .fb_copyarea = drm_fbdev_fb_copyarea,
> + .fb_imageblit = drm_fbdev_fb_imageblit,
> };
>
> static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
> */
> bool prefer_shadow_fbdev;
>
> - /**
> - * @fbdev_use_iomem:
> - *
> - * Set to true if framebuffer reside in iomem.
> - * When set to true memcpy_toio() is used when copying the framebuffer in
> - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> - *
> - * FIXME: This should be replaced with a per-mapping is_iomem
> - * flag (like ttm does), and then used everywhere in fbdev code.
> - */
> - bool fbdev_use_iomem;
> -
> /**
> * @quirk_addfb_prefer_xbgr_30bpp:
> *
> --
> 2.28.0
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:03 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 12:03 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, kraxel, sumit.semwal,
emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, christian.koenig, l.stach
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
>
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the rsp
> fb_sys_ of fb_cfb_ functions.
>
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
>
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is not required any longer. The patch removes the rsp
> code from both, bochs and fbdev.
>
> v4:
> * move dma_buf_map changes into separate patch (Daniel)
> * TODO list: comment on fbdev updates (Daniel)
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
The original workaround fixed it so we could run qemu with the
-nographic option.
So I went ahead and tried to run quemu version:
v5.0.0-1970-g0b100c8e72-dirty.
And with the BOCHS driver built-in.
With the following command line:
qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
Behaviour was the same before and after applying this patch.
(panic due to VFS: Unable to mount root fs on unknown-block(0,0))
So I consider it fixed for real now and not just a workaround.
I also tested with:
qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial stdio
and it worked in both cases too.
All the comments above so future-me have an easier time finding how to
reproduce.
Tested-by: Sam Ravnborg <sam@ravnborg.org>
Sam
> ---
> Documentation/gpu/todo.rst | 19 ++-
> drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> include/drm/drm_mode_config.h | 12 --
> 4 files changed, 220 insertions(+), 29 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> ------------------------------------------------
>
> Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported
> +as well.
>
> Contact: Maintainer of the driver you plan to convert
>
> Level: Intermediate
>
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> -----------------------------------------------------------------
>
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> bochs->dev->mode_config.preferred_depth = 24;
> bochs->dev->mode_config.prefer_shadow = 0;
> bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> - bochs->dev->mode_config.fbdev_use_iomem = true;
> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>
> bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..462b0c130ebb 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> }
>
> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> - struct drm_clip_rect *clip)
> + struct drm_clip_rect *clip,
> + struct dma_buf_map *dst)
> {
> struct drm_framebuffer *fb = fb_helper->fb;
> unsigned int cpp = fb->format->cpp[0];
> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> void *src = fb_helper->fbdev->screen_buffer + offset;
> - void *dst = fb_helper->buffer->map.vaddr + offset;
> size_t len = (clip->x2 - clip->x1) * cpp;
> unsigned int y;
>
> - for (y = clip->y1; y < clip->y2; y++) {
> - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> - memcpy(dst, src, len);
> - else
> - memcpy_toio((void __iomem *)dst, src, len);
> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>
> + for (y = clip->y1; y < clip->y2; y++) {
> + dma_buf_map_memcpy_to(dst, src, len);
> + dma_buf_map_incr(dst, fb->pitches[0]);
> src += fb->pitches[0];
> - dst += fb->pitches[0];
> }
> }
>
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> ret = drm_client_buffer_vmap(helper->buffer, &map);
> if (ret)
> return;
> - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> }
> +
> if (helper->fb->funcs->dirty)
> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> &clip_copy, 1);
> @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> }
> EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *dst;
> + u8 __iomem *src;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p >= total_size)
> + return 0;
> +
> + if (count >= total_size)
> + count = total_size;
> +
> + if (count + p > total_size)
> + count = total_size - p;
> +
> + src = (u8 __iomem *)(info->screen_base + p);
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + dst = kmalloc(alloc_size, GFP_KERNEL);
> + if (!dst)
> + return -ENOMEM;
> +
> + while (count) {
> + c = min(count, alloc_size);
> +
> + memcpy_fromio(dst, src, c);
> + if (copy_to_user(buf, dst, c)) {
> + err = -EFAULT;
> + break;
> + }
> +
> + src += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(dst);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *src;
> + u8 __iomem *dst;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p > total_size)
> + return -EFBIG;
> +
> + if (count > total_size) {
> + err = -EFBIG;
> + count = total_size;
> + }
> +
> + if (count + p > total_size) {
> + /*
> + * The framebuffer is too small. We do the
> + * copy operation, but return an error code
> + * afterwards. Taken from fbdev.
> + */
> + if (!err)
> + err = -ENOSPC;
> + count = total_size - p;
> + }
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + src = kmalloc(alloc_size, GFP_KERNEL);
> + if (!src)
> + return -ENOMEM;
> +
> + dst = (u8 __iomem *)(info->screen_base + p);
> +
> + while (count) {
> + c = min(count, alloc_size);
> +
> + if (copy_from_user(src, buf, c)) {
> + err = -EFAULT;
> + break;
> + }
> + memcpy_toio(dst, src, c);
> +
> + dst += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(src);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> /**
> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> return -ENODEV;
> }
>
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_read(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_write(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> + const struct fb_fillrect *rect)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_fillrect(info, rect);
> + else
> + drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> + const struct fb_copyarea *area)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_copyarea(info, area);
> + else
> + drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> + const struct fb_image *image)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_imageblit(info, image);
> + else
> + drm_fb_helper_cfb_imageblit(info, image);
> +}
> +
> static const struct fb_ops drm_fbdev_fb_ops = {
> .owner = THIS_MODULE,
> DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> .fb_release = drm_fbdev_fb_release,
> .fb_destroy = drm_fbdev_fb_destroy,
> .fb_mmap = drm_fbdev_fb_mmap,
> - .fb_read = drm_fb_helper_sys_read,
> - .fb_write = drm_fb_helper_sys_write,
> - .fb_fillrect = drm_fb_helper_sys_fillrect,
> - .fb_copyarea = drm_fb_helper_sys_copyarea,
> - .fb_imageblit = drm_fb_helper_sys_imageblit,
> + .fb_read = drm_fbdev_fb_read,
> + .fb_write = drm_fbdev_fb_write,
> + .fb_fillrect = drm_fbdev_fb_fillrect,
> + .fb_copyarea = drm_fbdev_fb_copyarea,
> + .fb_imageblit = drm_fbdev_fb_imageblit,
> };
>
> static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
> */
> bool prefer_shadow_fbdev;
>
> - /**
> - * @fbdev_use_iomem:
> - *
> - * Set to true if framebuffer reside in iomem.
> - * When set to true memcpy_toio() is used when copying the framebuffer in
> - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> - *
> - * FIXME: This should be replaced with a per-mapping is_iomem
> - * flag (like ttm does), and then used everywhere in fbdev code.
> - */
> - bool fbdev_use_iomem;
> -
> /**
> * @quirk_addfb_prefer_xbgr_30bpp:
> *
> --
> 2.28.0
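[Editor's note: the is_iomem dispatch in the fb_ops wrappers quoted above can be modeled as a small userspace sketch. The names below (fake_buffer, pick_path, PATH_SYS/PATH_CFB) are illustrative stand-ins, not kernel API; in the patch the actual inputs are drm_fbdev_use_shadow_fb(fb_helper) and fb_helper->buffer->map.is_iomem.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the state the wrappers consult; these are
 * not the kernel's types. */
struct fake_buffer {
	bool use_shadow; /* models drm_fbdev_use_shadow_fb(fb_helper) */
	bool is_iomem;   /* models fb_helper->buffer->map.is_iomem */
};

enum path { PATH_SYS, PATH_CFB };

/* Mirrors the branch in drm_fbdev_fb_fillrect() and friends: shadow
 * framebuffers and system-memory mappings take the sys_ helpers; only
 * a direct I/O-memory mapping takes the cfb_ helpers. */
static enum path pick_path(const struct fake_buffer *buf)
{
	if (buf->use_shadow || !buf->is_iomem)
		return PATH_SYS;
	return PATH_CFB;
}
```

The point of the pattern is that the decision is now per mapping rather than per device, which is what allows removing the mode_config.fbdev_use_iomem flag below.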
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:03 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 12:03 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, sumit.semwal, emil.velikov,
robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, christian.koenig, l.stach
Hi Thomas.
On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
>
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions.
>
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
>
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is no longer required. The patch removes the
> respective code from both bochs and fbdev.
>
> v4:
> * move dma_buf_map changes into separate patch (Daniel)
> * TODO list: comment on fbdev updates (Daniel)
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
The original workaround fixed it so we could run qemu with the
-nographic option.
So I went ahead and tried to run qemu version
v5.0.0-1970-g0b100c8e72-dirty.
And with the BOCHS driver built-in.
With the following command line:
qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
Behaviour was the same before and after applying this patch.
(panic due to VFS: Unable to mount root fs on unknown-block(0,0))
So I consider it fixed for real now and not just a workaround.
I also tested with:
qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial stdio
and it worked in both cases too.
All the comments above are so future-me has an easier time finding how
to reproduce.
Tested-by: Sam Ravnborg <sam@ravnborg.org>
Sam
> ---
> Documentation/gpu/todo.rst | 19 ++-
> drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> include/drm/drm_mode_config.h | 12 --
> 4 files changed, 220 insertions(+), 29 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> ------------------------------------------------
>
> Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>
> Contact: Maintainer of the driver you plan to convert
>
> Level: Intermediate
>
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> -----------------------------------------------------------------
>
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> bochs->dev->mode_config.preferred_depth = 24;
> bochs->dev->mode_config.prefer_shadow = 0;
> bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> - bochs->dev->mode_config.fbdev_use_iomem = true;
> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>
> bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..462b0c130ebb 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> }
>
> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> - struct drm_clip_rect *clip)
> + struct drm_clip_rect *clip,
> + struct dma_buf_map *dst)
> {
> struct drm_framebuffer *fb = fb_helper->fb;
> unsigned int cpp = fb->format->cpp[0];
> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> void *src = fb_helper->fbdev->screen_buffer + offset;
> - void *dst = fb_helper->buffer->map.vaddr + offset;
> size_t len = (clip->x2 - clip->x1) * cpp;
> unsigned int y;
>
> - for (y = clip->y1; y < clip->y2; y++) {
> - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> - memcpy(dst, src, len);
> - else
> - memcpy_toio((void __iomem *)dst, src, len);
> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>
> + for (y = clip->y1; y < clip->y2; y++) {
> + dma_buf_map_memcpy_to(dst, src, len);
> + dma_buf_map_incr(dst, fb->pitches[0]);
> src += fb->pitches[0];
> - dst += fb->pitches[0];
> }
> }
>
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> ret = drm_client_buffer_vmap(helper->buffer, &map);
> if (ret)
> return;
> - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> }
> +
> if (helper->fb->funcs->dirty)
> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> &clip_copy, 1);
> @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> }
> EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *dst;
> + u8 __iomem *src;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p >= total_size)
> + return 0;
> +
> + if (count >= total_size)
> + count = total_size;
> +
> + if (count + p > total_size)
> + count = total_size - p;
> +
> + src = (u8 __iomem *)(info->screen_base + p);
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + dst = kmalloc(alloc_size, GFP_KERNEL);
> + if (!dst)
> + return -ENOMEM;
> +
> + while (count) {
> + c = min(count, alloc_size);
> +
> + memcpy_fromio(dst, src, c);
> + if (copy_to_user(buf, dst, c)) {
> + err = -EFAULT;
> + break;
> + }
> +
> + src += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(dst);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *src;
> + u8 __iomem *dst;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p > total_size)
> + return -EFBIG;
> +
> + if (count > total_size) {
> + err = -EFBIG;
> + count = total_size;
> + }
> +
> + if (count + p > total_size) {
> + /*
> + * The framebuffer is too small. We do the
> + * copy operation, but return an error code
> + * afterwards. Taken from fbdev.
> + */
> + if (!err)
> + err = -ENOSPC;
> + count = total_size - p;
> + }
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + src = kmalloc(alloc_size, GFP_KERNEL);
> + if (!src)
> + return -ENOMEM;
> +
> + dst = (u8 __iomem *)(info->screen_base + p);
> +
> + while (count) {
> + c = min(count, alloc_size);
> +
> + if (copy_from_user(src, buf, c)) {
> + err = -EFAULT;
> + break;
> + }
> + memcpy_toio(dst, src, c);
> +
> + dst += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(src);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> /**
> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> return -ENODEV;
> }
>
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_read(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_write(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> + const struct fb_fillrect *rect)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_fillrect(info, rect);
> + else
> + drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> + const struct fb_copyarea *area)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_copyarea(info, area);
> + else
> + drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> + const struct fb_image *image)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_imageblit(info, image);
> + else
> + drm_fb_helper_cfb_imageblit(info, image);
> +}
> +
> static const struct fb_ops drm_fbdev_fb_ops = {
> .owner = THIS_MODULE,
> DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> .fb_release = drm_fbdev_fb_release,
> .fb_destroy = drm_fbdev_fb_destroy,
> .fb_mmap = drm_fbdev_fb_mmap,
> - .fb_read = drm_fb_helper_sys_read,
> - .fb_write = drm_fb_helper_sys_write,
> - .fb_fillrect = drm_fb_helper_sys_fillrect,
> - .fb_copyarea = drm_fb_helper_sys_copyarea,
> - .fb_imageblit = drm_fb_helper_sys_imageblit,
> + .fb_read = drm_fbdev_fb_read,
> + .fb_write = drm_fbdev_fb_write,
> + .fb_fillrect = drm_fbdev_fb_fillrect,
> + .fb_copyarea = drm_fbdev_fb_copyarea,
> + .fb_imageblit = drm_fbdev_fb_imageblit,
> };
>
> static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
> */
> bool prefer_shadow_fbdev;
>
> - /**
> - * @fbdev_use_iomem:
> - *
> - * Set to true if framebuffer reside in iomem.
> - * When set to true memcpy_toio() is used when copying the framebuffer in
> - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> - *
> - * FIXME: This should be replaced with a per-mapping is_iomem
> - * flag (like ttm does), and then used everywhere in fbdev code.
> - */
> - bool fbdev_use_iomem;
> -
> /**
> * @quirk_addfb_prefer_xbgr_30bpp:
> *
> --
> 2.28.0
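[Editor's note: the shadow-buffer blit that drm_fb_helper_dirty_blit_real() performs via dma_buf_map_memcpy_to() and dma_buf_map_incr() can be sketched in plain userspace C. The fake_map type and helpers below are simplified stand-ins, not the dma-buf API: the kernel's struct dma_buf_map additionally selects memcpy_toio() when is_iomem is set, whereas userspace has only memcpy().]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for struct dma_buf_map: one address plus a flag
 * recording whether it points into I/O memory. */
struct fake_map {
	void *vaddr;
	bool is_iomem;
};

/* Models dma_buf_map_incr(): advance the mapping by incr bytes. */
static void fake_map_incr(struct fake_map *map, size_t incr)
{
	map->vaddr = (char *)map->vaddr + incr;
}

/* Models dma_buf_map_memcpy_to(): the kernel picks memcpy() or
 * memcpy_toio() based on is_iomem; here both branches are memcpy(). */
static void fake_map_memcpy_to(struct fake_map *dst, const void *src,
			       size_t len)
{
	memcpy(dst->vaddr, src, len);
}

/* Line-by-line blit as in drm_fb_helper_dirty_blit_real(): copy len
 * bytes per scanline, stepping both sides by the framebuffer pitch. */
static void fake_blit(struct fake_map *dst, const unsigned char *src,
		      size_t pitch, size_t len, unsigned int lines)
{
	unsigned int y;

	for (y = 0; y < lines; y++) {
		fake_map_memcpy_to(dst, src, len);
		fake_map_incr(dst, pitch);
		src += pitch;
	}
}
```

Because the destination address and the is_iomem flag travel together in one struct, the blit loop no longer needs the device-wide fbdev_use_iomem flag that the patch removes.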
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p >= total_size)
> + return 0;
> +
> + if (count >= total_size)
> + count = total_size;
> +
> + if (count + p > total_size)
> + count = total_size - p;
> +
> + src = (u8 __iomem *)(info->screen_base + p);
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + dst = kmalloc(alloc_size, GFP_KERNEL);
> + if (!dst)
> + return -ENOMEM;
> +
> + while (count) {
> + c = min(count, alloc_size);
> +
> + memcpy_fromio(dst, src, c);
> + if (copy_to_user(buf, dst, c)) {
> + err = -EFAULT;
> + break;
> + }
> +
> + src += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(dst);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + unsigned long p = *ppos;
> + u8 *src;
> + u8 __iomem *dst;
> + int c, err = 0;
> + unsigned long total_size;
> + unsigned long alloc_size;
> + ssize_t ret = 0;
> +
> + if (info->state != FBINFO_STATE_RUNNING)
> + return -EPERM;
> +
> + total_size = info->screen_size;
> +
> + if (total_size == 0)
> + total_size = info->fix.smem_len;
> +
> + if (p > total_size)
> + return -EFBIG;
> +
> + if (count > total_size) {
> + err = -EFBIG;
> + count = total_size;
> + }
> +
> + if (count + p > total_size) {
> + /*
> + * The framebuffer is too small. We do the
> + * copy operation, but return an error code
> + * afterwards. Taken from fbdev.
> + */
> + if (!err)
> + err = -ENOSPC;
> + count = total_size - p;
> + }
> +
> + alloc_size = min(count, PAGE_SIZE);
> +
> + src = kmalloc(alloc_size, GFP_KERNEL);
> + if (!src)
> + return -ENOMEM;
> +
> + dst = (u8 __iomem *)(info->screen_base + p);
> +
> + while (count) {
> + c = min(count, alloc_size);
> +
> + if (copy_from_user(src, buf, c)) {
> + err = -EFAULT;
> + break;
> + }
> + memcpy_toio(dst, src, c);
> +
> + dst += c;
> + *ppos += c;
> + buf += c;
> + ret += c;
> + count -= c;
> + }
> +
> + kfree(src);
> +
> + if (err)
> + return err;
> +
> + return ret;
> +}
> +
> /**
> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> return -ENODEV;
> }
>
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_read(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + return drm_fb_helper_sys_write(info, buf, count, ppos);
> + else
> + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> + const struct fb_fillrect *rect)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_fillrect(info, rect);
> + else
> + drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> + const struct fb_copyarea *area)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_copyarea(info, area);
> + else
> + drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> + const struct fb_image *image)
> +{
> + struct drm_fb_helper *fb_helper = info->par;
> + struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> + drm_fb_helper_sys_imageblit(info, image);
> + else
> + drm_fb_helper_cfb_imageblit(info, image);
> +}
> +
> static const struct fb_ops drm_fbdev_fb_ops = {
> .owner = THIS_MODULE,
> DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> .fb_release = drm_fbdev_fb_release,
> .fb_destroy = drm_fbdev_fb_destroy,
> .fb_mmap = drm_fbdev_fb_mmap,
> - .fb_read = drm_fb_helper_sys_read,
> - .fb_write = drm_fb_helper_sys_write,
> - .fb_fillrect = drm_fb_helper_sys_fillrect,
> - .fb_copyarea = drm_fb_helper_sys_copyarea,
> - .fb_imageblit = drm_fb_helper_sys_imageblit,
> + .fb_read = drm_fbdev_fb_read,
> + .fb_write = drm_fbdev_fb_write,
> + .fb_fillrect = drm_fbdev_fb_fillrect,
> + .fb_copyarea = drm_fbdev_fb_copyarea,
> + .fb_imageblit = drm_fbdev_fb_imageblit,
> };
>
> static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
> */
> bool prefer_shadow_fbdev;
>
> - /**
> - * @fbdev_use_iomem:
> - *
> - * Set to true if framebuffer reside in iomem.
> - * When set to true memcpy_toio() is used when copying the framebuffer in
> - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> - *
> - * FIXME: This should be replaced with a per-mapping is_iomem
> - * flag (like ttm does), and then used everywhere in fbdev code.
> - */
> - bool fbdev_use_iomem;
> -
> /**
> * @quirk_addfb_prefer_xbgr_30bpp:
> *
> --
> 2.28.0
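The heart of the blit change above can be sketched outside the kernel as plain C. This is an illustrative stand-in only: the struct dma_buf_map, dma_buf_map_incr() and dma_buf_map_memcpy_to() below are simplified, system-memory-only versions of the kernel interfaces, and blit_clip() mirrors the loop structure of drm_fb_helper_dirty_blit_real() in the patch.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for the kernel's struct dma_buf_map. The real type
 * also carries a void __iomem * and its helpers dispatch to memcpy_toio()
 * when is_iomem is set; here only the system-memory branch is modeled. */
struct dma_buf_map {
	void *vaddr;
	int is_iomem;
};

static void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
{
	map->vaddr = (char *)map->vaddr + incr; /* I/O branch omitted */
}

static void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src,
				  size_t len)
{
	memcpy(dst->vaddr, src, len); /* real code: memcpy_toio() if is_iomem */
}

/* Blit a clip rectangle from a shadow buffer into the framebuffer mapping,
 * following drm_fb_helper_dirty_blit_real(): advance the mapping to the
 * first pixel of the clip rect, then copy one scanline per iteration. */
static void blit_clip(struct dma_buf_map *dst, const unsigned char *shadow,
		      unsigned int pitch, unsigned int cpp,
		      unsigned int x1, unsigned int y1,
		      unsigned int x2, unsigned int y2)
{
	size_t offset = y1 * pitch + x1 * cpp;
	const unsigned char *src = shadow + offset;
	size_t len = (x2 - x1) * cpp;
	unsigned int y;

	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */

	for (y = y1; y < y2; y++) {
		dma_buf_map_memcpy_to(dst, src, len);
		dma_buf_map_incr(dst, pitch);
		src += pitch;
	}
}
```

Because the destination is only ever touched through the two map helpers, the same loop works unchanged whether the mapping points at system or I/O memory; that is the whole point of routing the address through struct dma_buf_map instead of a raw pointer.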
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
^ permalink raw reply [flat|nested] 195+ messages in thread
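The bounce-buffer pattern in the drm_fb_helper_cfb_read() helper above can likewise be exercised in userspace. In this sketch, memcpy() stands in for both memcpy_fromio() and copy_to_user(), CHUNK stands in for PAGE_SIZE, and fb_read() is an illustrative name, not kernel API; only the clamping and chunking logic mirrors the patch.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK 4096 /* stand-in for PAGE_SIZE */

/* Clamp the request against the framebuffer size, then copy through a
 * small heap buffer chunk by chunk, as drm_fb_helper_cfb_read() does to
 * avoid copying directly between I/O memory and user space. */
static long fb_read(const unsigned char *screen_base, unsigned long total_size,
		    unsigned char *buf, size_t count, unsigned long *ppos)
{
	unsigned long p = *ppos;
	size_t alloc_size, c;
	long ret = 0;
	unsigned char *bounce;

	if (p >= total_size)
		return 0;
	if (count >= total_size)
		count = total_size;
	if (count + p > total_size)
		count = total_size - p;
	if (count == 0)
		return 0;

	alloc_size = count < CHUNK ? count : CHUNK;
	bounce = malloc(alloc_size);
	if (!bounce)
		return -1; /* -ENOMEM in the kernel */

	while (count) {
		c = count < alloc_size ? count : alloc_size;
		memcpy(bounce, screen_base + p, c); /* memcpy_fromio() */
		memcpy(buf + ret, bounce, c);       /* copy_to_user() */
		p += c;
		ret += c;
		count -= c;
	}
	*ppos = p;
	free(bounce);
	return ret;
}
```

A read that starts near the end of the framebuffer is silently shortened rather than rejected, matching the fbdev semantics the helper is copied from.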
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
2020-10-16 12:03 ` Sam Ravnborg
@ 2020-10-16 12:19 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 12:19 UTC (permalink / raw)
To: Sam Ravnborg
Cc: maarten.lankhorst, mripard, airlied, daniel, alexander.deucher,
christian.koenig, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang, dri-devel, amd-gfx, virtualization, etnaviv,
linux-arm-kernel, linux-samsung-soc, lima, nouveau, spice-devel,
linux-rockchip, xen-devel, linux-media, linaro-mm-sig
Hi
On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the
> > respective fb_sys_ or fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the
> > respective code from both bochs and fbdev.
> >
> > v4:
> > * move dma_buf_map changes into separate patch (Daniel)
> > * TODO list: comment on fbdev updates (Daniel)
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> The original workaround fixed it so we could run qemu with the
> -nographic option.
>
> So I went ahead and tried to run qemu version:
> v5.0.0-1970-g0b100c8e72-dirty.
> And with the BOCHS driver built-in.
>
> With the following command line:
> qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
>
> Behaviour was the same before and after applying this patch.
> (panic due to VFS: Unable to mount root fs on unknown-block(0,0))
> So I consider it fixed for real now and not just a workaround.
>
> I also tested with:
> qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> stdio
>
> and it worked in both cases too.
FTR, you booted a kernel and got graphics output. The error is simply that
there was no disk to mount?
Best regards
Thomas
>
> All the comments above are so future-me has an easier time finding out
> how to reproduce.
>
> Tested-by: Sam Ravnborg <sam@ravnborg.org>
>
> Sam
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
^ permalink raw reply [flat|nested] 195+ messages in thread
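The per-callback dispatch the patch adds in drm_fbdev_fb_read() and its siblings boils down to one predicate. A minimal sketch of that decision, with simplified stand-in types (client_buffer_map, pick_path() and the enum are illustrative names, not kernel API):

```c
#include <assert.h>

/* Stand-in for the is_iomem flag carried by the client buffer's
 * struct dma_buf_map in the patch. */
struct client_buffer_map {
	int is_iomem;
};

enum fb_path { FB_PATH_SYS, FB_PATH_CFB };

/* Mirror of the condition in drm_fbdev_fb_read() et al.: a shadow
 * framebuffer always lives in system memory, so the sys helpers are safe;
 * otherwise honor the mapping's is_iomem flag and take the cfb path. */
static enum fb_path pick_path(int use_shadow_fb,
			      const struct client_buffer_map *map)
{
	if (use_shadow_fb || !map->is_iomem)
		return FB_PATH_SYS;
	return FB_PATH_CFB;
}
```

Only the combination of a direct (non-shadow) framebuffer in I/O memory takes the cfb path, which is exactly the sparc64/bochs case the old fbdev_use_iomem flag used to cover globally.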
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:19 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 12:19 UTC (permalink / raw)
To: Sam Ravnborg
Cc: maarten.lankhorst, mripard, airlied, daniel, alexander.deucher,
christian.koenig, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, mel
Hi
On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the rsp
> > fb_sys_ of fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the rsp
> > code from both, bochs and fbdev.
> >
> > v4:
> > * move dma_buf_map changes into separate patch (Daniel)
> > * TODO list: comment on fbdev updates (Daniel)
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> The original workaround fixed it so we could run qemu with the
> -nographic option.
>
> So I went ahead and tried to run quemu version:
> v5.0.0-1970-g0b100c8e72-dirty.
> And with the BOCHS driver built-in.
>
> With the following command line:
> qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
>
> Behaviour was the same before and after applying this patch.
> (panic due to VFS: Unable to mount root fs on unknown-block(0,0))
> So I consider it fixed for real now and not just a workaround.
>
> I also tested with:
> qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> stdio
>
> and it worked in both cases too.
FTR, you booted a kernel and got graphics output. The error is simply that
there was no disk to mount?
Best regards
Thomas
>
> All the comments above so future-me have an easier time finding how to
> reproduce.
>
> Tested-by: Sam Ravnborg <sam@ravnborg.org>
>
> Sam
>
> > ---
> > Documentation/gpu/todo.rst | 19 ++-
> > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > include/drm/drm_mode_config.h | 12 --
> > 4 files changed, 220 insertions(+), 29 deletions(-)
> >
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index 7e6fc3c04add..638b7f704339 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > ------------------------------------------------
> >
> > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > -expects the framebuffer in system memory (or system-like memory).
> > +atomic modesetting and GEM vmap support. Historically, generic fbdev
> > emulation +expected the framebuffer in system memory or system-like
> > memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O
> > memory can be supported +as well.
> >
> > Contact: Maintainer of the driver you plan to convert
> >
> > Level: Intermediate
> >
> > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > +-------------------------------------------------------
> > +
> > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > +being rewritten without dependencies on the fbdev module. Some of the
> > +helpers could further benefit from using struct dma_buf_map instead of
> > +raw pointers.
> > +
> > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > +
> > +Level: Advanced
> > +
> > +
> > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > -----------------------------------------------------------------
> >
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c
> > b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5
> > 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > bochs->dev->mode_config.preferred_depth = 24;
> > bochs->dev->mode_config.prefer_shadow = 0;
> > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > true;
> > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c
> > b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct
> > work_struct *work) }
> >
> > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper
> > *fb_helper,
> > - struct drm_clip_rect *clip)
> > + struct drm_clip_rect *clip,
> > + struct dma_buf_map *dst)
> > {
> > struct drm_framebuffer *fb = fb_helper->fb;
> > unsigned int cpp = fb->format->cpp[0];
> > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > void *src = fb_helper->fbdev->screen_buffer + offset;
> > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > size_t len = (clip->x2 - clip->x1) * cpp;
> > unsigned int y;
> >
> > - for (y = clip->y1; y < clip->y2; y++) {
> > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > - memcpy(dst, src, len);
> > - else
> > - memcpy_toio((void __iomem *)dst, src, len);
> > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip
> > rect */
> > + for (y = clip->y1; y < clip->y2; y++) {
> > + dma_buf_map_memcpy_to(dst, src, len);
> > + dma_buf_map_incr(dst, fb->pitches[0]);
> > src += fb->pitches[0];
> > - dst += fb->pitches[0];
> > }
> > }
> >
> > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct
> > work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map);
> > if (ret)
> > return;
> > - drm_fb_helper_dirty_blit_real(helper,
> > &clip_copy);
> > + drm_fb_helper_dirty_blit_real(helper,
> > &clip_copy, &map); }
> > +
> > if (helper->fb->funcs->dirty)
> > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > &clip_copy, 1);
> > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info
> > *info, }
> > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user
> > *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *dst;
> > + u8 __iomem *src;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p >= total_size)
> > + return 0;
> > +
> > + if (count >= total_size)
> > + count = total_size;
> > +
> > + if (count + p > total_size)
> > + count = total_size - p;
> > +
> > + src = (u8 __iomem *)(info->screen_base + p);
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!dst)
> > + return -ENOMEM;
> > +
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + memcpy_fromio(dst, src, c);
> > + if (copy_to_user(buf, dst, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > +
> > + src += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(dst);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char
> > __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *src;
> > + u8 __iomem *dst;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p > total_size)
> > + return -EFBIG;
> > +
> > + if (count > total_size) {
> > + err = -EFBIG;
> > + count = total_size;
> > + }
> > +
> > + if (count + p > total_size) {
> > + /*
> > + * The framebuffer is too small. We do the
> > + * copy operation, but return an error code
> > + * afterwards. Taken from fbdev.
> > + */
> > + if (!err)
> > + err = -ENOSPC;
> > + count = total_size - p;
> > + }
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + src = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!src)
> > + return -ENOMEM;
> > +
> > + dst = (u8 __iomem *)(info->screen_base + p);
> > +
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + if (copy_from_user(src, buf, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > + memcpy_toio(dst, src, c);
> > +
> > + dst += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(src);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> >  	return -ENODEV;
> > }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > + const struct fb_fillrect *rect)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_fillrect(info, rect);
> > + else
> > + drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > + const struct fb_copyarea *area)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_copyarea(info, area);
> > + else
> > + drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > + const struct fb_image *image)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_imageblit(info, image);
> > + else
> > + drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> > static const struct fb_ops drm_fbdev_fb_ops = {
> > .owner = THIS_MODULE,
> > DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > .fb_release = drm_fbdev_fb_release,
> > .fb_destroy = drm_fbdev_fb_destroy,
> > .fb_mmap = drm_fbdev_fb_mmap,
> > - .fb_read = drm_fb_helper_sys_read,
> > - .fb_write = drm_fb_helper_sys_write,
> > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > + .fb_read = drm_fbdev_fb_read,
> > + .fb_write = drm_fbdev_fb_write,
> > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > };
> >
> > static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > */
> > bool prefer_shadow_fbdev;
> >
> > - /**
> > - * @fbdev_use_iomem:
> > - *
> > - * Set to true if framebuffer reside in iomem.
> > - * When set to true memcpy_toio() is used when copying the framebuffer in
> > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > - *
> > - * FIXME: This should be replaced with a per-mapping is_iomem
> > - * flag (like ttm does), and then used everywhere in fbdev code.
> > - */
> > - bool fbdev_use_iomem;
> > -
> > /**
> > * @quirk_addfb_prefer_xbgr_30bpp:
> > *
> > --
> > 2.28.0
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:19 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 12:19 UTC (permalink / raw)
To: Sam Ravnborg
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, sumit.semwal, emil.velikov,
robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, christian.koenig, l.stach
Hi
On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the rsp
> > fb_sys_ of fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the rsp
> > code from both, bochs and fbdev.
> >
> > v4:
> > * move dma_buf_map changes into separate patch (Daniel)
> > * TODO list: comment on fbdev updates (Daniel)
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> The original workaround fixed it so we could run qemu with the
> -nographic option.
>
> So I went ahead and tried to run quemu version:
> v5.0.0-1970-g0b100c8e72-dirty.
> And with the BOCHS driver built-in.
>
> With the following command line:
> qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
>
> Behaviour was the same before and after applying this patch.
> (panic due to VFS: Unable to mount root fs on unknown-block(0,0))
> So I consider it fixed for real now and not just a workaround.
>
> I also tested with:
> qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> stdio
>
> and it worked in both cases too.
FTR, you booted a kernel and got graphics output. The error is simply that
there was no disk to mount?
Best regards
Thomas
>
> All the comments above so future-me have an easier time finding how to
> reproduce.
>
> Tested-by: Sam Ravnborg <sam@ravnborg.org>
>
> Sam
>
> > ---
> > Documentation/gpu/todo.rst | 19 ++-
> > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > include/drm/drm_mode_config.h | 12 --
> > 4 files changed, 220 insertions(+), 29 deletions(-)
> >
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index 7e6fc3c04add..638b7f704339 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > ------------------------------------------------
> >
> > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > -expects the framebuffer in system memory (or system-like memory).
> > +atomic modesetting and GEM vmap support. Historically, generic fbdev
> > emulation +expected the framebuffer in system memory or system-like
> > memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O
> > memory can be supported +as well.
> >
> > Contact: Maintainer of the driver you plan to convert
> >
> > Level: Intermediate
> >
> > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > +-------------------------------------------------------
> > +
> > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > +being rewritten without dependencies on the fbdev module. Some of the
> > +helpers could further benefit from using struct dma_buf_map instead of
> > +raw pointers.
> > +
> > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > +
> > +Level: Advanced
> > +
> > +
> > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > -----------------------------------------------------------------
> >
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c
> > b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5
> > 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > bochs->dev->mode_config.preferred_depth = 24;
> > bochs->dev->mode_config.prefer_shadow = 0;
> > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > true;
> > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c
> > b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct
> > work_struct *work) }
> >
> > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper
> > *fb_helper,
> > - struct drm_clip_rect *clip)
> > + struct drm_clip_rect *clip,
> > + struct dma_buf_map *dst)
> > {
> > struct drm_framebuffer *fb = fb_helper->fb;
> > unsigned int cpp = fb->format->cpp[0];
> > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > void *src = fb_helper->fbdev->screen_buffer + offset;
> > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > size_t len = (clip->x2 - clip->x1) * cpp;
> > unsigned int y;
> >
> > - for (y = clip->y1; y < clip->y2; y++) {
> > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > - memcpy(dst, src, len);
> > - else
> > - memcpy_toio((void __iomem *)dst, src, len);
> > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip
> > rect */
> > + for (y = clip->y1; y < clip->y2; y++) {
> > + dma_buf_map_memcpy_to(dst, src, len);
> > + dma_buf_map_incr(dst, fb->pitches[0]);
> > src += fb->pitches[0];
> > - dst += fb->pitches[0];
> > }
> > }
> >
> > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct
> > work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map);
> > if (ret)
> > return;
> > - drm_fb_helper_dirty_blit_real(helper,
> > &clip_copy);
> > + drm_fb_helper_dirty_blit_real(helper,
> > &clip_copy, &map); }
> > +
> > if (helper->fb->funcs->dirty)
> > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > &clip_copy, 1);
> > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info
> > *info, }
> > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user
> > *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *dst;
> > + u8 __iomem *src;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p >= total_size)
> > + return 0;
> > +
> > + if (count >= total_size)
> > + count = total_size;
> > +
> > + if (count + p > total_size)
> > + count = total_size - p;
> > +
> > + src = (u8 __iomem *)(info->screen_base + p);
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!dst)
> > + return -ENOMEM;
> > +
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + memcpy_fromio(dst, src, c);
> > + if (copy_to_user(buf, dst, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > +
> > + src += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(dst);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char
> > __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *src;
> > + u8 __iomem *dst;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p > total_size)
> > + return -EFBIG;
> > +
> > + if (count > total_size) {
> > + err = -EFBIG;
> > + count = total_size;
> > + }
> > +
> > + if (count + p > total_size) {
> > + /*
> > + * The framebuffer is too small. We do the
> > + * copy operation, but return an error code
> > + * afterwards. Taken from fbdev.
> > + */
> > + if (!err)
> > + err = -ENOSPC;
> > + count = total_size - p;
> > + }
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + src = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!src)
> > + return -ENOMEM;
> > +
> > + dst = (u8 __iomem *)(info->screen_base + p);
> > +
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + if (copy_from_user(src, buf, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > + memcpy_toio(dst, src, c);
> > +
> > + dst += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(src);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info,
> > struct vm_area_struct *vma) return -ENODEV;
> > }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char
> > __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > + const struct fb_fillrect *rect)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_fillrect(info, rect);
> > + else
> > + drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > + const struct fb_copyarea *area)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_copyarea(info, area);
> > + else
> > + drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > + const struct fb_image *image)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_imageblit(info, image);
> > + else
> > + drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> > static const struct fb_ops drm_fbdev_fb_ops = {
> > .owner = THIS_MODULE,
> > DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > .fb_release = drm_fbdev_fb_release,
> > .fb_destroy = drm_fbdev_fb_destroy,
> > .fb_mmap = drm_fbdev_fb_mmap,
> > - .fb_read = drm_fb_helper_sys_read,
> > - .fb_write = drm_fb_helper_sys_write,
> > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > + .fb_read = drm_fbdev_fb_read,
> > + .fb_write = drm_fbdev_fb_write,
> > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > };
> >
> > static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > */
> > bool prefer_shadow_fbdev;
> >
> > - /**
> > - * @fbdev_use_iomem:
> > - *
> > - * Set to true if framebuffer reside in iomem.
> > - * When set to true memcpy_toio() is used when copying the framebuffer in
> > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > - *
> > - * FIXME: This should be replaced with a per-mapping is_iomem
> > - * flag (like ttm does), and then used everywhere in fbdev code.
> > - */
> > - bool fbdev_use_iomem;
> > -
> > /**
> > * @quirk_addfb_prefer_xbgr_30bpp:
> > *
> > --
> > 2.28.0
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 195+ messages in thread
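[Editorial note: the sys/cfb dispatch shown in the quoted callbacks above can be sketched in plain userspace C. This is a simplified illustration, not kernel code: `fake_dma_buf_map`, `pick_path` and `enum fb_path` are made-up names standing in for struct dma_buf_map and for the inline test that drm_fbdev_fb_fillrect() and friends perform.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for struct dma_buf_map: a pointer plus a flag that
 * records whether the mapping refers to system or I/O memory. The real
 * kernel type keeps a union of vaddr / vaddr_iomem. */
struct fake_dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

enum fb_path { FB_PATH_SYS, FB_PATH_CFB };

/* Mirrors the test in the patch's drm_fbdev_fb_* callbacks: a shadow
 * framebuffer always lives in system memory, so it takes the sys_ path;
 * otherwise the map's is_iomem flag decides between sys_ and cfb_. */
static enum fb_path pick_path(bool use_shadow_fb,
			      const struct fake_dma_buf_map *map)
{
	if (use_shadow_fb || !map->is_iomem)
		return FB_PATH_SYS;
	return FB_PATH_CFB;
}
```

With this shape, only the leaf copy routines need to care about __iomem; every fb_ops entry point just asks the mapping which kind of memory it is.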
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:19 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-16 12:19 UTC (permalink / raw)
To: Sam Ravnborg
Cc: luben.tuikov, airlied, nouveau, dri-devel, chris, melissa.srw,
ray.huang, kraxel, emil.velikov, linux-samsung-soc, jy0922.shim,
lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, kgene, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, hdegoede, xen-devel,
virtualization, sean, apaneers, linux-arm-kernel, linaro-mm-sig,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, yuq825, alexander.deucher, linux-media,
christian.koenig
Hi
On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the
> > respective fb_sys_ or fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the
> > respective code from both bochs and fbdev.
> >
> > v4:
> > * move dma_buf_map changes into separate patch (Daniel)
> > * TODO list: comment on fbdev updates (Daniel)
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> The original workaround fixed it so we could run qemu with the
> -nographic option.
>
> So I went ahead and tried to run qemu version:
> v5.0.0-1970-g0b100c8e72-dirty.
> And with the BOCHS driver built-in.
>
> With the following command line:
> qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
>
> Behaviour was the same before and after applying this patch.
> (panic due to VFS: Unable to mount root fs on unknown-block(0,0))
> So I consider it fixed for real now and not just a workaround.
>
> I also tested with:
> qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> stdio
>
> and it worked in both cases too.
FTR, you booted a kernel and got graphics output. The error is simply that
there was no disk to mount?
Best regards
Thomas
>
> All the comments above are so future-me has an easier time finding how to
> reproduce.
>
> Tested-by: Sam Ravnborg <sam@ravnborg.org>
>
> Sam
>
> > ---
> > Documentation/gpu/todo.rst | 19 ++-
> > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > include/drm/drm_mode_config.h | 12 --
> > 4 files changed, 220 insertions(+), 29 deletions(-)
> >
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index 7e6fc3c04add..638b7f704339 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > ------------------------------------------------
> >
> > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > -expects the framebuffer in system memory (or system-like memory).
> > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> > +expected the framebuffer in system memory or system-like memory. By employing
> > +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> > +as well.
> >
> > Contact: Maintainer of the driver you plan to convert
> >
> > Level: Intermediate
> >
> > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > +-------------------------------------------------------
> > +
> > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > +being rewritten without dependencies on the fbdev module. Some of the
> > +helpers could further benefit from using struct dma_buf_map instead of
> > +raw pointers.
> > +
> > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > +
> > +Level: Advanced
> > +
> > +
> > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > -----------------------------------------------------------------
> >
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > index 13d0d04c4457..853081d186d5 100644
> > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > bochs->dev->mode_config.preferred_depth = 24;
> > bochs->dev->mode_config.prefer_shadow = 0;
> > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > true;
> > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > index 6212cd7cde1d..462b0c130ebb 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> > }
> >
> > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper
> > *fb_helper,
> > - struct drm_clip_rect *clip)
> > + struct drm_clip_rect *clip,
> > + struct dma_buf_map *dst)
> > {
> > struct drm_framebuffer *fb = fb_helper->fb;
> > unsigned int cpp = fb->format->cpp[0];
> > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > void *src = fb_helper->fbdev->screen_buffer + offset;
> > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > size_t len = (clip->x2 - clip->x1) * cpp;
> > unsigned int y;
> >
> > - for (y = clip->y1; y < clip->y2; y++) {
> > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > - memcpy(dst, src, len);
> > - else
> > - memcpy_toio((void __iomem *)dst, src, len);
> > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> > + for (y = clip->y1; y < clip->y2; y++) {
> > + dma_buf_map_memcpy_to(dst, src, len);
> > + dma_buf_map_incr(dst, fb->pitches[0]);
> > src += fb->pitches[0];
> > - dst += fb->pitches[0];
> > }
> > }
> >
> > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> > ret = drm_client_buffer_vmap(helper->buffer, &map);
> > if (ret)
> > return;
> > - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> > }
> > +
> > if (helper->fb->funcs->dirty)
> > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > &clip_copy, 1);
> > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> > }
> > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *dst;
> > + u8 __iomem *src;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p >= total_size)
> > + return 0;
> > +
> > + if (count >= total_size)
> > + count = total_size;
> > +
> > + if (count + p > total_size)
> > + count = total_size - p;
> > +
> > + src = (u8 __iomem *)(info->screen_base + p);
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!dst)
> > + return -ENOMEM;
> > +
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + memcpy_fromio(dst, src, c);
> > + if (copy_to_user(buf, dst, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > +
> > + src += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(dst);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + unsigned long p = *ppos;
> > + u8 *src;
> > + u8 __iomem *dst;
> > + int c, err = 0;
> > + unsigned long total_size;
> > + unsigned long alloc_size;
> > + ssize_t ret = 0;
> > +
> > + if (info->state != FBINFO_STATE_RUNNING)
> > + return -EPERM;
> > +
> > + total_size = info->screen_size;
> > +
> > + if (total_size == 0)
> > + total_size = info->fix.smem_len;
> > +
> > + if (p > total_size)
> > + return -EFBIG;
> > +
> > + if (count > total_size) {
> > + err = -EFBIG;
> > + count = total_size;
> > + }
> > +
> > + if (count + p > total_size) {
> > + /*
> > + * The framebuffer is too small. We do the
> > + * copy operation, but return an error code
> > + * afterwards. Taken from fbdev.
> > + */
> > + if (!err)
> > + err = -ENOSPC;
> > + count = total_size - p;
> > + }
> > +
> > + alloc_size = min(count, PAGE_SIZE);
> > +
> > + src = kmalloc(alloc_size, GFP_KERNEL);
> > + if (!src)
> > + return -ENOMEM;
> > +
> > + dst = (u8 __iomem *)(info->screen_base + p);
> > +
> > + while (count) {
> > + c = min(count, alloc_size);
> > +
> > + if (copy_from_user(src, buf, c)) {
> > + err = -EFAULT;
> > + break;
> > + }
> > + memcpy_toio(dst, src, c);
> > +
> > + dst += c;
> > + *ppos += c;
> > + buf += c;
> > + ret += c;
> > + count -= c;
> > + }
> > +
> > + kfree(src);
> > +
> > + if (err)
> > + return err;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> > return -ENODEV;
> > }
> > }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > + else
> > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > + const struct fb_fillrect *rect)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_fillrect(info, rect);
> > + else
> > + drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > + const struct fb_copyarea *area)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_copyarea(info, area);
> > + else
> > + drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > + const struct fb_image *image)
> > +{
> > + struct drm_fb_helper *fb_helper = info->par;
> > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > + drm_fb_helper_sys_imageblit(info, image);
> > + else
> > + drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> > static const struct fb_ops drm_fbdev_fb_ops = {
> > .owner = THIS_MODULE,
> > DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > .fb_release = drm_fbdev_fb_release,
> > .fb_destroy = drm_fbdev_fb_destroy,
> > .fb_mmap = drm_fbdev_fb_mmap,
> > - .fb_read = drm_fb_helper_sys_read,
> > - .fb_write = drm_fb_helper_sys_write,
> > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > + .fb_read = drm_fbdev_fb_read,
> > + .fb_write = drm_fbdev_fb_write,
> > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > };
> >
> > static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > */
> > bool prefer_shadow_fbdev;
> >
> > - /**
> > - * @fbdev_use_iomem:
> > - *
> > - * Set to true if framebuffer reside in iomem.
> > - * When set to true memcpy_toio() is used when copying the framebuffer in
> > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > - *
> > - * FIXME: This should be replaced with a per-mapping is_iomem
> > - * flag (like ttm does), and then used everywhere in fbdev code.
> > - */
> > - bool fbdev_use_iomem;
> > -
> > /**
> > * @quirk_addfb_prefer_xbgr_30bpp:
> > *
> > --
> > 2.28.0
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
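[Editorial note: the rewritten blit loop in the patch above seeks into the destination with dma_buf_map_incr() and copies rows with dma_buf_map_memcpy_to(). The sketch below models that shape in plain userspace C; `fake_dma_buf_map`, `map_incr`, `map_memcpy_to` and `blit_clip` are hypothetical stand-ins, and the real kernel helpers would pick memcpy_toio() over memcpy() when is_iomem is set.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for struct dma_buf_map. */
struct fake_dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

/* Stand-in for dma_buf_map_incr(): advance the mapping by incr bytes. */
static void map_incr(struct fake_dma_buf_map *map, size_t incr)
{
	map->vaddr = (char *)map->vaddr + incr;
}

/* Stand-in for dma_buf_map_memcpy_to(); the kernel helper would use
 * memcpy_toio() when is_iomem is set and memcpy() otherwise. */
static void map_memcpy_to(struct fake_dma_buf_map *map, const void *src,
			  size_t len)
{
	memcpy(map->vaddr, src, len);
}

/* Row-by-row blit of a clip rectangle, following the shape of the patch's
 * drm_fb_helper_dirty_blit_real(): seek to the first pixel of the clip,
 * then copy one line per row, advancing both sides by the pitch. The map
 * is taken by value so the caller's copy stays at the buffer start. */
static void blit_clip(struct fake_dma_buf_map dst, const unsigned char *src,
		      size_t offset, size_t pitch, size_t len,
		      unsigned int y1, unsigned int y2)
{
	unsigned int y;

	map_incr(&dst, offset); /* first pixel within the clip rectangle */
	src += offset;
	for (y = y1; y < y2; y++) {
		map_memcpy_to(&dst, src, len);
		map_incr(&dst, pitch);
		src += pitch;
	}
}
```

Because every access goes through the map helpers, the same loop works unchanged whether the framebuffer sits in system or I/O memory.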
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
2020-10-16 12:19 ` Thomas Zimmermann
` (3 preceding siblings ...)
(?)
@ 2020-10-16 12:48 ` Sam Ravnborg
-1 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 12:48 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: maarten.lankhorst, mripard, airlied, daniel, alexander.deucher,
christian.koenig, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang, dri-devel, amd-gfx, virtualization, etnaviv,
linux-arm-kernel, linux-samsung-soc, lima, nouveau, spice-devel,
linux-rockchip, xen-devel, linux-media, linaro-mm-sig
On Fri, Oct 16, 2020 at 02:19:42PM +0200, Thomas Zimmermann wrote:
> Hi
>
> On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
>
> > Hi Thomas.
> >
> > On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > > At least sparc64 requires I/O-specific access to framebuffers. This
> > > patch updates the fbdev console accordingly.
> > >
> > > For drivers with direct access to the framebuffer memory, the callback
> > > functions in struct fb_ops test for the type of memory and call the
> > > respective fb_sys_ or fb_cfb_ functions.
> > >
> > > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > > interfaces to access the buffer.
> > >
> > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > > I/O memory and avoid a HW exception. With the introduction of struct
> > > dma_buf_map, this is not required any longer. The patch removes the
> > > respective code from both bochs and fbdev.
> > >
> > > v4:
> > > * move dma_buf_map changes into separate patch (Daniel)
> > > * TODO list: comment on fbdev updates (Daniel)
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >
> > The original workaround fixed it so we could run qemu with the
> > -nographic option.
> >
> > So I went ahead and tried to run qemu version:
> > v5.0.0-1970-g0b100c8e72-dirty.
> > And with the BOCHS driver built-in.
> >
> > With the following command line:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
> >
> > Behaviour was the same before and after applying this patch.
> > (panic due to VFS: Unable to mount root fs on unknown-block(0,0))
> > So I consider it fixed for real now and not just a workaround.
> >
> > I also tested with:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> > stdio
> >
> > and it worked in both cases too.
>
> FTR, you booted a kernel and got graphics output. The error is simply that
> there was no disk to mount?
The short version: "Yes".
The longer version:
With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-serial stdio" I got graphical output - one penguin.
With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-nographic" I got no graphical output, as implied by the -nographic
option. But the boot continued - where it would panic before when we
accessed IO memory as system memory.
In both cases I got an error because I had not specified any rootfs, so
qemu failed to mount one, which was expected.
Sam
>
> Best regards
> Thomas
>
> >
> > All the comments above are here so future-me has an easier time finding
> > out how to reproduce this.
> >
> > Tested-by: Sam Ravnborg <sam@ravnborg.org>
> >
> > Sam
> >
> > > ---
> > > Documentation/gpu/todo.rst | 19 ++-
> > > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > > include/drm/drm_mode_config.h | 12 --
> > > 4 files changed, 220 insertions(+), 29 deletions(-)
> > >
> > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > > index 7e6fc3c04add..638b7f704339 100644
> > > --- a/Documentation/gpu/todo.rst
> > > +++ b/Documentation/gpu/todo.rst
> > > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > > ------------------------------------------------
> > >
> > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > > -expects the framebuffer in system memory (or system-like memory).
> > > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> > > +expected the framebuffer in system memory or system-like memory. By employing
> > > +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> > > +as well.
> > >
> > > Contact: Maintainer of the driver you plan to convert
> > >
> > > Level: Intermediate
> > >
> > > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > > +-------------------------------------------------------
> > > +
> > > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > > +being rewritten without dependencies on the fbdev module. Some of the
> > > +helpers could further benefit from using struct dma_buf_map instead of
> > > +raw pointers.
> > > +
> > > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > > +
> > > +Level: Advanced
> > > +
> > > +
> > > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > > -----------------------------------------------------------------
> > >
> > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > > index 13d0d04c4457..853081d186d5 100644
> > > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > > bochs->dev->mode_config.preferred_depth = 24;
> > > bochs->dev->mode_config.prefer_shadow = 0;
> > > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > > true;
> > > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > > index 6212cd7cde1d..462b0c130ebb 100644
> > > --- a/drivers/gpu/drm/drm_fb_helper.c
> > > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> > > }
> > >
> > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> > > - struct drm_clip_rect *clip)
> > > + struct drm_clip_rect *clip,
> > > + struct dma_buf_map *dst)
> > > {
> > > struct drm_framebuffer *fb = fb_helper->fb;
> > > unsigned int cpp = fb->format->cpp[0];
> > > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > > void *src = fb_helper->fbdev->screen_buffer + offset;
> > > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > > size_t len = (clip->x2 - clip->x1) * cpp;
> > > unsigned int y;
> > >
> > > - for (y = clip->y1; y < clip->y2; y++) {
> > > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > > - memcpy(dst, src, len);
> > > - else
> > > - memcpy_toio((void __iomem *)dst, src, len);
> > > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> > > + for (y = clip->y1; y < clip->y2; y++) {
> > > + dma_buf_map_memcpy_to(dst, src, len);
> > > + dma_buf_map_incr(dst, fb->pitches[0]);
> > > src += fb->pitches[0];
> > > - dst += fb->pitches[0];
> > > }
> > > }
> > >
> > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> > > ret = drm_client_buffer_vmap(helper->buffer, &map);
> > > if (ret)
> > > return;
> > > - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> > > }
> > > +
> > > if (helper->fb->funcs->dirty)
> > > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > > &clip_copy, 1);
> > > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> > > }
> > > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> > >
> > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + unsigned long p = *ppos;
> > > + u8 *dst;
> > > + u8 __iomem *src;
> > > + int c, err = 0;
> > > + unsigned long total_size;
> > > + unsigned long alloc_size;
> > > + ssize_t ret = 0;
> > > +
> > > + if (info->state != FBINFO_STATE_RUNNING)
> > > + return -EPERM;
> > > +
> > > + total_size = info->screen_size;
> > > +
> > > + if (total_size == 0)
> > > + total_size = info->fix.smem_len;
> > > +
> > > + if (p >= total_size)
> > > + return 0;
> > > +
> > > + if (count >= total_size)
> > > + count = total_size;
> > > +
> > > + if (count + p > total_size)
> > > + count = total_size - p;
> > > +
> > > + src = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > + alloc_size = min(count, PAGE_SIZE);
> > > +
> > > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > > + if (!dst)
> > > + return -ENOMEM;
> > > +
> > > + while (count) {
> > > + c = min(count, alloc_size);
> > > +
> > > + memcpy_fromio(dst, src, c);
> > > + if (copy_to_user(buf, dst, c)) {
> > > + err = -EFAULT;
> > > + break;
> > > + }
> > > +
> > > + src += c;
> > > + *ppos += c;
> > > + buf += c;
> > > + ret += c;
> > > + count -= c;
> > > + }
> > > +
> > > + kfree(dst);
> > > +
> > > + if (err)
> > > + return err;
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + unsigned long p = *ppos;
> > > + u8 *src;
> > > + u8 __iomem *dst;
> > > + int c, err = 0;
> > > + unsigned long total_size;
> > > + unsigned long alloc_size;
> > > + ssize_t ret = 0;
> > > +
> > > + if (info->state != FBINFO_STATE_RUNNING)
> > > + return -EPERM;
> > > +
> > > + total_size = info->screen_size;
> > > +
> > > + if (total_size == 0)
> > > + total_size = info->fix.smem_len;
> > > +
> > > + if (p > total_size)
> > > + return -EFBIG;
> > > +
> > > + if (count > total_size) {
> > > + err = -EFBIG;
> > > + count = total_size;
> > > + }
> > > +
> > > + if (count + p > total_size) {
> > > + /*
> > > + * The framebuffer is too small. We do the
> > > + * copy operation, but return an error code
> > > + * afterwards. Taken from fbdev.
> > > + */
> > > + if (!err)
> > > + err = -ENOSPC;
> > > + count = total_size - p;
> > > + }
> > > +
> > > + alloc_size = min(count, PAGE_SIZE);
> > > +
> > > + src = kmalloc(alloc_size, GFP_KERNEL);
> > > + if (!src)
> > > + return -ENOMEM;
> > > +
> > > + dst = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > + while (count) {
> > > + c = min(count, alloc_size);
> > > +
> > > + if (copy_from_user(src, buf, c)) {
> > > + err = -EFAULT;
> > > + break;
> > > + }
> > > + memcpy_toio(dst, src, c);
> > > +
> > > + dst += c;
> > > + *ppos += c;
> > > + buf += c;
> > > + ret += c;
> > > + count -= c;
> > > + }
> > > +
> > > + kfree(src);
> > > +
> > > + if (err)
> > > + return err;
> > > +
> > > + return ret;
> > > +}
> > > +
> > > /**
> > > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > > * @info: fbdev registered by the helper
> > > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> > > return -ENODEV;
> > > }
> > >
> > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > > + else
> > > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > > +}
> > > +
> > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > > + else
> > > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > > + const struct fb_fillrect *rect)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_fillrect(info, rect);
> > > + else
> > > + drm_fb_helper_cfb_fillrect(info, rect);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > > + const struct fb_copyarea *area)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_copyarea(info, area);
> > > + else
> > > + drm_fb_helper_cfb_copyarea(info, area);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > > + const struct fb_image *image)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_imageblit(info, image);
> > > + else
> > > + drm_fb_helper_cfb_imageblit(info, image);
> > > +}
> > > +
> > > static const struct fb_ops drm_fbdev_fb_ops = {
> > > .owner = THIS_MODULE,
> > > DRM_FB_HELPER_DEFAULT_OPS,
> > > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > > .fb_release = drm_fbdev_fb_release,
> > > .fb_destroy = drm_fbdev_fb_destroy,
> > > .fb_mmap = drm_fbdev_fb_mmap,
> > > - .fb_read = drm_fb_helper_sys_read,
> > > - .fb_write = drm_fb_helper_sys_write,
> > > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > > + .fb_read = drm_fbdev_fb_read,
> > > + .fb_write = drm_fbdev_fb_write,
> > > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > > };
> > >
> > > static struct fb_deferred_io drm_fbdev_defio = {
> > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > > index 5ffbb4ed5b35..ab424ddd7665 100644
> > > --- a/include/drm/drm_mode_config.h
> > > +++ b/include/drm/drm_mode_config.h
> > > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > > */
> > > bool prefer_shadow_fbdev;
> > >
> > > - /**
> > > - * @fbdev_use_iomem:
> > > - *
> > > - * Set to true if framebuffer reside in iomem.
> > > - * When set to true memcpy_toio() is used when copying the framebuffer in
> > > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > > - *
> > > - * FIXME: This should be replaced with a per-mapping is_iomem
> > > - * flag (like ttm does), and then used everywhere in fbdev code.
> > > - */
> > > - bool fbdev_use_iomem;
> > > -
> > > /**
> > > * @quirk_addfb_prefer_xbgr_30bpp:
> > > *
> > > --
> > > 2.28.0
>
>
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
^ permalink raw reply [flat|nested] 195+ messages in thread
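For reference, the dispatch scheme this patch builds on can be sketched in plain userspace C. This is only an illustrative model, not the kernel code: the struct layout mirrors the struct dma_buf_map introduced earlier in the series, and the is_iomem branch here falls back to memcpy(), standing in for memcpy_toio(), which has no userspace equivalent.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Userspace model of the kernel's struct dma_buf_map: the mapping itself
 * records whether it points at system or I/O memory, so helpers no longer
 * need a per-device flag such as fbdev_use_iomem. */
struct dma_buf_map {
	union {
		void *vaddr;        /* system memory */
		void *vaddr_iomem;  /* I/O memory; void __iomem * in the kernel */
	};
	bool is_iomem;
};

/* Advance the mapping, e.g. to the first pixel of a clip rectangle. */
static void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
{
	if (map->is_iomem)
		map->vaddr_iomem = (char *)map->vaddr_iomem + incr;
	else
		map->vaddr = (char *)map->vaddr + incr;
}

/* Copy into the mapping; the kernel version calls memcpy_toio() on the
 * is_iomem branch, which plain userspace C cannot express. */
static void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src,
				  size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr_iomem, src, len);
	else
		memcpy(dst->vaddr, src, len);
}
```

With these helpers, the fbdev blit becomes one dma_buf_map_memcpy_to() plus one dma_buf_map_incr() per scanline, independent of where the framebuffer lives.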
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:48 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 12:48 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov-5C7GfCeVMHo, heiko-4mtYJXux2i+zQB+pC5nmwQ,
airlied-cv59FeDIM0c, nouveau-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
linus.walleij-QSEj5FYQhm4dnm+yROfE0A,
dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
chris-Y6uKTt2uX1cEflXRtASbqLVCufUGDwFn,
melissa.srw-Re5JQEeQqe8AvxtiuMwx3w, eric-WhKQ6XTQaPysTnJN9+BGXg,
ray.huang-5C7GfCeVMHo, kraxel-H+wXaHxf7aLQT0dZR+AlfA,
sumit.semwal-QSEj5FYQhm4dnm+yROfE0A,
emil.velikov-ZGY8ohtN/8qB+jHODAdFcQ, robh-DgEjT+Ai2ygdnm+yROfE0A,
linux-samsung-soc-u79uwXL29TY76Z2rM5mHXA,
jy0922.shim-Sze3O3UU22JBDgjK7y7TUQ,
lima-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
oleksandr_andrushchenko-uRwfk40T5oI, krzk-DgEjT+Ai2ygdnm+yROfE0A,
steven.price-5wv7dgnIgG8,
linux-rockchip-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
kgene-DgEjT+Ai2ygdnm+yROfE0A,
alyssa.rosenzweig-ZGY8ohtN/8qB+jHODAdFcQ,
linux+etnaviv-I+IVW8TIWO2tmTQ+vhA3Yw,
spice-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
bskeggs-H+wXaHxf7aLQT0dZR+AlfA,
maarten.lankhorst-VuQAYsv1563Yd54FQh9/CA,
etnaviv-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
mripard-DgEjT+Ai2ygdnm+yROfE0A, inki.dae-Sze3O3UU22JBDgjK7y7TUQ,
hdegoede-H+wXaHxf7aLQT0dZR+AlfA,
christian.gmeiner-Re5JQEeQqe8AvxtiuMwx3w,
xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b,
virtualization-cunTk1MwBs9QetFLy7KEmxxBWXNxL4zz
On Fri, Oct 16, 2020 at 02:19:42PM +0200, Thomas Zimmermann wrote:
> Hi
>
> On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam-uyr5N9Q2VtJg9hUCZPvPmw@public.gmane.org> wrote:
>
> > Hi Thomas.
> >
> > On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > > At least sparc64 requires I/O-specific access to framebuffers. This
> > > patch updates the fbdev console accordingly.
> > >
> > > For drivers with direct access to the framebuffer memory, the callback
> > > functions in struct fb_ops test for the type of memory and call the rsp
> > > fb_sys_ of fb_cfb_ functions.
> > >
> > > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > > interfaces to access the buffer.
> > >
> > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > > I/O memory and avoid a HW exception. With the introduction of struct
> > > dma_buf_map, this is not required any longer. The patch removes the rsp
> > > code from both, bochs and fbdev.
> > >
> > > v4:
> > > * move dma_buf_map changes into separate patch (Daniel)
> > > * TODO list: comment on fbdev updates (Daniel)
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann-l3A5Bk7waGM@public.gmane.org>
> >
> > The original workaround fixed it so we could run qemu with the
> > -nographic option.
> >
> > So I went ahead and tried to run quemu version:
> > v5.0.0-1970-g0b100c8e72-dirty.
> > And with the BOCHS driver built-in.
> >
> > With the following command line:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
> >
> > Behaviour was the same before and after applying this patch.
> > (panic due to VFS: Unable to mount root fs on unknown-block(0,0))
> > So I consider it fixed for real now and not just a workaround.
> >
> > I also tested with:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> > stdio
> >
> > and it worked in both cases too.
>
> FTR, you booted a kernel and got graphics output. The error is simply that
> there was no disk to mount?
The short version "Yes".
The longer version:
With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-serial stdio" I got graphical output - one penguin.
With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-nographic" I got no graphical output, as implied by the -nographic
option. But the boot continued - where it would panic before when we
accessed IO memory as system memory.
In both cases I got an error because I had not specified any rootfs, so
qemu failed to mount any rootfs. So expected.
Sam
>
> Best regards
> Thomas
>
> >
> > All the comments above so future-me have an easier time finding how to
> > reproduce.
> >
> > Tested-by: Sam Ravnborg <sam-uyr5N9Q2VtJg9hUCZPvPmw@public.gmane.org>
> >
> > Sam
> >
> > > ---
> > > Documentation/gpu/todo.rst | 19 ++-
> > > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > > include/drm/drm_mode_config.h | 12 --
> > > 4 files changed, 220 insertions(+), 29 deletions(-)
> > >
> > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > > index 7e6fc3c04add..638b7f704339 100644
> > > --- a/Documentation/gpu/todo.rst
> > > +++ b/Documentation/gpu/todo.rst
> > > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > > ------------------------------------------------
> > >
> > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > > -expects the framebuffer in system memory (or system-like memory).
> > > +atomic modesetting and GEM vmap support. Historically, generic fbdev
> > > emulation +expected the framebuffer in system memory or system-like
> > > memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O
> > > memory can be supported +as well.
> > >
> > > Contact: Maintainer of the driver you plan to convert
> > >
> > > Level: Intermediate
> > >
> > > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > > +-------------------------------------------------------
> > > +
> > > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > > +being rewritten without dependencies on the fbdev module. Some of the
> > > +helpers could further benefit from using struct dma_buf_map instead of
> > > +raw pointers.
> > > +
> > > +Contact: Thomas Zimmermann <tzimmermann-l3A5Bk7waGM@public.gmane.org>, Daniel Vetter
> > > +
> > > +Level: Advanced
> > > +
> > > +
> > > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > > -----------------------------------------------------------------
> > >
> > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c
> > > b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5
> > > 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > > bochs->dev->mode_config.preferred_depth = 24;
> > > bochs->dev->mode_config.prefer_shadow = 0;
> > > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > > true;
> > > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > > diff --git a/drivers/gpu/drm/drm_fb_helper.c
> > > b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 100644
> > > --- a/drivers/gpu/drm/drm_fb_helper.c
> > > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct
> > > work_struct *work) }
> > >
> > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper
> > > *fb_helper,
> > > - struct drm_clip_rect *clip)
> > > + struct drm_clip_rect *clip,
> > > + struct dma_buf_map *dst)
> > > {
> > > struct drm_framebuffer *fb = fb_helper->fb;
> > > unsigned int cpp = fb->format->cpp[0];
> > > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > > void *src = fb_helper->fbdev->screen_buffer + offset;
> > > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > > size_t len = (clip->x2 - clip->x1) * cpp;
> > > unsigned int y;
> > >
> > > - for (y = clip->y1; y < clip->y2; y++) {
> > > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > > - memcpy(dst, src, len);
> > > - else
> > > - memcpy_toio((void __iomem *)dst, src, len);
> > > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip
> > > rect */
> > > + for (y = clip->y1; y < clip->y2; y++) {
> > > + dma_buf_map_memcpy_to(dst, src, len);
> > > + dma_buf_map_incr(dst, fb->pitches[0]);
> > > src += fb->pitches[0];
> > > - dst += fb->pitches[0];
> > > }
> > > }
> > >
> > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct
> > > work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map);
> > > if (ret)
> > > return;
> > > - drm_fb_helper_dirty_blit_real(helper,
> > > &clip_copy);
> > > + drm_fb_helper_dirty_blit_real(helper,
> > > &clip_copy, &map); }
> > > +
> > > if (helper->fb->funcs->dirty)
> > > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > > &clip_copy, 1);
> > > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info
> > > *info, }
> > > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> > >
> > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user
> > > *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + unsigned long p = *ppos;
> > > + u8 *dst;
> > > + u8 __iomem *src;
> > > + int c, err = 0;
> > > + unsigned long total_size;
> > > + unsigned long alloc_size;
> > > + ssize_t ret = 0;
> > > +
> > > + if (info->state != FBINFO_STATE_RUNNING)
> > > + return -EPERM;
> > > +
> > > + total_size = info->screen_size;
> > > +
> > > + if (total_size == 0)
> > > + total_size = info->fix.smem_len;
> > > +
> > > + if (p >= total_size)
> > > + return 0;
> > > +
> > > + if (count >= total_size)
> > > + count = total_size;
> > > +
> > > + if (count + p > total_size)
> > > + count = total_size - p;
> > > +
> > > + src = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > + alloc_size = min(count, PAGE_SIZE);
> > > +
> > > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > > + if (!dst)
> > > + return -ENOMEM;
> > > +
> > > + while (count) {
> > > + c = min(count, alloc_size);
> > > +
> > > + memcpy_fromio(dst, src, c);
> > > + if (copy_to_user(buf, dst, c)) {
> > > + err = -EFAULT;
> > > + break;
> > > + }
> > > +
> > > + src += c;
> > > + *ppos += c;
> > > + buf += c;
> > > + ret += c;
> > > + count -= c;
> > > + }
> > > +
> > > + kfree(dst);
> > > +
> > > + if (err)
> > > + return err;
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char
> > > __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + unsigned long p = *ppos;
> > > + u8 *src;
> > > + u8 __iomem *dst;
> > > + int c, err = 0;
> > > + unsigned long total_size;
> > > + unsigned long alloc_size;
> > > + ssize_t ret = 0;
> > > +
> > > + if (info->state != FBINFO_STATE_RUNNING)
> > > + return -EPERM;
> > > +
> > > + total_size = info->screen_size;
> > > +
> > > + if (total_size == 0)
> > > + total_size = info->fix.smem_len;
> > > +
> > > + if (p > total_size)
> > > + return -EFBIG;
> > > +
> > > + if (count > total_size) {
> > > + err = -EFBIG;
> > > + count = total_size;
> > > + }
> > > +
> > > + if (count + p > total_size) {
> > > + /*
> > > + * The framebuffer is too small. We do the
> > > + * copy operation, but return an error code
> > > + * afterwards. Taken from fbdev.
> > > + */
> > > + if (!err)
> > > + err = -ENOSPC;
> > > + count = total_size - p;
> > > + }
> > > +
> > > + alloc_size = min(count, PAGE_SIZE);
> > > +
> > > + src = kmalloc(alloc_size, GFP_KERNEL);
> > > + if (!src)
> > > + return -ENOMEM;
> > > +
> > > + dst = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > + while (count) {
> > > + c = min(count, alloc_size);
> > > +
> > > + if (copy_from_user(src, buf, c)) {
> > > + err = -EFAULT;
> > > + break;
> > > + }
> > > + memcpy_toio(dst, src, c);
> > > +
> > > + dst += c;
> > > + *ppos += c;
> > > + buf += c;
> > > + ret += c;
> > > + count -= c;
> > > + }
> > > +
> > > + kfree(src);
> > > +
> > > + if (err)
> > > + return err;
> > > +
> > > + return ret;
> > > +}
> > > +
> > > /**
> > > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > > * @info: fbdev registered by the helper
> > > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info,
> > > struct vm_area_struct *vma) return -ENODEV;
> > > }
> > >
> > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > > + else
> > > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > > +}
> > > +
> > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char
> > > __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > > + else
> > > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > > + const struct fb_fillrect *rect)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_fillrect(info, rect);
> > > + else
> > > + drm_fb_helper_cfb_fillrect(info, rect);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > > + const struct fb_copyarea *area)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_copyarea(info, area);
> > > + else
> > > + drm_fb_helper_cfb_copyarea(info, area);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > > + const struct fb_image *image)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_imageblit(info, image);
> > > + else
> > > + drm_fb_helper_cfb_imageblit(info, image);
> > > +}
> > > +
> > > static const struct fb_ops drm_fbdev_fb_ops = {
> > > .owner = THIS_MODULE,
> > > DRM_FB_HELPER_DEFAULT_OPS,
> > > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > > .fb_release = drm_fbdev_fb_release,
> > > .fb_destroy = drm_fbdev_fb_destroy,
> > > .fb_mmap = drm_fbdev_fb_mmap,
> > > - .fb_read = drm_fb_helper_sys_read,
> > > - .fb_write = drm_fb_helper_sys_write,
> > > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > > + .fb_read = drm_fbdev_fb_read,
> > > + .fb_write = drm_fbdev_fb_write,
> > > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > > };
> > >
> > > static struct fb_deferred_io drm_fbdev_defio = {
> > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > > index 5ffbb4ed5b35..ab424ddd7665 100644
> > > --- a/include/drm/drm_mode_config.h
> > > +++ b/include/drm/drm_mode_config.h
> > > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > > */
> > > bool prefer_shadow_fbdev;
> > >
> > > - /**
> > > - * @fbdev_use_iomem:
> > > - *
> > > - * Set to true if framebuffer reside in iomem.
> > > - * When set to true memcpy_toio() is used when copying the
> > > framebuffer in
> > > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > > - *
> > > - * FIXME: This should be replaced with a per-mapping is_iomem
> > > - * flag (like ttm does), and then used everywhere in fbdev code.
> > > - */
> > > - bool fbdev_use_iomem;
> > > -
> > > /**
> > > * @quirk_addfb_prefer_xbgr_30bpp:
> > > *
> > > --
> > > 2.28.0
>
>
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:48 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 12:48 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, kraxel, sumit.semwal,
emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, christian.koenig, l.stach
On Fri, Oct 16, 2020 at 02:19:42PM +0200, Thomas Zimmermann wrote:
> Hi
>
> On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
>
> > Hi Thomas.
> >
> > On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > > At least sparc64 requires I/O-specific access to framebuffers. This
> > > patch updates the fbdev console accordingly.
> > >
> > > For drivers with direct access to the framebuffer memory, the callback
> > > functions in struct fb_ops test for the type of memory and call the rsp
> > > fb_sys_ of fb_cfb_ functions.
> > >
> > > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > > interfaces to access the buffer.
> > >
> > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > > I/O memory and avoid a HW exception. With the introduction of struct
> > > dma_buf_map, this is not required any longer. The patch removes the rsp
> > > code from both, bochs and fbdev.
> > >
> > > v4:
> > > * move dma_buf_map changes into separate patch (Daniel)
> > > * TODO list: comment on fbdev updates (Daniel)
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >
> > The original workaround fixed it so we could run qemu with the
> > -nographic option.
> >
> > So I went ahead and tried to run quemu version:
> > v5.0.0-1970-g0b100c8e72-dirty.
> > And with the BOCHS driver built-in.
> >
> > With the following command line:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
> >
> > Behaviour was the same before and after applying this patch.
> > (panic due to VFS: Unable to mount root fs on unknown-block(0,0))
> > So I consider it fixed for real now and not just a workaround.
> >
> > I also tested with:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> > stdio
> >
> > and it worked in both cases too.
>
> FTR, you booted a kernel and got graphics output. The error is simply that
> there was no disk to mount?
The short version: "Yes".
The longer version:
With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-serial stdio" I got graphical output - one penguin.
With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-nographic" I got no graphical output, as implied by the -nographic
option. But the boot continued - where it would panic before when we
accessed IO memory as system memory.
In both cases I got an error because I had not specified any rootfs, so
qemu failed to mount any rootfs. So expected.
Sam
>
> Best regards
> Thomas
>
> >
> > All the comments above are so future-me has an easier time finding how
> > to reproduce this.
> >
> > Tested-by: Sam Ravnborg <sam@ravnborg.org>
> >
> > Sam
> >
> > > ---
> > > Documentation/gpu/todo.rst | 19 ++-
> > > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > > include/drm/drm_mode_config.h | 12 --
> > > 4 files changed, 220 insertions(+), 29 deletions(-)
> > >
> > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > > index 7e6fc3c04add..638b7f704339 100644
> > > --- a/Documentation/gpu/todo.rst
> > > +++ b/Documentation/gpu/todo.rst
> > > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > > ------------------------------------------------
> > >
> > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > > -expects the framebuffer in system memory (or system-like memory).
> > > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> > > +expected the framebuffer in system memory or system-like memory. By employing
> > > +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> > > +as well.
> > >
> > > Contact: Maintainer of the driver you plan to convert
> > >
> > > Level: Intermediate
> > >
> > > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > > +-------------------------------------------------------
> > > +
> > > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > > +being rewritten without dependencies on the fbdev module. Some of the
> > > +helpers could further benefit from using struct dma_buf_map instead of
> > > +raw pointers.
> > > +
> > > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > > +
> > > +Level: Advanced
> > > +
> > > +
> > > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > > -----------------------------------------------------------------
> > >
> > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > > index 13d0d04c4457..853081d186d5 100644
> > > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > > bochs->dev->mode_config.preferred_depth = 24;
> > > bochs->dev->mode_config.prefer_shadow = 0;
> > > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > > true;
> > > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > > index 6212cd7cde1d..462b0c130ebb 100644
> > > --- a/drivers/gpu/drm/drm_fb_helper.c
> > > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> > > }
> > >
> > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> > > - struct drm_clip_rect *clip)
> > > + struct drm_clip_rect *clip,
> > > + struct dma_buf_map *dst)
> > > {
> > > struct drm_framebuffer *fb = fb_helper->fb;
> > > unsigned int cpp = fb->format->cpp[0];
> > > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > > void *src = fb_helper->fbdev->screen_buffer + offset;
> > > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > > size_t len = (clip->x2 - clip->x1) * cpp;
> > > unsigned int y;
> > >
> > > - for (y = clip->y1; y < clip->y2; y++) {
> > > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > > - memcpy(dst, src, len);
> > > - else
> > > - memcpy_toio((void __iomem *)dst, src, len);
> > > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> > > + for (y = clip->y1; y < clip->y2; y++) {
> > > + dma_buf_map_memcpy_to(dst, src, len);
> > > + dma_buf_map_incr(dst, fb->pitches[0]);
> > > src += fb->pitches[0];
> > > - dst += fb->pitches[0];
> > > }
> > > }
> > >
> > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> > > ret = drm_client_buffer_vmap(helper->buffer, &map);
> > > if (ret)
> > > return;
> > > - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> > > }
> > > +
> > > if (helper->fb->funcs->dirty)
> > > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > > &clip_copy, 1);
> > > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> > > }
> > > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> > >
> > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + unsigned long p = *ppos;
> > > + u8 *dst;
> > > + u8 __iomem *src;
> > > + int c, err = 0;
> > > + unsigned long total_size;
> > > + unsigned long alloc_size;
> > > + ssize_t ret = 0;
> > > +
> > > + if (info->state != FBINFO_STATE_RUNNING)
> > > + return -EPERM;
> > > +
> > > + total_size = info->screen_size;
> > > +
> > > + if (total_size == 0)
> > > + total_size = info->fix.smem_len;
> > > +
> > > + if (p >= total_size)
> > > + return 0;
> > > +
> > > + if (count >= total_size)
> > > + count = total_size;
> > > +
> > > + if (count + p > total_size)
> > > + count = total_size - p;
> > > +
> > > + src = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > + alloc_size = min(count, PAGE_SIZE);
> > > +
> > > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > > + if (!dst)
> > > + return -ENOMEM;
> > > +
> > > + while (count) {
> > > + c = min(count, alloc_size);
> > > +
> > > + memcpy_fromio(dst, src, c);
> > > + if (copy_to_user(buf, dst, c)) {
> > > + err = -EFAULT;
> > > + break;
> > > + }
> > > +
> > > + src += c;
> > > + *ppos += c;
> > > + buf += c;
> > > + ret += c;
> > > + count -= c;
> > > + }
> > > +
> > > + kfree(dst);
> > > +
> > > + if (err)
> > > + return err;
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + unsigned long p = *ppos;
> > > + u8 *src;
> > > + u8 __iomem *dst;
> > > + int c, err = 0;
> > > + unsigned long total_size;
> > > + unsigned long alloc_size;
> > > + ssize_t ret = 0;
> > > +
> > > + if (info->state != FBINFO_STATE_RUNNING)
> > > + return -EPERM;
> > > +
> > > + total_size = info->screen_size;
> > > +
> > > + if (total_size == 0)
> > > + total_size = info->fix.smem_len;
> > > +
> > > + if (p > total_size)
> > > + return -EFBIG;
> > > +
> > > + if (count > total_size) {
> > > + err = -EFBIG;
> > > + count = total_size;
> > > + }
> > > +
> > > + if (count + p > total_size) {
> > > + /*
> > > + * The framebuffer is too small. We do the
> > > + * copy operation, but return an error code
> > > + * afterwards. Taken from fbdev.
> > > + */
> > > + if (!err)
> > > + err = -ENOSPC;
> > > + count = total_size - p;
> > > + }
> > > +
> > > + alloc_size = min(count, PAGE_SIZE);
> > > +
> > > + src = kmalloc(alloc_size, GFP_KERNEL);
> > > + if (!src)
> > > + return -ENOMEM;
> > > +
> > > + dst = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > + while (count) {
> > > + c = min(count, alloc_size);
> > > +
> > > + if (copy_from_user(src, buf, c)) {
> > > + err = -EFAULT;
> > > + break;
> > > + }
> > > + memcpy_toio(dst, src, c);
> > > +
> > > + dst += c;
> > > + *ppos += c;
> > > + buf += c;
> > > + ret += c;
> > > + count -= c;
> > > + }
> > > +
> > > + kfree(src);
> > > +
> > > + if (err)
> > > + return err;
> > > +
> > > + return ret;
> > > +}
> > > +
> > > /**
> > > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > > * @info: fbdev registered by the helper
> > > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> > > return -ENODEV;
> > > }
> > >
> > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > > + else
> > > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > > +}
> > > +
> > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > > + else
> > > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > > + const struct fb_fillrect *rect)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_fillrect(info, rect);
> > > + else
> > > + drm_fb_helper_cfb_fillrect(info, rect);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > > + const struct fb_copyarea *area)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_copyarea(info, area);
> > > + else
> > > + drm_fb_helper_cfb_copyarea(info, area);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > > + const struct fb_image *image)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_imageblit(info, image);
> > > + else
> > > + drm_fb_helper_cfb_imageblit(info, image);
> > > +}
> > > +
> > > static const struct fb_ops drm_fbdev_fb_ops = {
> > > .owner = THIS_MODULE,
> > > DRM_FB_HELPER_DEFAULT_OPS,
> > > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > > .fb_release = drm_fbdev_fb_release,
> > > .fb_destroy = drm_fbdev_fb_destroy,
> > > .fb_mmap = drm_fbdev_fb_mmap,
> > > - .fb_read = drm_fb_helper_sys_read,
> > > - .fb_write = drm_fb_helper_sys_write,
> > > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > > + .fb_read = drm_fbdev_fb_read,
> > > + .fb_write = drm_fbdev_fb_write,
> > > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > > };
> > >
> > > static struct fb_deferred_io drm_fbdev_defio = {
> > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > > index 5ffbb4ed5b35..ab424ddd7665 100644
> > > --- a/include/drm/drm_mode_config.h
> > > +++ b/include/drm/drm_mode_config.h
> > > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > > */
> > > bool prefer_shadow_fbdev;
> > >
> > > - /**
> > > - * @fbdev_use_iomem:
> > > - *
> > > - * Set to true if framebuffer reside in iomem.
> > > - * When set to true memcpy_toio() is used when copying the framebuffer in
> > > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > > - *
> > > - * FIXME: This should be replaced with a per-mapping is_iomem
> > > - * flag (like ttm does), and then used everywhere in fbdev code.
> > > - */
> > > - bool fbdev_use_iomem;
> > > -
> > > /**
> > > * @quirk_addfb_prefer_xbgr_30bpp:
> > > *
> > > --
> > > 2.28.0
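The shadow-buffer blit in the patch above walks the clip rectangle row by row, advancing source and destination by the framebuffer pitch. A userspace sketch of that loop follows; the names (`fbmap_sketch`, `blit_clip`, etc.) are invented, and plain memcpy() stands in for dma_buf_map_memcpy_to(), which in the kernel would use memcpy_toio() for I/O-memory mappings.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for struct dma_buf_map; the real one also carries an is_iomem flag. */
struct fbmap_sketch { unsigned char *ptr; };

static void fbmap_incr(struct fbmap_sketch *m, size_t n) { m->ptr += n; }

static void fbmap_memcpy_to(struct fbmap_sketch *m, const void *src, size_t len)
{
	memcpy(m->ptr, src, len); /* kernel: memcpy_toio() when is_iomem is set */
}

/*
 * Row-by-row blit of a clip rectangle from a shadow buffer, mirroring the
 * loop in drm_fb_helper_dirty_blit_real(): seek to the first pixel of the
 * clip rect, then copy one scanline per iteration, advancing by the pitch.
 */
static void blit_clip(struct fbmap_sketch dst, const unsigned char *shadow,
		      unsigned int pitch, unsigned int cpp,
		      unsigned int x1, unsigned int y1,
		      unsigned int x2, unsigned int y2)
{
	size_t offset = (size_t)y1 * pitch + (size_t)x1 * cpp;
	const unsigned char *src = shadow + offset;
	size_t len = (size_t)(x2 - x1) * cpp;
	unsigned int y;

	fbmap_incr(&dst, offset);        /* go to first pixel within clip rect */
	for (y = y1; y < y2; y++) {
		fbmap_memcpy_to(&dst, src, len);
		fbmap_incr(&dst, pitch); /* next scanline in the destination */
		src += pitch;
	}
}
```

Keeping the destination behind the map abstraction is what lets the same loop serve both system-memory and I/O-memory framebuffers.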
>
>
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply [flat|nested] 195+ messages in thread
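For reference, the bounds clamping at the top of drm_fb_helper_cfb_read() in the patch quoted above can be isolated as a pure function. This is an extracted sketch (the function name is invented); it omits the kernel's fallback from info->screen_size to info->fix.smem_len.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Clamping rules of the cfb read path: a read starting at or past the end
 * of the framebuffer returns 0 bytes, and a request that crosses the end
 * is shortened to the space that remains.
 */
static size_t clamp_read_count(size_t total_size, size_t p, size_t count)
{
	if (p >= total_size)
		return 0;
	if (count >= total_size)
		count = total_size;
	if (count + p > total_size)
		count = total_size - p;
	return count;
}
```

The write path in the same patch differs in that it still performs the shortened copy but reports -EFBIG or -ENOSPC afterwards, matching long-standing fbdev behaviour.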
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:48 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 12:48 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, sumit.semwal, emil.velikov,
robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs,
maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, xen-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825, daniel,
alexander.deucher, linux-media, christian.koenig, l.stach
On Fri, Oct 16, 2020 at 02:19:42PM +0200, Thomas Zimmermann wrote:
> Hi
>
> On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
>
> > Hi Thomas.
> >
> > On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > > At least sparc64 requires I/O-specific access to framebuffers. This
> > > patch updates the fbdev console accordingly.
> > >
> > > For drivers with direct access to the framebuffer memory, the callback
> > > functions in struct fb_ops test for the type of memory and call the rsp
> > > fb_sys_ of fb_cfb_ functions.
> > >
> > > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > > interfaces to access the buffer.
> > >
> > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > > I/O memory and avoid a HW exception. With the introduction of struct
> > > dma_buf_map, this is not required any longer. The patch removes the rsp
> > > code from both, bochs and fbdev.
> > >
> > > v4:
> > > * move dma_buf_map changes into separate patch (Daniel)
> > > * TODO list: comment on fbdev updates (Daniel)
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >
> > The original workaround fixed it so we could run qemu with the
> > -nographic option.
> >
> > So I went ahead and tried to run quemu version:
> > v5.0.0-1970-g0b100c8e72-dirty.
> > And with the BOCHS driver built-in.
> >
> > With the following command line:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
> >
> > Behaviour was the same before and after applying this patch.
> > (panic due to VFS: Unable to mount root fs on unknown-block(0,0))
> > So I consider it fixed for real now and not just a workaround.
> >
> > I also tested with:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> > stdio
> >
> > and it worked in both cases too.
>
> FTR, you booted a kernel and got graphics output. The error is simply that
> there was no disk to mount?
The short version "Yes".
The longer version:
With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-serial stdio" I got graphical output - one penguin.
With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-nographic" I got no graphical output, as implied by the -nographic
option. But the boot continued - where it would panic before when we
accessed IO memory as system memory.
In both cases I got an error because I had not specified any rootfs, so
qemu failed to mount any rootfs. So expected.
Sam
>
> Best regards
> Thomas
>
> >
> > All the comments above so future-me have an easier time finding how to
> > reproduce.
> >
> > Tested-by: Sam Ravnborg <sam@ravnborg.org>
> >
> > Sam
> >
> > > ---
> > > Documentation/gpu/todo.rst | 19 ++-
> > > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > > include/drm/drm_mode_config.h | 12 --
> > > 4 files changed, 220 insertions(+), 29 deletions(-)
> > >
> > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > > index 7e6fc3c04add..638b7f704339 100644
> > > --- a/Documentation/gpu/todo.rst
> > > +++ b/Documentation/gpu/todo.rst
> > > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > > ------------------------------------------------
> > >
> > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > > -expects the framebuffer in system memory (or system-like memory).
> > > +atomic modesetting and GEM vmap support. Historically, generic fbdev
> > > emulation +expected the framebuffer in system memory or system-like
> > > memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O
> > > memory can be supported +as well.
> > >
> > > Contact: Maintainer of the driver you plan to convert
> > >
> > > Level: Intermediate
> > >
> > > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > > +-------------------------------------------------------
> > > +
> > > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > > +being rewritten without dependencies on the fbdev module. Some of the
> > > +helpers could further benefit from using struct dma_buf_map instead of
> > > +raw pointers.
> > > +
> > > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > > +
> > > +Level: Advanced
> > > +
> > > +
> > > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > > -----------------------------------------------------------------
> > >
> > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c
> > > b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5
> > > 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > > bochs->dev->mode_config.preferred_depth = 24;
> > > bochs->dev->mode_config.prefer_shadow = 0;
> > > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > > true;
> > > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > > diff --git a/drivers/gpu/drm/drm_fb_helper.c
> > > b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 100644
> > > --- a/drivers/gpu/drm/drm_fb_helper.c
> > > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct
> > > work_struct *work) }
> > >
> > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper
> > > *fb_helper,
> > > - struct drm_clip_rect *clip)
> > > + struct drm_clip_rect *clip,
> > > + struct dma_buf_map *dst)
> > > {
> > > struct drm_framebuffer *fb = fb_helper->fb;
> > > unsigned int cpp = fb->format->cpp[0];
> > > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > > void *src = fb_helper->fbdev->screen_buffer + offset;
> > > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > > size_t len = (clip->x2 - clip->x1) * cpp;
> > > unsigned int y;
> > >
> > > - for (y = clip->y1; y < clip->y2; y++) {
> > > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > > - memcpy(dst, src, len);
> > > - else
> > > - memcpy_toio((void __iomem *)dst, src, len);
> > > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip
> > > rect */
> > > + for (y = clip->y1; y < clip->y2; y++) {
> > > + dma_buf_map_memcpy_to(dst, src, len);
> > > + dma_buf_map_incr(dst, fb->pitches[0]);
> > > src += fb->pitches[0];
> > > - dst += fb->pitches[0];
> > > }
> > > }
> > >
> > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct
> > > work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map);
> > > if (ret)
> > > return;
> > > - drm_fb_helper_dirty_blit_real(helper,
> > > &clip_copy);
> > > + drm_fb_helper_dirty_blit_real(helper,
> > > &clip_copy, &map); }
> > > +
> > > if (helper->fb->funcs->dirty)
> > > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > > &clip_copy, 1);
> > > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info
> > > *info, }
> > > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> > >
> > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user
> > > *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + unsigned long p = *ppos;
> > > + u8 *dst;
> > > + u8 __iomem *src;
> > > + int c, err = 0;
> > > + unsigned long total_size;
> > > + unsigned long alloc_size;
> > > + ssize_t ret = 0;
> > > +
> > > + if (info->state != FBINFO_STATE_RUNNING)
> > > + return -EPERM;
> > > +
> > > + total_size = info->screen_size;
> > > +
> > > + if (total_size == 0)
> > > + total_size = info->fix.smem_len;
> > > +
> > > + if (p >= total_size)
> > > + return 0;
> > > +
> > > + if (count >= total_size)
> > > + count = total_size;
> > > +
> > > + if (count + p > total_size)
> > > + count = total_size - p;
> > > +
> > > + src = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > + alloc_size = min(count, PAGE_SIZE);
> > > +
> > > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > > + if (!dst)
> > > + return -ENOMEM;
> > > +
> > > + while (count) {
> > > + c = min(count, alloc_size);
> > > +
> > > + memcpy_fromio(dst, src, c);
> > > + if (copy_to_user(buf, dst, c)) {
> > > + err = -EFAULT;
> > > + break;
> > > + }
> > > +
> > > + src += c;
> > > + *ppos += c;
> > > + buf += c;
> > > + ret += c;
> > > + count -= c;
> > > + }
> > > +
> > > + kfree(dst);
> > > +
> > > + if (err)
> > > + return err;
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char
> > > __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + unsigned long p = *ppos;
> > > + u8 *src;
> > > + u8 __iomem *dst;
> > > + int c, err = 0;
> > > + unsigned long total_size;
> > > + unsigned long alloc_size;
> > > + ssize_t ret = 0;
> > > +
> > > + if (info->state != FBINFO_STATE_RUNNING)
> > > + return -EPERM;
> > > +
> > > + total_size = info->screen_size;
> > > +
> > > + if (total_size == 0)
> > > + total_size = info->fix.smem_len;
> > > +
> > > + if (p > total_size)
> > > + return -EFBIG;
> > > +
> > > + if (count > total_size) {
> > > + err = -EFBIG;
> > > + count = total_size;
> > > + }
> > > +
> > > + if (count + p > total_size) {
> > > + /*
> > > + * The framebuffer is too small. We do the
> > > + * copy operation, but return an error code
> > > + * afterwards. Taken from fbdev.
> > > + */
> > > + if (!err)
> > > + err = -ENOSPC;
> > > + count = total_size - p;
> > > + }
> > > +
> > > + alloc_size = min(count, PAGE_SIZE);
> > > +
> > > + src = kmalloc(alloc_size, GFP_KERNEL);
> > > + if (!src)
> > > + return -ENOMEM;
> > > +
> > > + dst = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > + while (count) {
> > > + c = min(count, alloc_size);
> > > +
> > > + if (copy_from_user(src, buf, c)) {
> > > + err = -EFAULT;
> > > + break;
> > > + }
> > > + memcpy_toio(dst, src, c);
> > > +
> > > + dst += c;
> > > + *ppos += c;
> > > + buf += c;
> > > + ret += c;
> > > + count -= c;
> > > + }
> > > +
> > > + kfree(src);
> > > +
> > > + if (err)
> > > + return err;
> > > +
> > > + return ret;
> > > +}
> > > +
> > > /**
> > > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > > * @info: fbdev registered by the helper
> > > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info,
> > > struct vm_area_struct *vma) return -ENODEV;
> > > }
> > >
> > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > > + else
> > > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > > +}
> > > +
> > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char
> > > __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > > + else
> > > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > > + const struct fb_fillrect *rect)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_fillrect(info, rect);
> > > + else
> > > + drm_fb_helper_cfb_fillrect(info, rect);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > > + const struct fb_copyarea *area)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_copyarea(info, area);
> > > + else
> > > + drm_fb_helper_cfb_copyarea(info, area);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > > + const struct fb_image *image)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_imageblit(info, image);
> > > + else
> > > + drm_fb_helper_cfb_imageblit(info, image);
> > > +}
> > > +
> > > static const struct fb_ops drm_fbdev_fb_ops = {
> > > .owner = THIS_MODULE,
> > > DRM_FB_HELPER_DEFAULT_OPS,
> > > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > > .fb_release = drm_fbdev_fb_release,
> > > .fb_destroy = drm_fbdev_fb_destroy,
> > > .fb_mmap = drm_fbdev_fb_mmap,
> > > - .fb_read = drm_fb_helper_sys_read,
> > > - .fb_write = drm_fb_helper_sys_write,
> > > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > > + .fb_read = drm_fbdev_fb_read,
> > > + .fb_write = drm_fbdev_fb_write,
> > > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > > };
> > >
> > > static struct fb_deferred_io drm_fbdev_defio = {
> > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > > index 5ffbb4ed5b35..ab424ddd7665 100644
> > > --- a/include/drm/drm_mode_config.h
> > > +++ b/include/drm/drm_mode_config.h
> > > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > > */
> > > bool prefer_shadow_fbdev;
> > >
> > > - /**
> > > - * @fbdev_use_iomem:
> > > - *
> > > - * Set to true if framebuffer reside in iomem.
> > > - * When set to true memcpy_toio() is used when copying the
> > > framebuffer in
> > > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > > - *
> > > - * FIXME: This should be replaced with a per-mapping is_iomem
> > > - * flag (like ttm does), and then used everywhere in fbdev code.
> > > - */
> > > - bool fbdev_use_iomem;
> > > -
> > > /**
> > > * @quirk_addfb_prefer_xbgr_30bpp:
> > > *
> > > --
> > > 2.28.0
>
>
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
@ 2020-10-16 12:48 ` Sam Ravnborg
0 siblings, 0 replies; 195+ messages in thread
From: Sam Ravnborg @ 2020-10-16 12:48 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: luben.tuikov, airlied, nouveau, dri-devel, chris, melissa.srw,
ray.huang, kraxel, emil.velikov, linux-samsung-soc, jy0922.shim,
lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, kgene, alyssa.rosenzweig, linux+etnaviv,
spice-devel, bskeggs, etnaviv, hdegoede, xen-devel,
virtualization, sean, apaneers, linux-arm-kernel, linaro-mm-sig,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, yuq825, alexander.deucher, linux-media,
christian.koenig
On Fri, Oct 16, 2020 at 02:19:42PM +0200, Thomas Zimmermann wrote:
> Hi
>
> On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
>
> > Hi Thomas.
> >
> > On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > > At least sparc64 requires I/O-specific access to framebuffers. This
> > > patch updates the fbdev console accordingly.
> > >
> > > For drivers with direct access to the framebuffer memory, the callback
> > > functions in struct fb_ops test for the type of memory and call the rsp
> > > fb_sys_ of fb_cfb_ functions.
> > >
> > > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > > interfaces to access the buffer.
> > >
> > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > > I/O memory and avoid a HW exception. With the introduction of struct
> > > dma_buf_map, this is not required any longer. The patch removes the rsp
> > > code from both, bochs and fbdev.
> > >
> > > v4:
> > > * move dma_buf_map changes into separate patch (Daniel)
> > > * TODO list: comment on fbdev updates (Daniel)
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >
> > The original workaround fixed it so we could run qemu with the
> > -nographic option.
> >
> > So I went ahead and tried to run quemu version:
> > v5.0.0-1970-g0b100c8e72-dirty.
> > And with the BOCHS driver built-in.
> >
> > With the following command line:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
> >
> > Behaviour was the same before and after applying this patch.
> > (panic due to VFS: Unable to mount root fs on unknown-block(0,0))
> > So I consider it fixed for real now and not just a workaround.
> >
> > I also tested with:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> > stdio
> >
> > and it worked in both cases too.
>
> FTR, you booted a kernel and got graphics output. The error is simply that
> there was no disk to mount?
The short version: "Yes".
The longer version:
With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-serial stdio" I got graphical output - one penguin.
With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-nographic" I got no graphical output, as implied by the -nographic
option. But the boot continued - where it would panic before when we
accessed IO memory as system memory.
In both cases I got an error because I had not specified any rootfs, so
the kernel failed to mount a rootfs. That was expected.
Sam
>
> Best regards
> Thomas
>
> >
> > All the comments above are here so future-me has an easier time finding
> > out how to reproduce this.
> >
> > Tested-by: Sam Ravnborg <sam@ravnborg.org>
> >
> > Sam
> >
> > > ---
> > > Documentation/gpu/todo.rst | 19 ++-
> > > drivers/gpu/drm/bochs/bochs_kms.c | 1 -
> > > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++--
> > > include/drm/drm_mode_config.h | 12 --
> > > 4 files changed, 220 insertions(+), 29 deletions(-)
> > >
> > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > > index 7e6fc3c04add..638b7f704339 100644
> > > --- a/Documentation/gpu/todo.rst
> > > +++ b/Documentation/gpu/todo.rst
> > > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > > ------------------------------------------------
> > >
> > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > > -expects the framebuffer in system memory (or system-like memory).
> > > +atomic modesetting and GEM vmap support. Historically, generic fbdev
> > > +emulation expected the framebuffer in system memory or system-like
> > > +memory. By employing struct dma_buf_map, drivers with frambuffers in I/O
> > > +memory can be supported as well.
> > >
> > > Contact: Maintainer of the driver you plan to convert
> > >
> > > Level: Intermediate
> > >
> > > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > > +-------------------------------------------------------
> > > +
> > > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > > +being rewritten without dependencies on the fbdev module. Some of the
> > > +helpers could further benefit from using struct dma_buf_map instead of
> > > +raw pointers.
> > > +
> > > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > > +
> > > +Level: Advanced
> > > +
> > > +
> > > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > > -----------------------------------------------------------------
> > >
> > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > > index 13d0d04c4457..853081d186d5 100644
> > > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > > bochs->dev->mode_config.preferred_depth = 24;
> > > bochs->dev->mode_config.prefer_shadow = 0;
> > > bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > > - bochs->dev->mode_config.fbdev_use_iomem = true;
> > > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =
> > > true;
> > > bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > > index 6212cd7cde1d..462b0c130ebb 100644
> > > --- a/drivers/gpu/drm/drm_fb_helper.c
> > > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> > > }
> > >
> > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> > > - struct drm_clip_rect *clip)
> > > + struct drm_clip_rect *clip,
> > > + struct dma_buf_map *dst)
> > > {
> > > struct drm_framebuffer *fb = fb_helper->fb;
> > > unsigned int cpp = fb->format->cpp[0];
> > > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > > void *src = fb_helper->fbdev->screen_buffer + offset;
> > > - void *dst = fb_helper->buffer->map.vaddr + offset;
> > > size_t len = (clip->x2 - clip->x1) * cpp;
> > > unsigned int y;
> > >
> > > - for (y = clip->y1; y < clip->y2; y++) {
> > > - if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > > - memcpy(dst, src, len);
> > > - else
> > > - memcpy_toio((void __iomem *)dst, src, len);
> > > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> > > + for (y = clip->y1; y < clip->y2; y++) {
> > > + dma_buf_map_memcpy_to(dst, src, len);
> > > + dma_buf_map_incr(dst, fb->pitches[0]);
> > > src += fb->pitches[0];
> > > - dst += fb->pitches[0];
> > > }
> > > }
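[Editor's note: for readers following the blit loop above, the dispatch that dma_buf_map_memcpy_to() performs can be modeled in user space. The sketch below is an approximation, not the kernel code: the real struct in include/linux/dma-buf-map.h holds a union of vaddr/vaddr_iomem and uses memcpy_toio() on the I/O branch; the names mirror the kernel helpers, but the bodies are plain system-memory stand-ins.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* User-space stand-in for the kernel's struct dma_buf_map: one pointer
 * plus a flag that selects system-memory vs. I/O-memory accessors. */
struct dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

/* Mirrors dma_buf_map_incr(): advance the mapping by incr bytes. */
static void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
{
	map->vaddr = (uint8_t *)map->vaddr + incr;
}

/* Mirrors dma_buf_map_memcpy_to(): the kernel picks memcpy() or
 * memcpy_toio() based on is_iomem; both branches collapse to memcpy()
 * in this user-space model. */
static void dma_buf_map_memcpy_to(struct dma_buf_map *map, const void *src,
				  size_t len)
{
	memcpy(map->vaddr, src, len);
}
```

With these helpers, the loop advances both source and destination by the framebuffer pitch per scanline, so only the clip rectangle's bytes are touched.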
> > >
> > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> > > ret = drm_client_buffer_vmap(helper->buffer, &map);
> > > if (ret)
> > > return;
> > > - drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> > > }
> > > +
> > > if (helper->fb->funcs->dirty)
> > > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > > &clip_copy, 1);
> > > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> > > }
> > > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> > >
> > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + unsigned long p = *ppos;
> > > + u8 *dst;
> > > + u8 __iomem *src;
> > > + int c, err = 0;
> > > + unsigned long total_size;
> > > + unsigned long alloc_size;
> > > + ssize_t ret = 0;
> > > +
> > > + if (info->state != FBINFO_STATE_RUNNING)
> > > + return -EPERM;
> > > +
> > > + total_size = info->screen_size;
> > > +
> > > + if (total_size == 0)
> > > + total_size = info->fix.smem_len;
> > > +
> > > + if (p >= total_size)
> > > + return 0;
> > > +
> > > + if (count >= total_size)
> > > + count = total_size;
> > > +
> > > + if (count + p > total_size)
> > > + count = total_size - p;
> > > +
> > > + src = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > + alloc_size = min(count, PAGE_SIZE);
> > > +
> > > + dst = kmalloc(alloc_size, GFP_KERNEL);
> > > + if (!dst)
> > > + return -ENOMEM;
> > > +
> > > + while (count) {
> > > + c = min(count, alloc_size);
> > > +
> > > + memcpy_fromio(dst, src, c);
> > > + if (copy_to_user(buf, dst, c)) {
> > > + err = -EFAULT;
> > > + break;
> > > + }
> > > +
> > > + src += c;
> > > + *ppos += c;
> > > + buf += c;
> > > + ret += c;
> > > + count -= c;
> > > + }
> > > +
> > > + kfree(dst);
> > > +
> > > + if (err)
> > > + return err;
> > > +
> > > + return ret;
> > > +}
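[Editor's note: the read path above follows a common fbdev pattern: bounce the framebuffer contents through a small kernel buffer, at most one page at a time, before handing them to userspace. A user-space model of that loop is sketched below; chunked_read() is a hypothetical name, and memcpy() stands in for both memcpy_fromio() and copy_to_user().]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096UL

/* User-space model of drm_fb_helper_cfb_read()'s copy loop: a bounce
 * buffer of at most PAGE_SIZE bytes shuttles data from the (here:
 * plain-memory) framebuffer into the caller's buffer. Returns the
 * number of bytes read, 0 at end-of-buffer, or -1 on allocation
 * failure; advances *ppos by the bytes read. */
static long chunked_read(const uint8_t *fb, unsigned long total_size,
			 uint8_t *buf, size_t count, unsigned long *ppos)
{
	unsigned long p = *ppos;
	size_t alloc_size, c;
	long ret = 0;
	uint8_t *dst;

	if (p >= total_size)
		return 0;
	if (count > total_size - p)
		count = total_size - p;

	alloc_size = count < PAGE_SIZE ? count : PAGE_SIZE;
	dst = malloc(alloc_size);
	if (!dst)
		return -1;

	while (count) {
		c = count < alloc_size ? count : alloc_size;
		memcpy(dst, fb + *ppos, c);	/* kernel: memcpy_fromio() */
		memcpy(buf, dst, c);		/* kernel: copy_to_user() */
		*ppos += c;
		buf += c;
		ret += c;
		count -= c;
	}

	free(dst);
	return ret;
}
```

The bounce buffer exists because copy_to_user() may fault and must not be fed an __iomem pointer directly; chunking keeps the kernel allocation bounded no matter how large the read is.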
> > > +
> > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + unsigned long p = *ppos;
> > > + u8 *src;
> > > + u8 __iomem *dst;
> > > + int c, err = 0;
> > > + unsigned long total_size;
> > > + unsigned long alloc_size;
> > > + ssize_t ret = 0;
> > > +
> > > + if (info->state != FBINFO_STATE_RUNNING)
> > > + return -EPERM;
> > > +
> > > + total_size = info->screen_size;
> > > +
> > > + if (total_size == 0)
> > > + total_size = info->fix.smem_len;
> > > +
> > > + if (p > total_size)
> > > + return -EFBIG;
> > > +
> > > + if (count > total_size) {
> > > + err = -EFBIG;
> > > + count = total_size;
> > > + }
> > > +
> > > + if (count + p > total_size) {
> > > + /*
> > > + * The framebuffer is too small. We do the
> > > + * copy operation, but return an error code
> > > + * afterwards. Taken from fbdev.
> > > + */
> > > + if (!err)
> > > + err = -ENOSPC;
> > > + count = total_size - p;
> > > + }
> > > +
> > > + alloc_size = min(count, PAGE_SIZE);
> > > +
> > > + src = kmalloc(alloc_size, GFP_KERNEL);
> > > + if (!src)
> > > + return -ENOMEM;
> > > +
> > > + dst = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > + while (count) {
> > > + c = min(count, alloc_size);
> > > +
> > > + if (copy_from_user(src, buf, c)) {
> > > + err = -EFAULT;
> > > + break;
> > > + }
> > > + memcpy_toio(dst, src, c);
> > > +
> > > + dst += c;
> > > + *ppos += c;
> > > + buf += c;
> > > + ret += c;
> > > + count -= c;
> > > + }
> > > +
> > > + kfree(src);
> > > +
> > > + if (err)
> > > + return err;
> > > +
> > > + return ret;
> > > +}
> > > +
> > > /**
> > > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > > * @info: fbdev registered by the helper
> > > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> > > return -ENODEV;
> > > }
> > >
> > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + return drm_fb_helper_sys_read(info, buf, count, ppos);
> > > + else
> > > + return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > > +}
> > > +
> > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + return drm_fb_helper_sys_write(info, buf, count, ppos);
> > > + else
> > > + return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > > + const struct fb_fillrect *rect)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_fillrect(info, rect);
> > > + else
> > > + drm_fb_helper_cfb_fillrect(info, rect);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > > + const struct fb_copyarea *area)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_copyarea(info, area);
> > > + else
> > > + drm_fb_helper_cfb_copyarea(info, area);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > > + const struct fb_image *image)
> > > +{
> > > + struct drm_fb_helper *fb_helper = info->par;
> > > + struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > + drm_fb_helper_sys_imageblit(info, image);
> > > + else
> > > + drm_fb_helper_cfb_imageblit(info, image);
> > > +}
> > > +
> > > static const struct fb_ops drm_fbdev_fb_ops = {
> > > .owner = THIS_MODULE,
> > > DRM_FB_HELPER_DEFAULT_OPS,
> > > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > > .fb_release = drm_fbdev_fb_release,
> > > .fb_destroy = drm_fbdev_fb_destroy,
> > > .fb_mmap = drm_fbdev_fb_mmap,
> > > - .fb_read = drm_fb_helper_sys_read,
> > > - .fb_write = drm_fb_helper_sys_write,
> > > - .fb_fillrect = drm_fb_helper_sys_fillrect,
> > > - .fb_copyarea = drm_fb_helper_sys_copyarea,
> > > - .fb_imageblit = drm_fb_helper_sys_imageblit,
> > > + .fb_read = drm_fbdev_fb_read,
> > > + .fb_write = drm_fbdev_fb_write,
> > > + .fb_fillrect = drm_fbdev_fb_fillrect,
> > > + .fb_copyarea = drm_fbdev_fb_copyarea,
> > > + .fb_imageblit = drm_fbdev_fb_imageblit,
> > > };
> > >
> > > static struct fb_deferred_io drm_fbdev_defio = {
> > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > > index 5ffbb4ed5b35..ab424ddd7665 100644
> > > --- a/include/drm/drm_mode_config.h
> > > +++ b/include/drm/drm_mode_config.h
> > > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > > */
> > > bool prefer_shadow_fbdev;
> > >
> > > - /**
> > > - * @fbdev_use_iomem:
> > > - *
> > > - * Set to true if framebuffer reside in iomem.
> > > - * When set to true memcpy_toio() is used when copying the framebuffer in
> > > - * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > > - *
> > > - * FIXME: This should be replaced with a per-mapping is_iomem
> > > - * flag (like ttm does), and then used everywhere in fbdev code.
> > > - */
> > > - bool fbdev_use_iomem;
> > > -
> > > /**
> > > * @quirk_addfb_prefer_xbgr_30bpp:
> > > *
> > > --
> > > 2.28.0
>
>
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
2020-10-15 14:08 ` Christian König
@ 2020-10-19 9:08 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-19 9:08 UTC (permalink / raw)
To: Christian König, maarten.lankhorst, mripard, airlied,
daniel, sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
Hi Christian
On 15.10.20 16:08, Christian König wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
>> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
>> address space. The mapping's address is returned as struct dma_buf_map.
>> Each function is a simplified version of TTM's existing kmap code. Both
>> functions respect the memory's location and/or writecombine flags.
>>
>> On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
>> two helpers that convert a GEM object into the TTM BO and forward the call
>> to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
>> object callbacks.
>>
>> v4:
>> * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
>> Christian)
>
> Bunch of minor comments below, but over all look very solid to me.
>
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> ---
>> drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
>> drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
>> include/drm/drm_gem_ttm_helper.h | 6 +++
>> include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
>> include/linux/dma-buf-map.h | 20 ++++++++
>> 5 files changed, 164 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
>> index 0e4fb9ba43ad..db4c14d78a30 100644
>> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
>> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
>> }
>> EXPORT_SYMBOL(drm_gem_ttm_print_info);
>> +/**
>> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
>> + * @gem: GEM object.
>> + * @map: [out] returns the dma-buf mapping.
>> + *
>> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
>> + * &drm_gem_object_funcs.vmap callback.
>> + *
>> + * Returns:
>> + * 0 on success, or a negative errno code otherwise.
>> + */
>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>> + struct dma_buf_map *map)
>> +{
>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>> +
>> + return ttm_bo_vmap(bo, map);
>> +
>> +}
>> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
>> +
>> +/**
>> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
>> + * @gem: GEM object.
>> + * @map: dma-buf mapping.
>> + *
>> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
>> + * &drm_gem_object_funcs.vmap callback.
>> + */
>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>> + struct dma_buf_map *map)
>> +{
>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>> +
>> + ttm_bo_vunmap(bo, map);
>> +}
>> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
>> +
>> /**
>> * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
>> * @gem: GEM object.
>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
>> b/drivers/gpu/drm/ttm/ttm_bo_util.c
>> index bdee4df1f3f2..80c42c774c7d 100644
>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>> @@ -32,6 +32,7 @@
>> #include <drm/ttm/ttm_bo_driver.h>
>> #include <drm/ttm/ttm_placement.h>
>> #include <drm/drm_vma_manager.h>
>> +#include <linux/dma-buf-map.h>
>> #include <linux/io.h>
>> #include <linux/highmem.h>
>> #include <linux/wait.h>
>> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>> }
>> EXPORT_SYMBOL(ttm_bo_kunmap);
>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>> +{
>> + struct ttm_resource *mem = &bo->mem;
>> + int ret;
>> +
>> + ret = ttm_mem_io_reserve(bo->bdev, mem);
>> + if (ret)
>> + return ret;
>> +
>> + if (mem->bus.is_iomem) {
>> + void __iomem *vaddr_iomem;
>> + unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
I thought the rule of thumb is to use u64 in source code. Yet TTM only
uses uint*_t types. Is there anything special about TTM?
>
> We have a unit test that allocates an 8GB BO, and that should work on a
> 32bit machine as well :)
>
>> +
>> + if (mem->bus.addr)
>> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
After reading the patch again, I realized that this is the
'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
options here: ignore this case in _vunmap(), or do an ioremap()
unconditionally. Which one is preferable?
Best regards
Thomas
>> + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
>
>> + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
>> + else
>> + vaddr_iomem = ioremap(mem->bus.offset, size);
>> +
>> + if (!vaddr_iomem)
>> + return -ENOMEM;
>> +
>> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>> +
>> + } else {
>> + struct ttm_operation_ctx ctx = {
>> + .interruptible = false,
>> + .no_wait_gpu = false
>> + };
>> + struct ttm_tt *ttm = bo->ttm;
>> + pgprot_t prot;
>> + void *vaddr;
>> +
>> + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
>> +
>> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>> + if (ret)
>> + return ret;
>> +
>> + /*
>> + * We need to use vmap to get the desired page protection
>> + * or to make the buffer object look contiguous.
>> + */
>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should
> be trivial to adapt.
>
> Regards,
> Christian.
>
>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>> + if (!vaddr)
>> + return -ENOMEM;
>> +
>> + dma_buf_map_set_vaddr(map, vaddr);
>> + }
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL(ttm_bo_vmap);
>> +
>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>> *map)
>> +{
>> + if (dma_buf_map_is_null(map))
>> + return;
>> +
>> + if (map->is_iomem)
>> + iounmap(map->vaddr_iomem);
>> + else
>> + vunmap(map->vaddr);
>> + dma_buf_map_clear(map);
>> +
>> + ttm_mem_io_free(bo->bdev, &bo->mem);
>> +}
>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>> +
>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>> bool dst_use_tt)
>> {
>> diff --git a/include/drm/drm_gem_ttm_helper.h
>> b/include/drm/drm_gem_ttm_helper.h
>> index 118cef76f84f..7c6d874910b8 100644
>> --- a/include/drm/drm_gem_ttm_helper.h
>> +++ b/include/drm/drm_gem_ttm_helper.h
>> @@ -10,11 +10,17 @@
>> #include <drm/ttm/ttm_bo_api.h>
>> #include <drm/ttm/ttm_bo_driver.h>
>> +struct dma_buf_map;
>> +
>> #define drm_gem_ttm_of_gem(gem_obj) \
>> container_of(gem_obj, struct ttm_buffer_object, base)
>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
>> indent,
>> const struct drm_gem_object *gem);
>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>> + struct dma_buf_map *map);
>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>> + struct dma_buf_map *map);
>> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>> struct vm_area_struct *vma);
>> diff --git a/include/drm/ttm/ttm_bo_api.h
>> b/include/drm/ttm/ttm_bo_api.h
>> index 37102e45e496..2c59a785374c 100644
>> --- a/include/drm/ttm/ttm_bo_api.h
>> +++ b/include/drm/ttm/ttm_bo_api.h
>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>> struct ttm_bo_device;
>> +struct dma_buf_map;
>> +
>> struct drm_mm_node;
>> struct ttm_placement;
>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>> unsigned long start_page,
>> */
>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>> +/**
>> + * ttm_bo_vmap
>> + *
>> + * @bo: The buffer object.
>> + * @map: pointer to a struct dma_buf_map representing the map.
>> + *
>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
>> + * data in the buffer object. The parameter @map returns the virtual
>> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
>> + *
>> + * Returns:
>> + * -ENOMEM: Out of memory.
>> + * -EINVAL: Invalid range.
>> + */
>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>> +
>> +/**
>> + * ttm_bo_vunmap
>> + *
>> + * @bo: The buffer object.
>> + * @map: Object describing the map to unmap.
>> + *
>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>> + */
>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>> *map);
>> +
>> /**
>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>> *
>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>> index fd1aba545fdf..2e8bbecb5091 100644
>> --- a/include/linux/dma-buf-map.h
>> +++ b/include/linux/dma-buf-map.h
>> @@ -45,6 +45,12 @@
>> *
>> * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>> *
>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>> + *
>> + * .. code-block:: c
>> + *
>> + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>> + *
>> * Test if a mapping is valid with either dma_buf_map_is_set() or
>> * dma_buf_map_is_null().
>> *
>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>> dma_buf_map *map, void *vaddr)
>> map->is_iomem = false;
>> }
>> +/**
>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>> an address in I/O memory
>> + * @map: The dma-buf mapping structure
>> + * @vaddr_iomem: An I/O-memory address
>> + *
>> + * Sets the address and the I/O-memory flag.
>> + */
>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>> + void __iomem *vaddr_iomem)
>> +{
>> + map->vaddr_iomem = vaddr_iomem;
>> + map->is_iomem = true;
>> +}
>> +
>> /**
>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>> for equality
>> * @lhs: The dma-buf mapping structure
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
^ permalink raw reply [flat|nested] 195+ messages in thread
'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
options here: ignore this case in _vunmap(), or do an ioremap()
unconditionally. Which one is preferable?
Best regards
Thomas
>> + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
>
>> + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
>> + else
>> + vaddr_iomem = ioremap(mem->bus.offset, size);
>> +
>> + if (!vaddr_iomem)
>> + return -ENOMEM;
>> +
>> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>> +
>> + } else {
>> + struct ttm_operation_ctx ctx = {
>> + .interruptible = false,
>> + .no_wait_gpu = false
>> + };
>> + struct ttm_tt *ttm = bo->ttm;
>> + pgprot_t prot;
>> + void *vaddr;
>> +
>> + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
>> +
>> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>> + if (ret)
>> + return ret;
>> +
>> + /*
>> + * We need to use vmap to get the desired page protection
>> + * or to make the buffer object look contiguous.
>> + */
>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should
> be trivial to adapt.
>
> Regards,
> Christian.
>
>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>> + if (!vaddr)
>> + return -ENOMEM;
>> +
>> + dma_buf_map_set_vaddr(map, vaddr);
>> + }
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL(ttm_bo_vmap);
>> +
>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>> *map)
>> +{
>> + if (dma_buf_map_is_null(map))
>> + return;
>> +
>> + if (map->is_iomem)
>> + iounmap(map->vaddr_iomem);
>> + else
>> + vunmap(map->vaddr);
>> + dma_buf_map_clear(map);
>> +
>> + ttm_mem_io_free(bo->bdev, &bo->mem);
>> +}
>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>> +
>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>> bool dst_use_tt)
>> {
>> diff --git a/include/drm/drm_gem_ttm_helper.h
>> b/include/drm/drm_gem_ttm_helper.h
>> index 118cef76f84f..7c6d874910b8 100644
>> --- a/include/drm/drm_gem_ttm_helper.h
>> +++ b/include/drm/drm_gem_ttm_helper.h
>> @@ -10,11 +10,17 @@
>> #include <drm/ttm/ttm_bo_api.h>
>> #include <drm/ttm/ttm_bo_driver.h>
>> +struct dma_buf_map;
>> +
>> #define drm_gem_ttm_of_gem(gem_obj) \
>> container_of(gem_obj, struct ttm_buffer_object, base)
>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
>> indent,
>> const struct drm_gem_object *gem);
>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>> + struct dma_buf_map *map);
>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>> + struct dma_buf_map *map);
>> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>> struct vm_area_struct *vma);
>> diff --git a/include/drm/ttm/ttm_bo_api.h
>> b/include/drm/ttm/ttm_bo_api.h
>> index 37102e45e496..2c59a785374c 100644
>> --- a/include/drm/ttm/ttm_bo_api.h
>> +++ b/include/drm/ttm/ttm_bo_api.h
>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>> struct ttm_bo_device;
>> +struct dma_buf_map;
>> +
>> struct drm_mm_node;
>> struct ttm_placement;
>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>> unsigned long start_page,
>> */
>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>> +/**
>> + * ttm_bo_vmap
>> + *
>> + * @bo: The buffer object.
>> + * @map: pointer to a struct dma_buf_map representing the map.
>> + *
>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
>> + * data in the buffer object. The parameter @map returns the virtual
>> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
>> + *
>> + * Returns
>> + * -ENOMEM: Out of memory.
>> + * -EINVAL: Invalid range.
>> + */
>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>> +
>> +/**
>> + * ttm_bo_vunmap
>> + *
>> + * @bo: The buffer object.
>> + * @map: Object describing the map to unmap.
>> + *
>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>> + */
>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>> *map);
>> +
>> /**
>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>> *
>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>> index fd1aba545fdf..2e8bbecb5091 100644
>> --- a/include/linux/dma-buf-map.h
>> +++ b/include/linux/dma-buf-map.h
>> @@ -45,6 +45,12 @@
>> *
>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
>> *
>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>> + *
>> + * .. code-block:: c
>> + *
>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
>> + *
>> * Test if a mapping is valid with either dma_buf_map_is_set() or
>> * dma_buf_map_is_null().
>> *
>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>> dma_buf_map *map, void *vaddr)
>> map->is_iomem = false;
>> }
>> +/**
>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>> an address in I/O memory
>> + * @map: The dma-buf mapping structure
>> + * @vaddr_iomem: An I/O-memory address
>> + *
>> + * Sets the address and the I/O-memory flag.
>> + */
>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>> + void __iomem *vaddr_iomem)
>> +{
>> + map->vaddr_iomem = vaddr_iomem;
>> + map->is_iomem = true;
>> +}
>> +
>> /**
>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>> for equality
>> * @lhs: The dma-buf mapping structure
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-19 9:08 ` Thomas Zimmermann
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Zimmermann @ 2020-10-19 9:08 UTC (permalink / raw)
To: Christian König, maarten.lankhorst, mripard, airlied,
daniel, sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
spice-devel, xen-devel, linux-arm-kernel, linux-media
Hi Christian
On 15.10.20 16:08, Christian König wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
>> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
>> address space. The mapping's address is returned as struct dma_buf_map.
>> Each function is a simplified version of TTM's existing kmap code. Both
>> functions respect the memory's location and/or writecombine flags.
>>
>> On top of TTM's functions, the GEM TTM helpers gain drm_gem_ttm_{vmap,vunmap}(),
>> two helpers that convert a GEM object into the TTM BO and forward the
>> call to TTM's vmap/vunmap. These helpers can be dropped into the
>> respective GEM object callbacks.
>>
>> v4:
>> * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
>> Christian)
>
> Bunch of minor comments below, but over all look very solid to me.
>
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> ---
>> drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
>> drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++
>> include/drm/drm_gem_ttm_helper.h | 6 +++
>> include/drm/ttm/ttm_bo_api.h | 28 +++++++++++
>> include/linux/dma-buf-map.h | 20 ++++++++
>> 5 files changed, 164 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
>> b/drivers/gpu/drm/drm_gem_ttm_helper.c
>> index 0e4fb9ba43ad..db4c14d78a30 100644
>> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
>> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
>> unsigned int indent,
>> }
>> EXPORT_SYMBOL(drm_gem_ttm_print_info);
>> +/**
>> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
>> + * @gem: GEM object.
>> + * @map: [out] returns the dma-buf mapping.
>> + *
>> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
>> + * &drm_gem_object_funcs.vmap callback.
>> + *
>> + * Returns:
>> + * 0 on success, or a negative errno code otherwise.
>> + */
>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>> + struct dma_buf_map *map)
>> +{
>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>> +
>> + return ttm_bo_vmap(bo, map);
>> +
>> +}
>> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
>> +
>> +/**
>> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
>> + * @gem: GEM object.
>> + * @map: dma-buf mapping.
>> + *
>> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
>> + * &drm_gem_object_funcs.vunmap callback.
>> + */
>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>> + struct dma_buf_map *map)
>> +{
>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>> +
>> + ttm_bo_vunmap(bo, map);
>> +}
>> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
>> +
>> /**
>> * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
>> * @gem: GEM object.
>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
>> b/drivers/gpu/drm/ttm/ttm_bo_util.c
>> index bdee4df1f3f2..80c42c774c7d 100644
>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>> @@ -32,6 +32,7 @@
>> #include <drm/ttm/ttm_bo_driver.h>
>> #include <drm/ttm/ttm_placement.h>
>> #include <drm/drm_vma_manager.h>
>> +#include <linux/dma-buf-map.h>
>> #include <linux/io.h>
>> #include <linux/highmem.h>
>> #include <linux/wait.h>
>> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>> }
>> EXPORT_SYMBOL(ttm_bo_kunmap);
>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>> +{
>> + struct ttm_resource *mem = &bo->mem;
>> + int ret;
>> +
>> + ret = ttm_mem_io_reserve(bo->bdev, mem);
>> + if (ret)
>> + return ret;
>> +
>> + if (mem->bus.is_iomem) {
>> + void __iomem *vaddr_iomem;
>> + unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
I thought the rule of thumb is to use u64 in source code. Yet TTM only
uses uint*_t types. Is there anything special about TTM?
>
> We have a unit test of allocating an 8GB BO and that should work on a
> 32bit machine as well :)
>
>> +
>> + if (mem->bus.addr)
>> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
After reading the patch again, I realized that this is the
'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
options here: ignore this case in _vunmap(), or do an ioremap()
unconditionally. Which one is preferable?
Best regards
Thomas
>> + else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
>
>> + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
>> + else
>> + vaddr_iomem = ioremap(mem->bus.offset, size);
>> +
>> + if (!vaddr_iomem)
>> + return -ENOMEM;
>> +
>> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>> +
>> + } else {
>> + struct ttm_operation_ctx ctx = {
>> + .interruptible = false,
>> + .no_wait_gpu = false
>> + };
>> + struct ttm_tt *ttm = bo->ttm;
>> + pgprot_t prot;
>> + void *vaddr;
>> +
>> + BUG_ON(!ttm);
>
> I think we can drop this, populate will just crash badly anyway.
>
>> +
>> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>> + if (ret)
>> + return ret;
>> +
>> + /*
>> + * We need to use vmap to get the desired page protection
>> + * or to make the buffer object look contiguous.
>> + */
>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should
> be trivial to adapt.
>
> Regards,
> Christian.
>
>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>> + if (!vaddr)
>> + return -ENOMEM;
>> +
>> + dma_buf_map_set_vaddr(map, vaddr);
>> + }
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL(ttm_bo_vmap);
>> +
>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>> *map)
>> +{
>> + if (dma_buf_map_is_null(map))
>> + return;
>> +
>> + if (map->is_iomem)
>> + iounmap(map->vaddr_iomem);
>> + else
>> + vunmap(map->vaddr);
>> + dma_buf_map_clear(map);
>> +
>> + ttm_mem_io_free(bo->bdev, &bo->mem);
>> +}
>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>> +
>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>> bool dst_use_tt)
>> {
>> diff --git a/include/drm/drm_gem_ttm_helper.h
>> b/include/drm/drm_gem_ttm_helper.h
>> index 118cef76f84f..7c6d874910b8 100644
>> --- a/include/drm/drm_gem_ttm_helper.h
>> +++ b/include/drm/drm_gem_ttm_helper.h
>> @@ -10,11 +10,17 @@
>> #include <drm/ttm/ttm_bo_api.h>
>> #include <drm/ttm/ttm_bo_driver.h>
>> +struct dma_buf_map;
>> +
>> #define drm_gem_ttm_of_gem(gem_obj) \
>> container_of(gem_obj, struct ttm_buffer_object, base)
>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
>> indent,
>> const struct drm_gem_object *gem);
>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>> + struct dma_buf_map *map);
>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>> + struct dma_buf_map *map);
>> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>> struct vm_area_struct *vma);
>> diff --git a/include/drm/ttm/ttm_bo_api.h
>> b/include/drm/ttm/ttm_bo_api.h
>> index 37102e45e496..2c59a785374c 100644
>> --- a/include/drm/ttm/ttm_bo_api.h
>> +++ b/include/drm/ttm/ttm_bo_api.h
>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>> struct ttm_bo_device;
>> +struct dma_buf_map;
>> +
>> struct drm_mm_node;
>> struct ttm_placement;
>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>> unsigned long start_page,
>> */
>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>> +/**
>> + * ttm_bo_vmap
>> + *
>> + * @bo: The buffer object.
>> + * @map: pointer to a struct dma_buf_map representing the map.
>> + *
>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
>> + * data in the buffer object. The parameter @map returns the virtual
>> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
>> + *
>> + * Returns
>> + * -ENOMEM: Out of memory.
>> + * -EINVAL: Invalid range.
>> + */
>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>> +
>> +/**
>> + * ttm_bo_vunmap
>> + *
>> + * @bo: The buffer object.
>> + * @map: Object describing the map to unmap.
>> + *
>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>> + */
>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>> *map);
>> +
>> /**
>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>> *
>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>> index fd1aba545fdf..2e8bbecb5091 100644
>> --- a/include/linux/dma-buf-map.h
>> +++ b/include/linux/dma-buf-map.h
>> @@ -45,6 +45,12 @@
>> *
>> * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>> *
>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>> + *
>> + * .. code-block:: c
>> + *
>> + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>> + *
>> * Test if a mapping is valid with either dma_buf_map_is_set() or
>> * dma_buf_map_is_null().
>> *
>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>> dma_buf_map *map, void *vaddr)
>> map->is_iomem = false;
>> }
>> +/**
>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>> an address in I/O memory
>> + * @map: The dma-buf mapping structure
>> + * @vaddr_iomem: An I/O-memory address
>> + *
>> + * Sets the address and the I/O-memory flag.
>> + */
>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>> + void __iomem *vaddr_iomem)
>> +{
>> + map->vaddr_iomem = vaddr_iomem;
>> + map->is_iomem = true;
>> +}
>> +
>> /**
>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>> for equality
>> * @lhs: The dma-buf mapping structure
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-19 9:45 ` Christian König
-1 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-19 9:45 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
xen-devel, spice-devel, linux-arm-kernel, linux-media
Hi Thomas,
[SNIP]
>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>> +{
>>> + struct ttm_resource *mem = &bo->mem;
>>> + int ret;
>>> +
>>> + ret = ttm_mem_io_reserve(bo->bdev, mem);
>>> + if (ret)
>>> + return ret;
>>> +
>>> + if (mem->bus.is_iomem) {
>>> + void __iomem *vaddr_iomem;
>>> + unsigned long size = bo->num_pages << PAGE_SHIFT;
>> Please use uint64_t here and make sure to cast bo->num_pages before
>> shifting.
> I thought the rule of thumb is to use u64 in source code. Yet TTM only
> uses uint*_t types. Is there anything special about TTM?
Last I heard, you can use both; my personal preference is the
uint*_t types because they are part of a higher-level standard.
>> We have a unit test of allocating an 8GB BO and that should work on a
>> 32bit machine as well :)
>>
>>> +
>>> + if (mem->bus.addr)
>>> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> After reading the patch again, I realized that this is the
> 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
> options here: ignore this case in _vunmap(), or do an ioremap()
> unconditionally. Which one is preferable?
ioremap would be very very bad, so we should just do nothing.
Thanks,
Christian.
>
> Best regards
> Thomas
>
>>> + else if (mem->placement & TTM_PL_FLAG_WC)
>> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
>> mem->bus.caching enum as replacement.
>>
>>> + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
>>> + else
>>> + vaddr_iomem = ioremap(mem->bus.offset, size);
>>> +
>>> + if (!vaddr_iomem)
>>> + return -ENOMEM;
>>> +
>>> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>>> +
>>> + } else {
>>> + struct ttm_operation_ctx ctx = {
>>> + .interruptible = false,
>>> + .no_wait_gpu = false
>>> + };
>>> + struct ttm_tt *ttm = bo->ttm;
>>> + pgprot_t prot;
>>> + void *vaddr;
>>> +
>>> + BUG_ON(!ttm);
>> I think we can drop this, populate will just crash badly anyway.
>>
>>> +
>>> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>>> + if (ret)
>>> + return ret;
>>> +
>>> + /*
>>> + * We need to use vmap to get the desired page protection
>>> + * or to make the buffer object look contiguous.
>>> + */
>>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>> The calling convention has changed on drm-misc-next as well, but should
>> be trivial to adapt.
>>
>> Regards,
>> Christian.
>>
>>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>>> + if (!vaddr)
>>> + return -ENOMEM;
>>> +
>>> + dma_buf_map_set_vaddr(map, vaddr);
>>> + }
>>> +
>>> + return 0;
>>> +}
>>> +EXPORT_SYMBOL(ttm_bo_vmap);
>>> +
>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>>> *map)
>>> +{
>>> + if (dma_buf_map_is_null(map))
>>> + return;
>>> +
>>> + if (map->is_iomem)
>>> + iounmap(map->vaddr_iomem);
>>> + else
>>> + vunmap(map->vaddr);
>>> + dma_buf_map_clear(map);
>>> +
>>> + ttm_mem_io_free(bo->bdev, &bo->mem);
>>> +}
>>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>>> +
>>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>>> bool dst_use_tt)
>>> {
>>> diff --git a/include/drm/drm_gem_ttm_helper.h
>>> b/include/drm/drm_gem_ttm_helper.h
>>> index 118cef76f84f..7c6d874910b8 100644
>>> --- a/include/drm/drm_gem_ttm_helper.h
>>> +++ b/include/drm/drm_gem_ttm_helper.h
>>> @@ -10,11 +10,17 @@
>>> #include <drm/ttm/ttm_bo_api.h>
>>> #include <drm/ttm/ttm_bo_driver.h>
>>> +struct dma_buf_map;
>>> +
>>> #define drm_gem_ttm_of_gem(gem_obj) \
>>> container_of(gem_obj, struct ttm_buffer_object, base)
>>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
>>> indent,
>>> const struct drm_gem_object *gem);
>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>> + struct dma_buf_map *map);
>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>> + struct dma_buf_map *map);
>>> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>>> struct vm_area_struct *vma);
>>> diff --git a/include/drm/ttm/ttm_bo_api.h
>>> b/include/drm/ttm/ttm_bo_api.h
>>> index 37102e45e496..2c59a785374c 100644
>>> --- a/include/drm/ttm/ttm_bo_api.h
>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>>> struct ttm_bo_device;
>>> +struct dma_buf_map;
>>> +
>>> struct drm_mm_node;
>>> struct ttm_placement;
>>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>> unsigned long start_page,
>>> */
>>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>>> +/**
>>> + * ttm_bo_vmap
>>> + *
>>> + * @bo: The buffer object.
>>> + * @map: pointer to a struct dma_buf_map representing the map.
>>> + *
>>> + * Sets up a kernel virtual mapping of the data in the buffer object,
>>> + * using ioremap or vmap as appropriate. The parameter @map returns the
>>> + * virtual address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
>>> + *
>>> + * Returns
>>> + * -ENOMEM: Out of memory.
>>> + * -EINVAL: Invalid range.
>>> + */
>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>> +
>>> +/**
>>> + * ttm_bo_vunmap
>>> + *
>>> + * @bo: The buffer object.
>>> + * @map: Object describing the map to unmap.
>>> + *
>>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>>> + */
>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>>> *map);
>>> +
>>> /**
>>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>>> *
>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>> index fd1aba545fdf..2e8bbecb5091 100644
>>> --- a/include/linux/dma-buf-map.h
>>> +++ b/include/linux/dma-buf-map.h
>>> @@ -45,6 +45,12 @@
>>> *
>>> * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>> *
>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>> + *
>>> + * .. code-block:: c
>>> + *
>>> + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>> + *
>>> * Test if a mapping is valid with either dma_buf_map_is_set() or
>>> * dma_buf_map_is_null().
>>> *
>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>> dma_buf_map *map, void *vaddr)
>>> map->is_iomem = false;
>>> }
>>> +/**
>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>> an address in I/O memory
>>> + * @map: The dma-buf mapping structure
>>> + * @vaddr_iomem: An I/O-memory address
>>> + *
>>> + * Sets the address and the I/O-memory flag.
>>> + */
>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>> + void __iomem *vaddr_iomem)
>>> +{
>>> + map->vaddr_iomem = vaddr_iomem;
>>> + map->is_iomem = true;
>>> +}
>>> +
>>> /**
>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>> for equality
>>> * @lhs: The dma-buf mapping structure
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-19 9:45 ` Christian König
0 siblings, 0 replies; 195+ messages in thread
From: Christian König @ 2020-10-19 9:45 UTC (permalink / raw)
To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang
Cc: linux-samsung-soc, lima, nouveau, etnaviv, amd-gfx,
virtualization, linaro-mm-sig, linux-rockchip, dri-devel,
spice-devel, xen-devel, linux-arm-kernel, linux-media
Hi Thomas,
[SNIP]
>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>> +{
>>> + struct ttm_resource *mem = &bo->mem;
>>> + int ret;
>>> +
>>> + ret = ttm_mem_io_reserve(bo->bdev, mem);
>>> + if (ret)
>>> + return ret;
>>> +
>>> + if (mem->bus.is_iomem) {
>>> + void __iomem *vaddr_iomem;
>>> + unsigned long size = bo->num_pages << PAGE_SHIFT;
>> Please use uint64_t here and make sure to cast bo->num_pages before
>> shifting.
> I thought the rule of thumb is to use u64 in source code. Yet TTM only
> uses uint*_t types. Is there anything special about TTM?
My last status is that you can use both and my personal preference is to
use the uint*_t types because they are part of a higher level standard.
>> We have a unit test that allocates an 8 GiB BO, and that should work on a
>> 32-bit machine as well :)
>>
>>> +
>>> + if (mem->bus.addr)
>>> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> After reading the patch again, I realized that this is the
> 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
> options here: ignore this case in _vunmap(), or do an ioremap()
> unconditionally. Which one is preferable?
ioremap would be very very bad, so we should just do nothing.
Thanks,
Christian.
>
> Best regards
> Thomas
>
>>> + else if (mem->placement & TTM_PL_FLAG_WC)
>> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
>> mem->bus.caching enum as replacement.
>>
>>> + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
>>> + else
>>> + vaddr_iomem = ioremap(mem->bus.offset, size);
>>> +
>>> + if (!vaddr_iomem)
>>> + return -ENOMEM;
>>> +
>>> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>>> +
>>> + } else {
>>> + struct ttm_operation_ctx ctx = {
>>> + .interruptible = false,
>>> + .no_wait_gpu = false
>>> + };
>>> + struct ttm_tt *ttm = bo->ttm;
>>> + pgprot_t prot;
>>> + void *vaddr;
>>> +
>>> + BUG_ON(!ttm);
>> I think we can drop this, populate will just crash badly anyway.
>>
>>> +
>>> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>>> + if (ret)
>>> + return ret;
>>> +
>>> + /*
>>> + * We need to use vmap to get the desired page protection
>>> + * or to make the buffer object look contiguous.
>>> + */
>>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>> The calling convention has changed on drm-misc-next as well, but should
>> be trivial to adapt.
>>
>> Regards,
>> Christian.
>>
>>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>>> + if (!vaddr)
>>> + return -ENOMEM;
>>> +
>>> + dma_buf_map_set_vaddr(map, vaddr);
>>> + }
>>> +
>>> + return 0;
>>> +}
>>> +EXPORT_SYMBOL(ttm_bo_vmap);
>>> +
>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>>> *map)
>>> +{
>>> + if (dma_buf_map_is_null(map))
>>> + return;
>>> +
>>> + if (map->is_iomem)
>>> + iounmap(map->vaddr_iomem);
>>> + else
>>> + vunmap(map->vaddr);
>>> + dma_buf_map_clear(map);
>>> +
>>> + ttm_mem_io_free(bo->bdev, &bo->mem);
>>> +}
>>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>>> +
>>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>>> bool dst_use_tt)
>>> {
>>> diff --git a/include/drm/drm_gem_ttm_helper.h
>>> b/include/drm/drm_gem_ttm_helper.h
>>> index 118cef76f84f..7c6d874910b8 100644
>>> --- a/include/drm/drm_gem_ttm_helper.h
>>> +++ b/include/drm/drm_gem_ttm_helper.h
>>> @@ -10,11 +10,17 @@
>>> #include <drm/ttm/ttm_bo_api.h>
>>> #include <drm/ttm/ttm_bo_driver.h>
>>> +struct dma_buf_map;
>>> +
>>> #define drm_gem_ttm_of_gem(gem_obj) \
>>> container_of(gem_obj, struct ttm_buffer_object, base)
>>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
>>> indent,
>>> const struct drm_gem_object *gem);
>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>> + struct dma_buf_map *map);
>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>> + struct dma_buf_map *map);
>>> int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>>> struct vm_area_struct *vma);
>>> diff --git a/include/drm/ttm/ttm_bo_api.h
>>> b/include/drm/ttm/ttm_bo_api.h
>>> index 37102e45e496..2c59a785374c 100644
>>> --- a/include/drm/ttm/ttm_bo_api.h
>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>>> struct ttm_bo_device;
>>> +struct dma_buf_map;
>>> +
>>> struct drm_mm_node;
>>> struct ttm_placement;
>>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>> unsigned long start_page,
>>> */
>>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>>> +/**
>>> + * ttm_bo_vmap
>>> + *
>>> + * @bo: The buffer object.
>>> + * @map: pointer to a struct dma_buf_map representing the map.
>>> + *
>>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
>>> + * data in the buffer object. The parameter @map returns the virtual
>>> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
>>> + *
>>> + * Returns
>>> + * -ENOMEM: Out of memory.
>>> + * -EINVAL: Invalid range.
>>> + */
>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>> +
>>> +/**
>>> + * ttm_bo_vunmap
>>> + *
>>> + * @bo: The buffer object.
>>> + * @map: Object describing the map to unmap.
>>> + *
>>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>>> + */
>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>>> *map);
>>> +
>>> /**
>>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>>> *
>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>> index fd1aba545fdf..2e8bbecb5091 100644
>>> --- a/include/linux/dma-buf-map.h
>>> +++ b/include/linux/dma-buf-map.h
>>> @@ -45,6 +45,12 @@
>>> *
>>> * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>> *
>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>> + *
>>> + * .. code-block:: c
>>> + *
>>> * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>> + *
>>> * Test if a mapping is valid with either dma_buf_map_is_set() or
>>> * dma_buf_map_is_null().
>>> *
>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>> dma_buf_map *map, void *vaddr)
>>> map->is_iomem = false;
>>> }
>>> +/**
>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>> an address in I/O memory
>>> + * @map: The dma-buf mapping structure
>>> + * @vaddr_iomem: An I/O-memory address
>>> + *
>>> + * Sets the address and the I/O-memory flag.
>>> + */
>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>> + void __iomem *vaddr_iomem)
>>> +{
>>> + map->vaddr_iomem = vaddr_iomem;
>>> + map->is_iomem = true;
>>> +}
>>> +
>>> /**
>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>> for equality
>>> * @lhs: The dma-buf mapping structure
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
2020-10-19 9:45 ` Christian König
@ 2020-10-19 15:46 ` Daniel Vetter
-1 siblings, 0 replies; 195+ messages in thread
From: Daniel Vetter @ 2020-10-19 15:46 UTC (permalink / raw)
To: Christian König
Cc: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers, linus.walleij, melissa.srw,
chris, miaoqinglang, linux-samsung-soc, lima, nouveau, etnaviv,
amd-gfx, virtualization, linaro-mm-sig, linux-rockchip,
dri-devel, xen-devel, spice-devel, linux-arm-kernel, linux-media
On Mon, Oct 19, 2020 at 11:45:05AM +0200, Christian König wrote:
> Hi Thomas,
>
> [SNIP]
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > > +{
> > > > + struct ttm_resource *mem = &bo->mem;
> > > > + int ret;
> > > > +
> > > > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + if (mem->bus.is_iomem) {
> > > > + void __iomem *vaddr_iomem;
> > > > + unsigned long size = bo->num_pages << PAGE_SHIFT;
> > > Please use uint64_t here and make sure to cast bo->num_pages before
> > > shifting.
> > I thought the rule of thumb is to use u64 in source code. Yet TTM only
> > uses uint*_t types. Is there anything special about TTM?
>
> My latest understanding is that you can use both; my personal preference is to
> use the uint*_t types because they are part of a higher-level standard.
Yeah the only hard rule is that in uapi headers you need to use the __u64
and similar typedefs, to avoid cluttering the namespace for unrelated
stuff in userspace.
In the kernel, C99 types are perfectly fine, and I think they are slowly on
the rise.
-Daniel
> [SNIP]
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-19 15:46 ` Daniel Vetter
0 siblings, 0 replies; 195+ messages in thread
From: Daniel Vetter @ 2020-10-19 15:46 UTC (permalink / raw)
To: Christian König
Cc: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
sam, alexander.deucher, kraxel, l.stach, linux+etnaviv,
christian.gmeiner, inki.dae, jy0922.shim, sw0312.kim,
kyungmin.park, kgene, krzk, yuq825, bskeggs, robh, tomeu.vizoso,
steven.price, alyssa.rosenzweig, hjc, heiko, hdegoede, sean,
eric, oleksandr_andrushchenko, ray.huang, sumit.semwal,
emil.velikov, luben.tuikov, apaneers
On Mon, Oct 19, 2020 at 11:45:05AM +0200, Christian König wrote:
> Hi Thomas,
>
> [SNIP]
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > > +{
> > > > + struct ttm_resource *mem = &bo->mem;
> > > > + int ret;
> > > > +
> > > > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + if (mem->bus.is_iomem) {
> > > > + void __iomem *vaddr_iomem;
> > > > + unsigned long size = bo->num_pages << PAGE_SHIFT;
> > > Please use uint64_t here and make sure to cast bo->num_pages before
> > > shifting.
> > I thought the rule of thumb is to use u64 in source code. Yet TTM only
> > uses uint*_t types. Is there anything special about TTM?
>
> My last status is that you can use both and my personal preference is to use
> the uint*_t types because they are part of a higher level standard.
Yeah the only hard rule is that in uapi headers you need to use the __u64
and similar typedefs, to avoid cluttering the namespace for unrelated
stuff in userspace.
In the kernel c99 types are perfectly fine, and I think slowly on the
rise.
-Daniel
>
> > > We have an unit tests of allocating a 8GB BO and that should work on a
> > > 32bit machine as well :)
> > >
> > > > +
> > > > + if (mem->bus.addr)
> > > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > I after reading the patch again, I realized that this is the
> > 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
> > options here: ignore this case in _vunmap(), or do an ioremap()
> > unconditionally. Which one is preferable?
>
> ioremap would be very very bad, so we should just do nothing.
>
> Thanks,
> Christian.
>
> >
> > Best regards
> > Thomas
> >
> > > > + else if (mem->placement & TTM_PL_FLAG_WC)
> > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> > > mem->bus.caching enum as replacement.
> > >
> > > > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > > > + else
> > > > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > > > +
> > > > + if (!vaddr_iomem)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > > > +
> > > > + } else {
> > > > + struct ttm_operation_ctx ctx = {
> > > > + .interruptible = false,
> > > > + .no_wait_gpu = false
> > > > + };
> > > > + struct ttm_tt *ttm = bo->ttm;
> > > > + pgprot_t prot;
> > > > + void *vaddr;
> > > > +
> > > > + BUG_ON(!ttm);
> > > I think we can drop this, populate will just crash badly anyway.
> > >
> > > > +
> > > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + /*
> > > > + * We need to use vmap to get the desired page protection
> > > > + * or to make the buffer object look contiguous.
> > > > + */
> > > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> > > The calling convention has changed on drm-misc-next as well, but should
> > > be trivial to adapt.
> > >
> > > Regards,
> > > Christian.
> > >
> > > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > > > + if (!vaddr)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr(map, vaddr);
> > > > + }
> > > > +
> > > > + return 0;
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vmap);
> > > > +
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map)
> > > > +{
> > > > + if (dma_buf_map_is_null(map))
> > > > + return;
> > > > +
> > > > + if (map->is_iomem)
> > > > + iounmap(map->vaddr_iomem);
> > > > + else
> > > > + vunmap(map->vaddr);
> > > > + dma_buf_map_clear(map);
> > > > +
> > > > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > > > +
> > > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > > > bool dst_use_tt)
> > > > {
> > > > diff --git a/include/drm/drm_gem_ttm_helper.h
> > > > b/include/drm/drm_gem_ttm_helper.h
> > > > index 118cef76f84f..7c6d874910b8 100644
> > > > --- a/include/drm/drm_gem_ttm_helper.h
> > > > +++ b/include/drm/drm_gem_ttm_helper.h
> > > > @@ -10,11 +10,17 @@
> > > > #include <drm/ttm/ttm_bo_api.h>
> > > > #include <drm/ttm/ttm_bo_driver.h>
> > > > +struct dma_buf_map;
> > > > +
> > > > #define drm_gem_ttm_of_gem(gem_obj) \
> > > > container_of(gem_obj, struct ttm_buffer_object, base)
> > > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
> > > > indent,
> > > > const struct drm_gem_object *gem);
> > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > > > struct vm_area_struct *vma);
> > > > diff --git a/include/drm/ttm/ttm_bo_api.h
> > > > b/include/drm/ttm/ttm_bo_api.h
> > > > index 37102e45e496..2c59a785374c 100644
> > > > --- a/include/drm/ttm/ttm_bo_api.h
> > > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > > > struct ttm_bo_device;
> > > > +struct dma_buf_map;
> > > > +
> > > > struct drm_mm_node;
> > > > struct ttm_placement;
> > > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > > > unsigned long start_page,
> > > > */
> > > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > > +/**
> > > > + * ttm_bo_vmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > > + *
> > > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > > > + * data in the buffer object. The parameter @map returns the virtual
> > > > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > > > + *
> > > > + * Returns
> > > > + * -ENOMEM: Out of memory.
> > > > + * -EINVAL: Invalid range.
> > > > + */
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > > +
> > > > +/**
> > > > + * ttm_bo_vunmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: Object describing the map to unmap.
> > > > + *
> > > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > > + */
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map);
> > > > +
> > > > /**
> > > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > > > *
> > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > > index fd1aba545fdf..2e8bbecb5091 100644
> > > > --- a/include/linux/dma-buf-map.h
> > > > +++ b/include/linux/dma-buf-map.h
> > > > @@ -45,6 +45,12 @@
> > > > *
> > > > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
> > > > *
> > > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > > + *
> > > > + * .. code-block:: c
> > > > + *
> > > > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > > > + *
> > > > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > > * dma_buf_map_is_null().
> > > > *
> > > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > > dma_buf_map *map, void *vaddr)
> > > > map->is_iomem = false;
> > > > }
> > > > +/**
> > > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > > an address in I/O memory
> > > > + * @map: The dma-buf mapping structure
> > > > + * @vaddr_iomem: An I/O-memory address
> > > > + *
> > > > + * Sets the address and the I/O-memory flag.
> > > > + */
> > > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > > + void __iomem *vaddr_iomem)
> > > > +{
> > > > + map->vaddr_iomem = vaddr_iomem;
> > > > + map->is_iomem = true;
> > > > +}
> > > > +
> > > > /**
> > > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> > > > for equality
> > > > * @lhs: The dma-buf mapping structure
> > > _______________________________________________
> > > dri-devel mailing list
> > > dri-devel@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-19 15:46 ` Daniel Vetter
0 siblings, 0 replies; 195+ messages in thread
From: Daniel Vetter @ 2020-10-19 15:46 UTC (permalink / raw)
To: Christian König
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, bskeggs, linux+etnaviv, xen-devel, alyssa.rosenzweig,
daniel, maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, spice-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825,
Thomas Zimmermann, alexander.deucher, linux-media, l.stach
On Mon, Oct 19, 2020 at 11:45:05AM +0200, Christian König wrote:
> Hi Thomas,
>
> [SNIP]
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > > +{
> > > > + struct ttm_resource *mem = &bo->mem;
> > > > + int ret;
> > > > +
> > > > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + if (mem->bus.is_iomem) {
> > > > + void __iomem *vaddr_iomem;
> > > > + unsigned long size = bo->num_pages << PAGE_SHIFT;
> > > Please use uint64_t here and make sure to cast bo->num_pages before
> > > shifting.
> > I thought the rule of thumb is to use u64 in source code. Yet TTM only
> > uses uint*_t types. Is there anything special about TTM?
>
> My last status is that you can use both, and my personal preference is to use
> the uint*_t types because they are part of a higher-level standard.
Yeah, the only hard rule is that in uapi headers you need to use the __u64
and similar typedefs, to avoid cluttering the namespace for unrelated
stuff in userspace.
In the kernel, C99 types are perfectly fine, and I think they are slowly on
the rise.
-Daniel
>
> > > We have a unit test allocating an 8GB BO, and that should work on a
> > > 32-bit machine as well :)
> > >
> > > > +
> > > > + if (mem->bus.addr)
> > > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > After reading the patch again, I realized that this is the
> > 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
> > options here: ignore this case in _vunmap(), or do an ioremap()
> > unconditionally. Which one is preferable?
>
> ioremap would be very very bad, so we should just do nothing.
>
> Thanks,
> Christian.
>
> >
> > Best regards
> > Thomas
> >
> > > > + else if (mem->placement & TTM_PL_FLAG_WC)
> > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> > > mem->bus.caching enum as replacement.
> > >
> > > > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > > > + else
> > > > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > > > +
> > > > + if (!vaddr_iomem)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > > > +
> > > > + } else {
> > > > + struct ttm_operation_ctx ctx = {
> > > > + .interruptible = false,
> > > > + .no_wait_gpu = false
> > > > + };
> > > > + struct ttm_tt *ttm = bo->ttm;
> > > > + pgprot_t prot;
> > > > + void *vaddr;
> > > > +
> > > > + BUG_ON(!ttm);
> > > I think we can drop this, populate will just crash badly anyway.
> > >
> > > > +
> > > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + /*
> > > > + * We need to use vmap to get the desired page protection
> > > > + * or to make the buffer object look contiguous.
> > > > + */
> > > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> > > The calling convention has changed on drm-misc-next as well, but should
> > > be trivial to adapt.
> > >
> > > Regards,
> > > Christian.
> > >
> > > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > > > + if (!vaddr)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr(map, vaddr);
> > > > + }
> > > > +
> > > > + return 0;
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vmap);
> > > > +
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map)
> > > > +{
> > > > + if (dma_buf_map_is_null(map))
> > > > + return;
> > > > +
> > > > + if (map->is_iomem)
> > > > + iounmap(map->vaddr_iomem);
> > > > + else
> > > > + vunmap(map->vaddr);
> > > > + dma_buf_map_clear(map);
> > > > +
> > > > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > > > +
> > > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > > > bool dst_use_tt)
> > > > {
> > > > diff --git a/include/drm/drm_gem_ttm_helper.h
> > > > b/include/drm/drm_gem_ttm_helper.h
> > > > index 118cef76f84f..7c6d874910b8 100644
> > > > --- a/include/drm/drm_gem_ttm_helper.h
> > > > +++ b/include/drm/drm_gem_ttm_helper.h
> > > > @@ -10,11 +10,17 @@
> > > > #include <drm/ttm/ttm_bo_api.h>
> > > > #include <drm/ttm/ttm_bo_driver.h>
> > > > +struct dma_buf_map;
> > > > +
> > > > #define drm_gem_ttm_of_gem(gem_obj) \
> > > > container_of(gem_obj, struct ttm_buffer_object, base)
> > > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
> > > > indent,
> > > > const struct drm_gem_object *gem);
> > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > > > struct vm_area_struct *vma);
> > > > diff --git a/include/drm/ttm/ttm_bo_api.h
> > > > b/include/drm/ttm/ttm_bo_api.h
> > > > index 37102e45e496..2c59a785374c 100644
> > > > --- a/include/drm/ttm/ttm_bo_api.h
> > > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > > > struct ttm_bo_device;
> > > > +struct dma_buf_map;
> > > > +
> > > > struct drm_mm_node;
> > > > struct ttm_placement;
> > > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > > > unsigned long start_page,
> > > > */
> > > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > > +/**
> > > > + * ttm_bo_vmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > > + *
> > > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > > > + * data in the buffer object. The parameter @map returns the virtual
> > > > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > > > + *
> > > > + * Returns
> > > > + * -ENOMEM: Out of memory.
> > > > + * -EINVAL: Invalid range.
> > > > + */
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > > +
> > > > +/**
> > > > + * ttm_bo_vunmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: Object describing the map to unmap.
> > > > + *
> > > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > > + */
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map);
> > > > +
> > > > /**
> > > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > > > *
> > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > > index fd1aba545fdf..2e8bbecb5091 100644
> > > > --- a/include/linux/dma-buf-map.h
> > > > +++ b/include/linux/dma-buf-map.h
> > > > @@ -45,6 +45,12 @@
> > > > *
> > > > * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > > > *
> > > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > > + *
> > > > + * .. code-block:: c
> > > > + *
> > > > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > > > + *
> > > > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > > * dma_buf_map_is_null().
> > > > *
> > > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > > dma_buf_map *map, void *vaddr)
> > > > map->is_iomem = false;
> > > > }
> > > > +/**
> > > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > > an address in I/O memory
> > > > + * @map: The dma-buf mapping structure
> > > > + * @vaddr_iomem: An I/O-memory address
> > > > + *
> > > > + * Sets the address and the I/O-memory flag.
> > > > + */
> > > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > > + void __iomem *vaddr_iomem)
> > > > +{
> > > > + map->vaddr_iomem = vaddr_iomem;
> > > > + map->is_iomem = true;
> > > > +}
> > > > +
> > > > /**
> > > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> > > > for equality
> > > > * @lhs: The dma-buf mapping structure
> > > _______________________________________________
> > > dri-devel mailing list
> > > dri-devel@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-19 15:46 ` Daniel Vetter
0 siblings, 0 replies; 195+ messages in thread
From: Daniel Vetter @ 2020-10-19 15:46 UTC (permalink / raw)
To: Christian König
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, sam, sumit.semwal,
emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, bskeggs, linux+etnaviv, xen-devel, alyssa.rosenzweig,
daniel, maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, spice-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825,
Thomas Zimmermann, alexander.deucher, linux-media, l.stach
On Mon, Oct 19, 2020 at 11:45:05AM +0200, Christian König wrote:
> Hi Thomas,
>
> [SNIP]
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > > +{
> > > > + struct ttm_resource *mem = &bo->mem;
> > > > + int ret;
> > > > +
> > > > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + if (mem->bus.is_iomem) {
> > > > + void __iomem *vaddr_iomem;
> > > > + unsigned long size = bo->num_pages << PAGE_SHIFT;
> > > Please use uint64_t here and make sure to cast bo->num_pages before
> > > shifting.
> > I thought the rule of thumb is to use u64 in source code. Yet TTM only
> > uses uint*_t types. Is there anything special about TTM?
>
> My last status is that you can use both and my personal preference is to use
> the uint*_t types because they are part of a higher level standard.
Yeah the only hard rule is that in uapi headers you need to use the __u64
and similar typedefs, to avoid cluttering the namespace for unrelated
stuff in userspace.
In the kernel c99 types are perfectly fine, and I think slowly on the
rise.
-Daniel
>
> > > We have an unit tests of allocating a 8GB BO and that should work on a
> > > 32bit machine as well :)
> > >
> > > > +
> > > > + if (mem->bus.addr)
> > > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > I after reading the patch again, I realized that this is the
> > 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
> > options here: ignore this case in _vunmap(), or do an ioremap()
> > unconditionally. Which one is preferable?
>
> ioremap would be very very bad, so we should just do nothing.
>
> Thanks,
> Christian.
>
> >
> > Best regards
> > Thomas
> >
> > > > + else if (mem->placement & TTM_PL_FLAG_WC)
> > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> > > mem->bus.caching enum as replacement.
> > >
> > > > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > > > + else
> > > > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > > > +
> > > > + if (!vaddr_iomem)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > > > +
> > > > + } else {
> > > > + struct ttm_operation_ctx ctx = {
> > > > + .interruptible = false,
> > > > + .no_wait_gpu = false
> > > > + };
> > > > + struct ttm_tt *ttm = bo->ttm;
> > > > + pgprot_t prot;
> > > > + void *vaddr;
> > > > +
> > > > + BUG_ON(!ttm);
> > > I think we can drop this, populate will just crash badly anyway.
> > >
> > > > +
> > > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + /*
> > > > + * We need to use vmap to get the desired page protection
> > > > + * or to make the buffer object look contiguous.
> > > > + */
> > > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> > > The calling convention has changed on drm-misc-next as well, but should
> > > be trivial to adapt.
> > >
> > > Regards,
> > > Christian.
> > >
> > > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > > > + if (!vaddr)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr(map, vaddr);
> > > > + }
> > > > +
> > > > + return 0;
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vmap);
> > > > +
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map)
> > > > +{
> > > > + if (dma_buf_map_is_null(map))
> > > > + return;
> > > > +
> > > > + if (map->is_iomem)
> > > > + iounmap(map->vaddr_iomem);
> > > > + else
> > > > + vunmap(map->vaddr);
> > > > + dma_buf_map_clear(map);
> > > > +
> > > > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > > > +
> > > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > > > bool dst_use_tt)
> > > > {
> > > > diff --git a/include/drm/drm_gem_ttm_helper.h
> > > > b/include/drm/drm_gem_ttm_helper.h
> > > > index 118cef76f84f..7c6d874910b8 100644
> > > > --- a/include/drm/drm_gem_ttm_helper.h
> > > > +++ b/include/drm/drm_gem_ttm_helper.h
> > > > @@ -10,11 +10,17 @@
> > > > #include <drm/ttm/ttm_bo_api.h>
> > > > #include <drm/ttm/ttm_bo_driver.h>
> > > > +struct dma_buf_map;
> > > > +
> > > > #define drm_gem_ttm_of_gem(gem_obj) \
> > > > container_of(gem_obj, struct ttm_buffer_object, base)
> > > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
> > > > indent,
> > > > const struct drm_gem_object *gem);
> > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > > > struct vm_area_struct *vma);
> > > > diff --git a/include/drm/ttm/ttm_bo_api.h
> > > > b/include/drm/ttm/ttm_bo_api.h
> > > > index 37102e45e496..2c59a785374c 100644
> > > > --- a/include/drm/ttm/ttm_bo_api.h
> > > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > > > struct ttm_bo_device;
> > > > +struct dma_buf_map;
> > > > +
> > > > struct drm_mm_node;
> > > > struct ttm_placement;
> > > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > > > unsigned long start_page,
> > > > */
> > > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > > +/**
> > > > + * ttm_bo_vmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > > + *
> > > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > > > + * data in the buffer object. The parameter @map returns the virtual
> > > > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > > > + *
> > > > + * Returns
> > > > + * -ENOMEM: Out of memory.
> > > > + * -EINVAL: Invalid range.
> > > > + */
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > > +
> > > > +/**
> > > > + * ttm_bo_vunmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: Object describing the map to unmap.
> > > > + *
> > > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > > + */
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map);
> > > > +
> > > > /**
> > > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > > > *
> > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > > index fd1aba545fdf..2e8bbecb5091 100644
> > > > --- a/include/linux/dma-buf-map.h
> > > > +++ b/include/linux/dma-buf-map.h
> > > > @@ -45,6 +45,12 @@
> > > > *
> > > > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
> > > > *
> > > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > > + *
> > > > + * .. code-block:: c
> > > > + *
> > > > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > > > + *
> > > > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > > * dma_buf_map_is_null().
> > > > *
> > > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > > dma_buf_map *map, void *vaddr)
> > > > map->is_iomem = false;
> > > > }
> > > > +/**
> > > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > > an address in I/O memory
> > > > + * @map: The dma-buf mapping structure
> > > > + * @vaddr_iomem: An I/O-memory address
> > > > + *
> > > > + * Sets the address and the I/O-memory flag.
> > > > + */
> > > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > > + void __iomem *vaddr_iomem)
> > > > +{
> > > > + map->vaddr_iomem = vaddr_iomem;
> > > > + map->is_iomem = true;
> > > > +}
> > > > +
> > > > /**
> > > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> > > > for equality
> > > > * @lhs: The dma-buf mapping structure
> > > _______________________________________________
> > > dri-devel mailing list
> > > dri-devel@lists.freedesktop.org
> > > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=04%7C01%7Cchristian.koenig%40amd.com%7C07bc68af3c6440b5be8d08d8740e9b32%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637386953433558595%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=RlGCmjzyZERvqfnl4kA1bEHez5bkLf3F9OlKi2ybDAM%3D&reserved=0
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-19 15:46 ` Daniel Vetter
0 siblings, 0 replies; 195+ messages in thread
From: Daniel Vetter @ 2020-10-19 15:46 UTC (permalink / raw)
To: Christian König
Cc: luben.tuikov, airlied, nouveau, dri-devel, chris, melissa.srw,
ray.huang, kraxel, sam, emil.velikov, linux-samsung-soc,
jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price,
linux-rockchip, kgene, bskeggs, linux+etnaviv, xen-devel,
alyssa.rosenzweig, etnaviv, hdegoede, spice-devel,
virtualization, sean, apaneers, linux-arm-kernel, linaro-mm-sig,
amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park,
miaoqinglang, yuq825, Thomas Zimmermann, alexander.deucher,
linux-media
On Mon, Oct 19, 2020 at 11:45:05AM +0200, Christian König wrote:
> Hi Thomas,
>
> [SNIP]
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > > +{
> > > > + struct ttm_resource *mem = &bo->mem;
> > > > + int ret;
> > > > +
> > > > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + if (mem->bus.is_iomem) {
> > > > + void __iomem *vaddr_iomem;
> > > > + unsigned long size = bo->num_pages << PAGE_SHIFT;
> > > Please use uint64_t here and make sure to cast bo->num_pages before
> > > shifting.
> > I thought the rule of thumb is to use u64 in source code. Yet TTM only
> > uses uint*_t types. Is there anything special about TTM?
>
> My last status is that you can use both and my personal preference is to use
> the uint*_t types because they are part of a higher level standard.
Yeah the only hard rule is that in uapi headers you need to use the __u64
and similar typedefs, to avoid cluttering the namespace for unrelated
stuff in userspace.
In the kernel c99 types are perfectly fine, and I think slowly on the
rise.
-Daniel
>
> > > We have an unit tests of allocating a 8GB BO and that should work on a
> > > 32bit machine as well :)
> > >
> > > > +
> > > > + if (mem->bus.addr)
> > > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > I after reading the patch again, I realized that this is the
> > 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
> > options here: ignore this case in _vunmap(), or do an ioremap()
> > unconditionally. Which one is preferable?
>
> ioremap would be very very bad, so we should just do nothing.
>
> Thanks,
> Christian.
>
> >
> > Best regards
> > Thomas
> >
> > > > + else if (mem->placement & TTM_PL_FLAG_WC)
> > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> > > mem->bus.caching enum as replacement.
> > >
> > > > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > > > + else
> > > > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > > > +
> > > > + if (!vaddr_iomem)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > > > +
> > > > + } else {
> > > > + struct ttm_operation_ctx ctx = {
> > > > + .interruptible = false,
> > > > + .no_wait_gpu = false
> > > > + };
> > > > + struct ttm_tt *ttm = bo->ttm;
> > > > + pgprot_t prot;
> > > > + void *vaddr;
> > > > +
> > > > + BUG_ON(!ttm);
> > > I think we can drop this, populate will just crash badly anyway.
> > >
> > > > +
> > > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + /*
> > > > + * We need to use vmap to get the desired page protection
> > > > + * or to make the buffer object look contiguous.
> > > > + */
> > > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> > > The calling convention has changed on drm-misc-next as well, but should
> > > be trivial to adapt.
> > >
> > > Regards,
> > > Christian.
> > >
> > > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > > > + if (!vaddr)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr(map, vaddr);
> > > > + }
> > > > +
> > > > + return 0;
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vmap);
> > > > +
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map)
> > > > +{
> > > > + if (dma_buf_map_is_null(map))
> > > > + return;
> > > > +
> > > > + if (map->is_iomem)
> > > > + iounmap(map->vaddr_iomem);
> > > > + else
> > > > + vunmap(map->vaddr);
> > > > + dma_buf_map_clear(map);
> > > > +
> > > > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > > > +
> > > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > > > bool dst_use_tt)
> > > > {
> > > > diff --git a/include/drm/drm_gem_ttm_helper.h
> > > > b/include/drm/drm_gem_ttm_helper.h
> > > > index 118cef76f84f..7c6d874910b8 100644
> > > > --- a/include/drm/drm_gem_ttm_helper.h
> > > > +++ b/include/drm/drm_gem_ttm_helper.h
> > > > @@ -10,11 +10,17 @@
> > > > #include <drm/ttm/ttm_bo_api.h>
> > > > #include <drm/ttm/ttm_bo_driver.h>
> > > > +struct dma_buf_map;
> > > > +
> > > > #define drm_gem_ttm_of_gem(gem_obj) \
> > > > container_of(gem_obj, struct ttm_buffer_object, base)
> > > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
> > > > indent,
> > > > const struct drm_gem_object *gem);
> > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > > > struct vm_area_struct *vma);
> > > > diff --git a/include/drm/ttm/ttm_bo_api.h
> > > > b/include/drm/ttm/ttm_bo_api.h
> > > > index 37102e45e496..2c59a785374c 100644
> > > > --- a/include/drm/ttm/ttm_bo_api.h
> > > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > > > struct ttm_bo_device;
> > > > +struct dma_buf_map;
> > > > +
> > > > struct drm_mm_node;
> > > > struct ttm_placement;
> > > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > > > unsigned long start_page,
> > > > */
> > > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > > +/**
> > > > + * ttm_bo_vmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > > + *
> > > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > > > + * data in the buffer object. The parameter @map returns the virtual
> > > > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > > > + *
> > > > + * Returns
> > > > + * -ENOMEM: Out of memory.
> > > > + * -EINVAL: Invalid range.
> > > > + */
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > > +
> > > > +/**
> > > > + * ttm_bo_vunmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: Object describing the map to unmap.
> > > > + *
> > > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > > + */
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map);
> > > > +
> > > > /**
> > > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > > > *
> > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > > index fd1aba545fdf..2e8bbecb5091 100644
> > > > --- a/include/linux/dma-buf-map.h
> > > > +++ b/include/linux/dma-buf-map.h
> > > > @@ -45,6 +45,12 @@
> > > > *
> > > > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
> > > > *
> > > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > > + *
> > > > + * .. code-block:: c
> > > > + *
> > > > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > > > + *
> > > > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > > * dma_buf_map_is_null().
> > > > *
> > > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > > dma_buf_map *map, void *vaddr)
> > > > map->is_iomem = false;
> > > > }
> > > > +/**
> > > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > > an address in I/O memory
> > > > + * @map: The dma-buf mapping structure
> > > > + * @vaddr_iomem: An I/O-memory address
> > > > + *
> > > > + * Sets the address and the I/O-memory flag.
> > > > + */
> > > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > > + void __iomem *vaddr_iomem)
> > > > +{
> > > > + map->vaddr_iomem = vaddr_iomem;
> > > > + map->is_iomem = true;
> > > > +}
> > > > +
> > > > /**
> > > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> > > > for equality
> > > > * @lhs: The dma-buf mapping structure
> > > _______________________________________________
> > > dri-devel mailing list
> > > dri-devel@lists.freedesktop.org
> > > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=04%7C01%7Cchristian.koenig%40amd.com%7C07bc68af3c6440b5be8d08d8740e9b32%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637386953433558595%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=RlGCmjzyZERvqfnl4kA1bEHez5bkLf3F9OlKi2ybDAM%3D&reserved=0
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
@ 2020-10-19 15:46 ` Daniel Vetter
0 siblings, 0 replies; 195+ messages in thread
From: Daniel Vetter @ 2020-10-19 15:46 UTC (permalink / raw)
To: Christian König
Cc: luben.tuikov, heiko, airlied, nouveau, linus.walleij, dri-devel,
chris, melissa.srw, eric, ray.huang, kraxel, sam, sumit.semwal,
emil.velikov, robh, linux-samsung-soc, jy0922.shim, lima,
oleksandr_andrushchenko, krzk, steven.price, linux-rockchip,
kgene, bskeggs, linux+etnaviv, xen-devel, alyssa.rosenzweig,
daniel, maarten.lankhorst, etnaviv, mripard, inki.dae, hdegoede,
christian.gmeiner, spice-devel, virtualization, sean, apaneers,
linux-arm-kernel, linaro-mm-sig, amd-gfx, tomeu.vizoso,
sw0312.kim, hjc, kyungmin.park, miaoqinglang, yuq825,
Thomas Zimmermann, alexander.deucher, linux-media, l.stach
On Mon, Oct 19, 2020 at 11:45:05AM +0200, Christian König wrote:
> Hi Thomas,
>
> [SNIP]
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > > +{
> > > > + struct ttm_resource *mem = &bo->mem;
> > > > + int ret;
> > > > +
> > > > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + if (mem->bus.is_iomem) {
> > > > + void __iomem *vaddr_iomem;
> > > > + unsigned long size = bo->num_pages << PAGE_SHIFT;
> > > Please use uint64_t here and make sure to cast bo->num_pages before
> > > shifting.
> > I thought the rule of thumb is to use u64 in source code. Yet TTM only
> > uses uint*_t types. Is there anything special about TTM?
>
> My last status is that you can use both and my personal preference is to use
> the uint*_t types because they are part of a higher level standard.
Yeah the only hard rule is that in uapi headers you need to use the __u64
and similar typedefs, to avoid cluttering the namespace for unrelated
stuff in userspace.
In the kernel c99 types are perfectly fine, and I think slowly on the
rise.
-Daniel
>
> > > We have an unit tests of allocating a 8GB BO and that should work on a
> > > 32bit machine as well :)
> > >
> > > > +
> > > > + if (mem->bus.addr)
> > > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > I after reading the patch again, I realized that this is the
> > 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
> > options here: ignore this case in _vunmap(), or do an ioremap()
> > unconditionally. Which one is preferable?
>
> ioremap would be very very bad, so we should just do nothing.
>
> Thanks,
> Christian.
>
> >
> > Best regards
> > Thomas
> >
> > > > + else if (mem->placement & TTM_PL_FLAG_WC)
> > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> > > mem->bus.caching enum as replacement.
> > >
> > > > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > > > + else
> > > > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > > > +
> > > > + if (!vaddr_iomem)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > > > +
> > > > + } else {
> > > > + struct ttm_operation_ctx ctx = {
> > > > + .interruptible = false,
> > > > + .no_wait_gpu = false
> > > > + };
> > > > + struct ttm_tt *ttm = bo->ttm;
> > > > + pgprot_t prot;
> > > > + void *vaddr;
> > > > +
> > > > + BUG_ON(!ttm);
> > > I think we can drop this, populate will just crash badly anyway.
> > >
> > > > +
> > > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + /*
> > > > + * We need to use vmap to get the desired page protection
> > > > + * or to make the buffer object look contiguous.
> > > > + */
> > > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> > > The calling convention has changed on drm-misc-next as well, but should
> > > be trivial to adapt.
> > >
> > > Regards,
> > > Christian.
> > >
> > > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > > > + if (!vaddr)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr(map, vaddr);
> > > > + }
> > > > +
> > > > + return 0;
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vmap);
> > > > +
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map)
> > > > +{
> > > > + if (dma_buf_map_is_null(map))
> > > > + return;
> > > > +
> > > > + if (map->is_iomem)
> > > > + iounmap(map->vaddr_iomem);
> > > > + else
> > > > + vunmap(map->vaddr);
> > > > + dma_buf_map_clear(map);
> > > > +
> > > > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vunmap);
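[Editor's note: the pairing rule settled above — vunmap must only undo mappings that vmap itself created, and do nothing for the premapped bus.addr case — can be modeled in plain C. All names are hypothetical; this is a sketch of the ownership rule, not the kernel API, and `malloc`/`free` stand in for `ioremap()`/`vmap()` and their counterparts.]

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal stand-in for the mapping state: one pointer, tagged by origin. */
struct map_model {
	void *vaddr;
	bool is_iomem;
	bool premapped; /* set when vmap reused an existing mapping */
};

/* vmap model: reuse a premapped address if the "bus" provides one,
 * otherwise create a fresh mapping (here: malloc). */
int model_vmap(struct map_model *map, void *premapped_addr, bool is_iomem)
{
	if (premapped_addr) {
		map->vaddr = premapped_addr;
		map->premapped = true;
	} else {
		map->vaddr = malloc(64);
		if (!map->vaddr)
			return -1;
		map->premapped = false;
	}
	map->is_iomem = is_iomem;
	return 0;
}

/* vunmap model: tear down only what model_vmap() created. For the
 * premapped case, "do nothing", matching the resolution above. */
void model_vunmap(struct map_model *map)
{
	if (!map->premapped)
		free(map->vaddr); /* stands in for iounmap()/vunmap() */
	map->vaddr = NULL;
}
```

The real ttm_bo_vunmap() reaches the same behavior without an extra flag, because mem->bus.addr is still set at unmap time and identifies the premapped case.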
> > > > +
> > > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > > > bool dst_use_tt)
> > > > {
> > > > diff --git a/include/drm/drm_gem_ttm_helper.h
> > > > b/include/drm/drm_gem_ttm_helper.h
> > > > index 118cef76f84f..7c6d874910b8 100644
> > > > --- a/include/drm/drm_gem_ttm_helper.h
> > > > +++ b/include/drm/drm_gem_ttm_helper.h
> > > > @@ -10,11 +10,17 @@
> > > > #include <drm/ttm/ttm_bo_api.h>
> > > > #include <drm/ttm/ttm_bo_driver.h>
> > > > +struct dma_buf_map;
> > > > +
> > > > #define drm_gem_ttm_of_gem(gem_obj) \
> > > > container_of(gem_obj, struct ttm_buffer_object, base)
> > > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
> > > > indent,
> > > > const struct drm_gem_object *gem);
> > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > > > struct vm_area_struct *vma);
> > > > diff --git a/include/drm/ttm/ttm_bo_api.h
> > > > b/include/drm/ttm/ttm_bo_api.h
> > > > index 37102e45e496..2c59a785374c 100644
> > > > --- a/include/drm/ttm/ttm_bo_api.h
> > > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > > > struct ttm_bo_device;
> > > > +struct dma_buf_map;
> > > > +
> > > > struct drm_mm_node;
> > > > struct ttm_placement;
> > > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > > > unsigned long start_page,
> > > > */
> > > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > > +/**
> > > > + * ttm_bo_vmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > > + *
> > > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > > > + * data in the buffer object. The parameter @map returns the virtual
> > > > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > > > + *
> > > > + * Returns
> > > > + * -ENOMEM: Out of memory.
> > > > + * -EINVAL: Invalid range.
> > > > + */
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > > +
> > > > +/**
> > > > + * ttm_bo_vunmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: Object describing the map to unmap.
> > > > + *
> > > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > > + */
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map);
> > > > +
> > > > /**
> > > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > > > *
> > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > > index fd1aba545fdf..2e8bbecb5091 100644
> > > > --- a/include/linux/dma-buf-map.h
> > > > +++ b/include/linux/dma-buf-map.h
> > > > @@ -45,6 +45,12 @@
> > > > *
> > > > * dma_buf_map_set_vaddr(&map, 0xdeadbeef);
> > > > *
> > > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > > + *
> > > > + * .. code-block:: c
> > > > + *
> > > > * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeef);
> > > > + *
> > > > * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > > * dma_buf_map_is_null().
> > > > *
> > > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > > dma_buf_map *map, void *vaddr)
> > > > map->is_iomem = false;
> > > > }
> > > > +/**
> > > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > > an address in I/O memory
> > > > + * @map: The dma-buf mapping structure
> > > > + * @vaddr_iomem: An I/O-memory address
> > > > + *
> > > > + * Sets the address and the I/O-memory flag.
> > > > + */
> > > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > > + void __iomem *vaddr_iomem)
> > > > +{
> > > > + map->vaddr_iomem = vaddr_iomem;
> > > > + map->is_iomem = true;
> > > > +}
> > > > +
> > > > /**
> > > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> > > > for equality
> > > > * @lhs: The dma-buf mapping structure
> > > _______________________________________________
> > > dri-devel mailing list
> > > dri-devel@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
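[Editor's note: the struct dma_buf_map pattern in the hunks above — a union of a plain pointer and an __iomem pointer, tagged by is_iomem — can be sketched as a userspace model. The kernel's __iomem annotation only matters to sparse, so it is dropped here; names carry a _model suffix to mark them as hypothetical.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Userspace model of struct dma_buf_map from <linux/dma-buf-map.h>. */
struct dma_buf_map_model {
	union {
		void *vaddr_iomem; /* __iomem in the kernel */
		void *vaddr;
	};
	bool is_iomem;
};

static inline void map_set_vaddr(struct dma_buf_map_model *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

static inline void map_set_vaddr_iomem(struct dma_buf_map_model *map,
				       void *vaddr_iomem)
{
	map->vaddr_iomem = vaddr_iomem;
	map->is_iomem = true;
}

static inline bool map_is_null(const struct dma_buf_map_model *map)
{
	return !map->vaddr; /* union: checks either member */
}

static inline void map_clear(struct dma_buf_map_model *map)
{
	map->vaddr = NULL;
	map->is_iomem = false;
}
```

A consumer branches on is_iomem exactly as ttm_bo_vunmap() does when choosing between iounmap() and vunmap(), which is the point of threading the struct through the vmap interfaces instead of a bare pointer.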
end of thread, other threads:[~2020-10-19 15:46 UTC | newest]
Thread overview: 195+ messages
2020-10-15 12:37 [PATCH v4 00/10] Support GEM object mappings from I/O memory Thomas Zimmermann
2020-10-15 12:37 ` [PATCH v4 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function Thomas Zimmermann
2020-10-15 13:57 ` Christian König
2020-10-15 12:37 ` [PATCH v4 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap() Thomas Zimmermann
2020-10-15 13:58 ` Christian König
2020-10-15 12:37 ` [PATCH v4 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap() Thomas Zimmermann
2020-10-15 13:59 ` Christian König
2020-10-15 12:38 ` [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}() Thomas Zimmermann
2020-10-15 14:00 ` Christian König
2020-10-15 12:38 ` [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers Thomas Zimmermann
2020-10-15 14:08 ` Christian König
2020-10-15 16:49 ` Daniel Vetter
2020-10-15 17:52 ` Thomas Zimmermann
2020-10-16 9:41 ` Christian König
2020-10-15 17:56 ` Thomas Zimmermann
2020-10-19 9:08 ` Thomas Zimmermann
2020-10-19 9:45 ` Christian König
2020-10-19 15:46 ` Daniel Vetter
2020-10-15 12:38 ` [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends Thomas Zimmermann
2020-10-15 14:21 ` Christian König
2020-10-15 12:38 ` [PATCH v4 07/10] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map Thomas Zimmermann
2020-10-15 12:38 ` [PATCH v4 08/10] drm/gem: Store client buffer mappings as " Thomas Zimmermann
2020-10-15 12:38 ` [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces Thomas Zimmermann
2020-10-16 10:08 ` Sam Ravnborg
2020-10-16 10:39 ` Thomas Zimmermann
2020-10-16 11:31 ` Sam Ravnborg
2020-10-15 12:38 ` [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory Thomas Zimmermann
2020-10-16 10:58 ` Sam Ravnborg
2020-10-16 11:34 ` Thomas Zimmermann
2020-10-16 12:03 ` Sam Ravnborg
2020-10-16 12:19 ` Thomas Zimmermann
2020-10-16 12:48 ` Sam Ravnborg