From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch, christian.koenig@amd.com, noralf@tronnes.org
Cc: Thomas Zimmermann <tzimmermann@suse.de>, dri-devel@lists.freedesktop.org
Subject: [PATCH 3/8] drm: Add is_iomem return parameter to struct drm_gem_object_funcs.vmap
Date: Wed,  6 Nov 2019 10:31:16 +0100	[thread overview]
Message-ID: <20191106093121.21762-4-tzimmermann@suse.de> (raw)
In-Reply-To: <20191106093121.21762-1-tzimmermann@suse.de>

The vmap operation can return system or I/O memory, which the caller may
have to treat differently. The new parameter is_iomem returns 'true' if
the returned pointer refers to I/O memory, or 'false' otherwise.

In many cases, such as CMA and SHMEM, the returned value is 'false'. For
TTM-based drivers, the correct value is provided by TTM itself. For DMA
buffers that are shared among devices, we assume system memory as well.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  6 +++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |  2 +-
 drivers/gpu/drm/cirrus/cirrus.c             |  2 +-
 drivers/gpu/drm/drm_gem.c                   |  4 ++--
 drivers/gpu/drm/drm_gem_cma_helper.c        |  7 ++++++-
 drivers/gpu/drm/drm_gem_shmem_helper.c      | 12 +++++++++---
 drivers/gpu/drm/drm_gem_vram_helper.c       |  7 +++++--
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |  2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  4 +++-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |  2 +-
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  4 +++-
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  2 +-
 drivers/gpu/drm/qxl/qxl_drv.h               |  2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  4 ++--
 drivers/gpu/drm/radeon/radeon_drv.c         |  2 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  4 +++-
 drivers/gpu/drm/tiny/gm12u320.c             |  2 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |  4 ++--
 drivers/gpu/drm/vc4/vc4_drv.h               |  2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  5 ++++-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  6 +++++-
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |  3 ++-
 include/drm/drm_drv.h                       |  2 +-
 include/drm/drm_gem.h                       |  2 +-
 include/drm/drm_gem_cma_helper.h            |  2 +-
 include/drm/drm_gem_shmem_helper.h          |  2 +-
 26 files changed, 64 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 4917b548b7f2..97b77e7e15dc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -57,13 +57,15 @@ struct sg_table *amdgpu_gem_prime_get_sg_table(struct drm_gem_object *obj)
 /**
  * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
  * @obj: GEM BO
+ * @is_iomem: returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * Sets up an in-kernel virtual mapping of the BO's memory.
  *
  * Returns:
  * The virtual address of the mapping or an error pointer.
  */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
+void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
 	int ret;
@@ -73,6 +75,8 @@ void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (is_iomem)
+		return ttm_kmap_obj_virtual(&bo->dma_buf_vmap, is_iomem);
 	return bo->dma_buf_vmap.virtual;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 5012e6ab58f1..910cf2ef345f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -34,7 +34,7 @@ struct dma_buf *amdgpu_gem_prime_export(struct drm_gem_object *gobj,
 					int flags);
 struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
 					    struct dma_buf *dma_buf);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
+void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
 			  struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/cirrus/cirrus.c b/drivers/gpu/drm/cirrus/cirrus.c
index 248c9f765c45..6518e5c31eb4 100644
--- a/drivers/gpu/drm/cirrus/cirrus.c
+++ b/drivers/gpu/drm/cirrus/cirrus.c
@@ -302,7 +302,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	struct cirrus_device *cirrus = fb->dev->dev_private;
 	void *vmap;
 
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
+	vmap = drm_gem_shmem_vmap(fb->obj[0], NULL);
 	if (!vmap)
 		return -ENOMEM;
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 56f42e0f2584..0acfbd134e04 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1251,9 +1251,9 @@ void *drm_gem_vmap(struct drm_gem_object *obj)
 	void *vaddr;
 
 	if (obj->funcs && obj->funcs->vmap)
-		vaddr = obj->funcs->vmap(obj);
+		vaddr = obj->funcs->vmap(obj, NULL);
 	else if (obj->dev->driver->gem_prime_vmap)
-		vaddr = obj->dev->driver->gem_prime_vmap(obj);
+		vaddr = obj->dev->driver->gem_prime_vmap(obj, NULL);
 	else
 		vaddr = ERR_PTR(-EOPNOTSUPP);
 
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 12e98fb28229..b14e88337529 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,6 +537,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
  * @obj: GEM object
+ * @is_iomem: returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * This function maps a buffer exported via DRM PRIME into the kernel's
  * virtual address space. Since the CMA buffers are already mapped into the
@@ -547,10 +549,13 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * Returns:
  * The kernel virtual address of the CMA GEM object's backing store.
  */
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	return cma_obj->vaddr;
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3bc69b1ffa7d..a8a8e1b13a30 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -242,7 +242,8 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+				       bool *is_iomem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret;
@@ -266,6 +267,9 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 		goto err_put_pages;
 	}
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	return shmem->vaddr;
 
 err_put_pages:
@@ -279,6 +283,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
+ * @is_iomem: returns true if the mapped memory is I/O memory, or false
+ *            otherwise; can be NULL
  *
  * This function makes sure that a virtual address exists for the buffer backing
  * the shmem GEM object.
@@ -286,7 +292,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+void *drm_gem_shmem_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	void *vaddr;
@@ -295,7 +301,7 @@ void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
 	ret = mutex_lock_interruptible(&shmem->vmap_lock);
 	if (ret)
 		return ERR_PTR(ret);
-	vaddr = drm_gem_shmem_vmap_locked(shmem);
+	vaddr = drm_gem_shmem_vmap_locked(shmem, is_iomem);
 	mutex_unlock(&shmem->vmap_lock);
 
 	return vaddr;
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 05f63f28814d..77658f835774 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -818,17 +818,20 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
  * drm_gem_vram_object_vmap() - \
 	Implements &struct drm_gem_object_funcs.vmap
  * @gem:	The GEM object to map
+ * @is_iomem:	returns true if the mapped memory is I/O memory, or false
+ *              otherwise; can be NULL
  *
  * Returns:
  * The buffers virtual address on success, or
  * NULL otherwise.
  */
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem,
+				      bool *is_iomem)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 	void *base;
 
-	base = drm_gem_vram_vmap(gbo, NULL);
+	base = drm_gem_vram_vmap(gbo, is_iomem);
 	if (IS_ERR(base))
 		return NULL;
 	return base;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 32cfa5a48d42..558b79366bf4 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 vm_fault_t etnaviv_gem_fault(struct vm_fault *vmf);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index f24dd21c2363..c8b09ed7f936 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,8 +22,10 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(etnaviv_obj->pages, npages);
 }
 
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
+	if (is_iomem)
+		*is_iomem = false;
 	return etnaviv_gem_vmap(obj);
 }
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index 978e07591990..46ff11a39f23 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -35,7 +35,7 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
 extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
 extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
 	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
+extern void *nouveau_gem_prime_vmap(struct drm_gem_object *, bool *);
 extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
 
 #endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index bae6a3eccee0..b61376c91d31 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,7 +35,7 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(nvbo->bo.ttm->pages, npages);
 }
 
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
+void *nouveau_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 	int ret;
@@ -45,6 +45,8 @@ void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (is_iomem)
+		return ttm_kmap_obj_virtual(&nvbo->dma_buf_vmap, is_iomem);
 	return nvbo->dma_buf_vmap.virtual;
 }
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index 83c57d325ca8..f833d8376d44 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -94,7 +94,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	if (ret)
 		goto err_put_bo;
 
-	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
+	perfcnt->buf = drm_gem_shmem_vmap(&bo->base, NULL);
 	if (IS_ERR(perfcnt->buf)) {
 		ret = PTR_ERR(perfcnt->buf);
 		goto err_put_bo;
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index e749c0d0e819..3f80b2215f25 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -452,7 +452,7 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
+void *qxl_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
 				struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index e67ebbdeb7f2..9b2d4015e0d6 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,13 +54,13 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	return ERR_PTR(-ENOSYS);
 }
 
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+void *qxl_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 	void *ptr;
 	int ret;
 
-	ret = qxl_bo_kmap(bo, &ptr, NULL);
+	ret = qxl_bo_kmap(bo, &ptr, is_iomem);
 	if (ret < 0)
 		return ERR_PTR(ret);
 
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index 888e0f384c61..7f9cff9cb572 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -153,7 +153,7 @@ struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 							struct sg_table *sg);
 int radeon_gem_prime_pin(struct drm_gem_object *obj);
 void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
+void *radeon_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 /* atpx handler */
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b906e8fbd5f3..2019b54277e4 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,7 +39,7 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(bo->tbo.ttm->pages, npages);
 }
 
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
+void *radeon_gem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct radeon_bo *bo = gem_to_radeon_bo(obj);
 	int ret;
@@ -49,6 +49,8 @@ void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (is_iomem)
+		return ttm_kmap_obj_virtual(&bo->dma_buf_vmap, is_iomem);
 	return bo->dma_buf_vmap.virtual;
 }
 
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index 94fb1f593564..4c4b1904e046 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -278,7 +278,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 	y1 = gm12u320->fb_update.rect.y1;
 	y2 = gm12u320->fb_update.rect.y2;
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
+	vaddr = drm_gem_shmem_vmap(fb->obj[0], NULL);
 	if (IS_ERR(vaddr)) {
 		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
 		goto put_fb;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 72d30d90b856..c03462cef01c 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -767,7 +767,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	return drm_gem_cma_prime_mmap(obj, vma);
 }
 
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+void *vc4_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct vc4_bo *bo = to_vc4_bo(obj);
 
@@ -776,7 +776,7 @@ void *vc4_prime_vmap(struct drm_gem_object *obj)
 		return ERR_PTR(-EINVAL);
 	}
 
-	return drm_gem_cma_prime_vmap(obj);
+	return drm_gem_cma_prime_vmap(obj, is_iomem);
 }
 
 struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index 6627b20c99e9..c84a7eaf1f3e 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -733,7 +733,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
 						 struct dma_buf_attachment *attach,
 						 struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+void *vc4_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 int vc4_bo_cache_init(struct drm_device *dev);
 void vc4_bo_cache_destroy(struct drm_device *dev);
 int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index 5bd60ded3d81..b991cfce3d91 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -379,7 +379,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
 	return &obj->base;
 }
 
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static void *vgem_prime_vmap(struct drm_gem_object *obj, bool *is_iomem)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 	long n_pages = obj->size >> PAGE_SHIFT;
@@ -389,6 +389,9 @@ static void *vgem_prime_vmap(struct drm_gem_object *obj)
 	if (IS_ERR(pages))
 		return NULL;
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
 }
 
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f0b85e094111..b3c3ba661f38 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -272,13 +272,17 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	return gem_mmap_obj(xen_obj, vma);
 }
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				   bool *is_iomem)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
 
 	if (!xen_obj->pages)
 		return NULL;
 
+	if (is_iomem)
+		*is_iomem = false;
+
 	/* Please see comment in gem_mmap_obj on mapping and attributes. */
 	return vmap(xen_obj->pages, xen_obj->num_pages,
 		    VM_MAP, PAGE_KERNEL);
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..adcf3d809c75 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -34,7 +34,8 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
 int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				   bool *is_iomem);
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
 				    void *vaddr);
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index cf13470810a5..662c5d5dfd05 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -631,7 +631,7 @@ struct drm_driver {
 	 * Deprecated vmap hook for GEM drivers. Please use
 	 * &drm_gem_object_funcs.vmap instead.
 	 */
-	void *(*gem_prime_vmap)(struct drm_gem_object *obj);
+	void *(*gem_prime_vmap)(struct drm_gem_object *obj, bool *is_iomem);
 
 	/**
 	 * @gem_prime_vunmap:
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index e71f75a2ab57..edc73b686c60 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -138,7 +138,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void *(*vmap)(struct drm_gem_object *obj);
+	void *(*vmap)(struct drm_gem_object *obj, bool *is_iomem);
 
 	/**
 	 * @vunmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 947ac95eb24a..69fdd18dc7b2 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 struct drm_gem_object *
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 6748379a0b44..ddb54aa1ac1a 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -95,7 +95,7 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_object *obj);
 void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
+void *drm_gem_shmem_vmap(struct drm_gem_object *obj, bool *is_iomem);
 void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
-- 
2.23.0

Thread overview: 10+ messages
2019-11-06  9:31 [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 1/8] drm/vram-helper: Tell caller if vmap() returned I/O memory Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 2/8] drm/qxl: Tell caller if kmap() " Thomas Zimmermann
2019-11-06  9:31 ` Thomas Zimmermann [this message]
2019-11-06  9:31 ` [PATCH 4/8] drm/gem: Return I/O-memory flag from drm_gem_vram() Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 5/8] drm/client: Return I/O memory flag from drm_client_buffer_vmap() Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 6/8] fbdev: Export default read and write operations as fb_cfb_{read, write}() Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 7/8] drm/fb-helper: Select between fb_{sys, cfb}_read() and _write() Thomas Zimmermann
2019-11-06  9:31 ` [PATCH 8/8] drm/fb-helper: Handle I/O memory correctly when flushing shadow fb Thomas Zimmermann
2019-11-06 10:05 ` [RFC][PATCH 0/8] Support I/O memory in generic fbdev emulation Daniel Vetter
