linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 1/3] drm/gem: don't force writecombine mmap'ing
@ 2019-07-16 21:37 Rob Clark
  2019-07-16 21:37 ` [PATCH v3 2/3] drm: plumb attaching dev thru to prime_pin/unpin Rob Clark
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Rob Clark @ 2019-07-16 21:37 UTC (permalink / raw)
  To: dri-devel
  Cc: Chris Wilson, Intel Graphics Development, Rob Clark,
	Maarten Lankhorst, Maxime Ripard, Sean Paul, David Airlie,
	Daniel Vetter, linux-kernel

From: Rob Clark <robdclark@chromium.org>

The driver should be in control of this.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
It is possible that this was masking bugs (i.e. not setting an appropriate
pgprot) in drivers.  I don't have a particularly good idea for tracking
those down (since I don't have the hw for most drivers).  Unless someone
has a better idea, maybe land this and let driver maintainers fix any
potential fallout in their drivers?

This is necessary for the last patch to fix VGEM brokenness on arm.

v3: rebased on drm-tip

 drivers/gpu/drm/drm_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index e6c12c6ec728..84689ccae885 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1109,7 +1109,7 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
 
 	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
 	vma->vm_private_data = obj;
-	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 
 	/* Take a ref for this mapping of the object, so that the fault
-- 
2.21.0



* [PATCH v3 2/3] drm: plumb attaching dev thru to prime_pin/unpin
  2019-07-16 21:37 [PATCH v3 1/3] drm/gem: don't force writecombine mmap'ing Rob Clark
@ 2019-07-16 21:37 ` Rob Clark
  2019-07-17  6:59   ` Koenig, Christian
  2019-07-16 21:37 ` [PATCH v3 3/3] drm/vgem: use normal cached mmap'ings Rob Clark
  2019-07-16 23:21 ` [Intel-gfx] [PATCH v3 1/3] drm/gem: don't force writecombine mmap'ing Eric Anholt
  2 siblings, 1 reply; 8+ messages in thread
From: Rob Clark @ 2019-07-16 21:37 UTC (permalink / raw)
  To: dri-devel
  Cc: Chris Wilson, Intel Graphics Development, Rob Clark,
	Maarten Lankhorst, Maxime Ripard, Sean Paul, David Airlie,
	Daniel Vetter, Lucas Stach, Russell King, Christian Gmeiner,
	Rob Clark, Ben Skeggs, Dave Airlie, Gerd Hoffmann, Alex Deucher,
	Christian König, David (ChunMing) Zhou, Emil Velikov,
	Eric Anholt, Thomas Zimmermann, Imre Deak, Deepak Sharma,
	Eric Biggers, linux-kernel, etnaviv, linux-arm-msm, freedreno,
	nouveau, virtualization, spice-devel, amd-gfx

From: Rob Clark <robdclark@chromium.org>

Needed in the following patch for cache operations.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
v3: rebased on drm-tip

 drivers/gpu/drm/drm_gem.c                   | 8 ++++----
 drivers/gpu/drm/drm_internal.h              | 4 ++--
 drivers/gpu/drm/drm_prime.c                 | 4 ++--
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 4 ++--
 drivers/gpu/drm/msm/msm_drv.h               | 4 ++--
 drivers/gpu/drm/msm/msm_gem_prime.c         | 4 ++--
 drivers/gpu/drm/nouveau/nouveau_gem.h       | 4 ++--
 drivers/gpu/drm/nouveau/nouveau_prime.c     | 4 ++--
 drivers/gpu/drm/qxl/qxl_prime.c             | 4 ++--
 drivers/gpu/drm/radeon/radeon_prime.c       | 4 ++--
 drivers/gpu/drm/vgem/vgem_drv.c             | 4 ++--
 include/drm/drm_drv.h                       | 5 ++---
 12 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 84689ccae885..af2549c45027 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1215,22 +1215,22 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 		obj->dev->driver->gem_print_info(p, indent, obj);
 }
 
-int drm_gem_pin(struct drm_gem_object *obj)
+int drm_gem_pin(struct drm_gem_object *obj, struct device *dev)
 {
 	if (obj->funcs && obj->funcs->pin)
 		return obj->funcs->pin(obj);
 	else if (obj->dev->driver->gem_prime_pin)
-		return obj->dev->driver->gem_prime_pin(obj);
+		return obj->dev->driver->gem_prime_pin(obj, dev);
 	else
 		return 0;
 }
 
-void drm_gem_unpin(struct drm_gem_object *obj)
+void drm_gem_unpin(struct drm_gem_object *obj, struct device *dev)
 {
 	if (obj->funcs && obj->funcs->unpin)
 		obj->funcs->unpin(obj);
 	else if (obj->dev->driver->gem_prime_unpin)
-		obj->dev->driver->gem_prime_unpin(obj);
+		obj->dev->driver->gem_prime_unpin(obj, dev);
 }
 
 void *drm_gem_vmap(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index 51a2055c8f18..e64090373e3a 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -133,8 +133,8 @@ void drm_gem_release(struct drm_device *dev, struct drm_file *file_private);
 void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 			const struct drm_gem_object *obj);
 
-int drm_gem_pin(struct drm_gem_object *obj);
-void drm_gem_unpin(struct drm_gem_object *obj);
+int drm_gem_pin(struct drm_gem_object *obj, struct device *dev);
+void drm_gem_unpin(struct drm_gem_object *obj, struct device *dev);
 void *drm_gem_vmap(struct drm_gem_object *obj);
 void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 189d980402ad..126860432ff9 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -575,7 +575,7 @@ int drm_gem_map_attach(struct dma_buf *dma_buf,
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	return drm_gem_pin(obj);
+	return drm_gem_pin(obj, attach->dev);
 }
 EXPORT_SYMBOL(drm_gem_map_attach);
 
@@ -593,7 +593,7 @@ void drm_gem_map_detach(struct dma_buf *dma_buf,
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	drm_gem_unpin(obj);
+	drm_gem_unpin(obj, attach->dev);
 }
 EXPORT_SYMBOL(drm_gem_map_detach);
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a05292e8ed6f..67e69a5f00f2 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -43,7 +43,7 @@ int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 	return etnaviv_obj->ops->mmap(etnaviv_obj, vma);
 }
 
-int etnaviv_gem_prime_pin(struct drm_gem_object *obj)
+int etnaviv_gem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 {
 	if (!obj->import_attach) {
 		struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
@@ -55,7 +55,7 @@ int etnaviv_gem_prime_pin(struct drm_gem_object *obj)
 	return 0;
 }
 
-void etnaviv_gem_prime_unpin(struct drm_gem_object *obj)
+void etnaviv_gem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
 {
 	if (!obj->import_attach) {
 		struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index ee7b512dc158..0eea68618b68 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -288,8 +288,8 @@ void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg);
-int msm_gem_prime_pin(struct drm_gem_object *obj);
-void msm_gem_prime_unpin(struct drm_gem_object *obj);
+int msm_gem_prime_pin(struct drm_gem_object *obj, struct device *dev);
+void msm_gem_prime_unpin(struct drm_gem_object *obj, struct device *dev);
 void *msm_gem_get_vaddr(struct drm_gem_object *obj);
 void *msm_gem_get_vaddr_active(struct drm_gem_object *obj);
 void msm_gem_put_vaddr(struct drm_gem_object *obj);
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index 5d64e0671f7a..cc07bf94e206 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -47,14 +47,14 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 	return msm_gem_import(dev, attach->dmabuf, sg);
 }
 
-int msm_gem_prime_pin(struct drm_gem_object *obj)
+int msm_gem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 {
 	if (!obj->import_attach)
 		msm_gem_get_pages(obj);
 	return 0;
 }
 
-void msm_gem_prime_unpin(struct drm_gem_object *obj)
+void msm_gem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
 {
 	if (!obj->import_attach)
 		msm_gem_put_pages(obj);
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index fe39998f65cc..91dcf89138f1 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -32,9 +32,9 @@ extern int nouveau_gem_ioctl_cpu_fini(struct drm_device *, void *,
 extern int nouveau_gem_ioctl_info(struct drm_device *, void *,
 				  struct drm_file *);
 
-extern int nouveau_gem_prime_pin(struct drm_gem_object *);
+extern int nouveau_gem_prime_pin(struct drm_gem_object *, struct device *);
 struct reservation_object *nouveau_gem_prime_res_obj(struct drm_gem_object *);
-extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
+extern void nouveau_gem_prime_unpin(struct drm_gem_object *, struct device *);
 extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
 extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
 	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index 1fefc93af1d7..dec2d5e37b34 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -88,7 +88,7 @@ struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
 	return &nvbo->gem;
 }
 
-int nouveau_gem_prime_pin(struct drm_gem_object *obj)
+int nouveau_gem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 {
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 	int ret;
@@ -101,7 +101,7 @@ int nouveau_gem_prime_pin(struct drm_gem_object *obj)
 	return 0;
 }
 
-void nouveau_gem_prime_unpin(struct drm_gem_object *obj)
+void nouveau_gem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
 {
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..21e9b44eb2e4 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -28,14 +28,14 @@
 /* Empty Implementations as there should not be any other driver for a virtual
  * device that might share buffers with qxl */
 
-int qxl_gem_prime_pin(struct drm_gem_object *obj)
+int qxl_gem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 
 	return qxl_bo_pin(bo);
 }
 
-void qxl_gem_prime_unpin(struct drm_gem_object *obj)
+void qxl_gem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index deaffce50a2e..f3442bd860f6 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -83,7 +83,7 @@ struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 	return &bo->gem_base;
 }
 
-int radeon_gem_prime_pin(struct drm_gem_object *obj)
+int radeon_gem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 {
 	struct radeon_bo *bo = gem_to_radeon_bo(obj);
 	int ret = 0;
@@ -101,7 +101,7 @@ int radeon_gem_prime_pin(struct drm_gem_object *obj)
 	return ret;
 }
 
-void radeon_gem_prime_unpin(struct drm_gem_object *obj)
+void radeon_gem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
 {
 	struct radeon_bo *bo = gem_to_radeon_bo(obj);
 	int ret = 0;
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index 76d95b5e289c..e7d12e93b1f0 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -307,7 +307,7 @@ static void vgem_unpin_pages(struct drm_vgem_gem_object *bo)
 	mutex_unlock(&bo->pages_lock);
 }
 
-static int vgem_prime_pin(struct drm_gem_object *obj)
+static int vgem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 	long n_pages = obj->size >> PAGE_SHIFT;
@@ -325,7 +325,7 @@ static int vgem_prime_pin(struct drm_gem_object *obj)
 	return 0;
 }
 
-static void vgem_prime_unpin(struct drm_gem_object *obj)
+static void vgem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index b33f2cee2099..2b3d79bde6e1 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -592,20 +592,19 @@ struct drm_driver {
 	 */
 	struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
 				struct dma_buf *dma_buf);
-
 	/**
 	 * @gem_prime_pin:
 	 *
 	 * Deprecated hook in favour of &drm_gem_object_funcs.pin.
 	 */
-	int (*gem_prime_pin)(struct drm_gem_object *obj);
+	int (*gem_prime_pin)(struct drm_gem_object *obj, struct device *dev);
 
 	/**
 	 * @gem_prime_unpin:
 	 *
 	 * Deprecated hook in favour of &drm_gem_object_funcs.unpin.
 	 */
-	void (*gem_prime_unpin)(struct drm_gem_object *obj);
+	void (*gem_prime_unpin)(struct drm_gem_object *obj, struct device *dev);
 
 
 	/**
-- 
2.21.0



* [PATCH v3 3/3] drm/vgem: use normal cached mmap'ings
  2019-07-16 21:37 [PATCH v3 1/3] drm/gem: don't force writecombine mmap'ing Rob Clark
  2019-07-16 21:37 ` [PATCH v3 2/3] drm: plumb attaching dev thru to prime_pin/unpin Rob Clark
@ 2019-07-16 21:37 ` Rob Clark
  2019-07-16 23:39   ` Eric Anholt
  2019-07-16 23:21 ` [Intel-gfx] [PATCH v3 1/3] drm/gem: don't force writecombine mmap'ing Eric Anholt
  2 siblings, 1 reply; 8+ messages in thread
From: Rob Clark @ 2019-07-16 21:37 UTC (permalink / raw)
  To: dri-devel
  Cc: Chris Wilson, Intel Graphics Development, Rob Clark,
	David Airlie, Daniel Vetter, Emil Velikov, Eric Anholt,
	Eric Biggers, Deepak Sharma, linux-kernel

From: Rob Clark <robdclark@chromium.org>

Since there is no real device associated with VGEM, it is impossible to
end up with appropriate dev->dma_ops, meaning that we have no way to
invalidate the shmem pages allocated by VGEM.  So, at least on platforms
without drm_clflush_pages(), we end up with corruption when cache lines
from previous usage of VGEM bo pages get evicted to memory.

The only sane option is to use cached mappings.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
v3: rebased on drm-tip

 drivers/gpu/drm/vgem/vgem_drv.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index e7d12e93b1f0..84262e2bd7f7 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -259,9 +259,6 @@ static int vgem_mmap(struct file *filp, struct vm_area_struct *vma)
 	if (ret)
 		return ret;
 
-	/* Keep the WC mmaping set by drm_gem_mmap() but our pages
-	 * are ordinary and not special.
-	 */
 	vma->vm_flags = flags | VM_DONTEXPAND | VM_DONTDUMP;
 	return 0;
 }
@@ -310,17 +307,17 @@ static void vgem_unpin_pages(struct drm_vgem_gem_object *bo)
 static int vgem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-	long n_pages = obj->size >> PAGE_SHIFT;
+	long i, n_pages = obj->size >> PAGE_SHIFT;
 	struct page **pages;
 
 	pages = vgem_pin_pages(bo);
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	/* Flush the object from the CPU cache so that importers can rely
-	 * on coherent indirect access via the exported dma-address.
-	 */
-	drm_clflush_pages(pages, n_pages);
+	for (i = 0; i < n_pages; i++) {
+		dma_sync_single_for_device(dev, page_to_phys(pages[i]),
+					   PAGE_SIZE, DMA_BIDIRECTIONAL);
+	}
 
 	return 0;
 }
@@ -328,6 +325,13 @@ static int vgem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 static void vgem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
+	long i, n_pages = obj->size >> PAGE_SHIFT;
+	struct page **pages = bo->pages;
+
+	for (i = 0; i < n_pages; i++) {
+		dma_sync_single_for_cpu(dev, page_to_phys(pages[i]),
+					PAGE_SIZE, DMA_BIDIRECTIONAL);
+	}
 
 	vgem_unpin_pages(bo);
 }
@@ -382,7 +386,7 @@ static void *vgem_prime_vmap(struct drm_gem_object *obj)
 	if (IS_ERR(pages))
 		return NULL;
 
-	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	return vmap(pages, n_pages, 0, PAGE_KERNEL);
 }
 
 static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
@@ -411,7 +415,7 @@ static int vgem_prime_mmap(struct drm_gem_object *obj,
 	fput(vma->vm_file);
 	vma->vm_file = get_file(obj->filp);
 	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
-	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 
 	return 0;
 }
-- 
2.21.0



* Re: [Intel-gfx] [PATCH v3 1/3] drm/gem: don't force writecombine mmap'ing
  2019-07-16 21:37 [PATCH v3 1/3] drm/gem: don't force writecombine mmap'ing Rob Clark
  2019-07-16 21:37 ` [PATCH v3 2/3] drm: plumb attaching dev thru to prime_pin/unpin Rob Clark
  2019-07-16 21:37 ` [PATCH v3 3/3] drm/vgem: use normal cached mmap'ings Rob Clark
@ 2019-07-16 23:21 ` Eric Anholt
  2 siblings, 0 replies; 8+ messages in thread
From: Eric Anholt @ 2019-07-16 23:21 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: Rob Clark, Maxime Ripard, Intel Graphics Development,
	linux-kernel, David Airlie


Rob Clark <robdclark@gmail.com> writes:

> From: Rob Clark <robdclark@chromium.org>
>
> The driver should be in control of this.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> It is possible that this was masking bugs (ie. not setting appropriate
> pgprot) in drivers.  I don't have a particularly good idea for tracking
> those down (since I don't have the hw for most drivers).  Unless someone
> has a better idea, maybe land this and let driver maintainers fix any
> potential fallout in their drivers?
>
> This is necessary for the last patch to fix VGEM brokenness on arm.

This will break at least v3d and panfrost, and it looks like cirrus as
well, since you're now promoting the mapping to cached by default and
drm_gem_shmem_helper now produces cached mappings.  That's all I could
find that would break, but don't trust me on that.




* Re: [PATCH v3 3/3] drm/vgem: use normal cached mmap'ings
  2019-07-16 21:37 ` [PATCH v3 3/3] drm/vgem: use normal cached mmap'ings Rob Clark
@ 2019-07-16 23:39   ` Eric Anholt
  2019-07-17  0:13     ` Rob Clark
  0 siblings, 1 reply; 8+ messages in thread
From: Eric Anholt @ 2019-07-16 23:39 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: Chris Wilson, Intel Graphics Development, Rob Clark,
	David Airlie, Daniel Vetter, Emil Velikov, Eric Biggers,
	Deepak Sharma, linux-kernel


Rob Clark <robdclark@gmail.com> writes:

> From: Rob Clark <robdclark@chromium.org>
>
> Since there is no real device associated with VGEM, it is impossible to
> end up with appropriate dev->dma_ops, meaning that we have no way to
> invalidate the shmem pages allocated by VGEM.  So, at least on platforms
> without drm_clflush_pages(), we end up with corruption when cache lines
> from previous usage of VGEM bo pages get evicted to memory.
>
> The only sane option is to use cached mappings.

This may be an improvement, but...

pin/unpin is only on attaching/closing the dma-buf, right?  So, great,
you flushed the cached map once after exporting the vgem dma-buf to the
actual GPU device, but from then on you still have no interface for
getting coherent access through VGEM's mapping again, which still
exists.

I feel like this is papering over something that's really just broken,
and we should stop providing VGEM just because someone wants to write
dma-buf test code without driver-specific BO alloc ioctl code.



* Re: [PATCH v3 3/3] drm/vgem: use normal cached mmap'ings
  2019-07-16 23:39   ` Eric Anholt
@ 2019-07-17  0:13     ` Rob Clark
  2019-07-19  9:13       ` Daniel Vetter
  0 siblings, 1 reply; 8+ messages in thread
From: Rob Clark @ 2019-07-17  0:13 UTC (permalink / raw)
  To: Eric Anholt
  Cc: dri-devel, Chris Wilson, Intel Graphics Development, Rob Clark,
	David Airlie, Daniel Vetter, Emil Velikov, Eric Biggers,
	Deepak Sharma, Linux Kernel Mailing List

On Tue, Jul 16, 2019 at 4:39 PM Eric Anholt <eric@anholt.net> wrote:
>
> Rob Clark <robdclark@gmail.com> writes:
>
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Since there is no real device associated with VGEM, it is impossible to
> > end up with appropriate dev->dma_ops, meaning that we have no way to
> > invalidate the shmem pages allocated by VGEM.  So, at least on platforms
> > without drm_clflush_pages(), we end up with corruption when cache lines
> > from previous usage of VGEM bo pages get evicted to memory.
> >
> > The only sane option is to use cached mappings.
>
> This may be an improvement, but...
>
> pin/unpin is only on attaching/closing the dma-buf, right?  So, great,
> you flushed the cached map once after exporting the vgem dma-buf to the
> actual GPU device, but from then on you still have no interface for
> getting coherent access through VGEM's mapping again, which still
> exists.

In *theory* one would detach before doing further CPU access to the
buffer, and then re-attach when passing it back to the GPU.

Of course that isn't how actual drivers do things.  But maybe it is
enough for vgem to serve its purpose (ie. test code).

> I feel like this is papering over something that's really just broken,
> and we should stop providing VGEM just because someone wants to write
> dma-buf test code without driver-specific BO alloc ioctl code.

yup, it is vgem that is fundamentally broken (or maybe more
specifically doesn't fit in w/ dma-mapping's view of how to do cache
maint), and I'm just papering over it because people and CI systems
want to be able to use it to do some dma-buf tests ;-)

I'm kinda wondering, at least for arm/dt based systems, if there is a
way (other than in early boot) that we can inject a vgem device node
into the dtb.  That isn't a thing drivers should normally do, but (if
possible) since vgem is really just test infrastructure, it could be a
way to make dma-mapping happily think vgem is a real device.

BR,
-R


* Re: [PATCH v3 2/3] drm: plumb attaching dev thru to prime_pin/unpin
  2019-07-16 21:37 ` [PATCH v3 2/3] drm: plumb attaching dev thru to prime_pin/unpin Rob Clark
@ 2019-07-17  6:59   ` Koenig, Christian
  0 siblings, 0 replies; 8+ messages in thread
From: Koenig, Christian @ 2019-07-17  6:59 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: Chris Wilson, Intel Graphics Development, Rob Clark,
	Maarten Lankhorst, Maxime Ripard, Sean Paul, David Airlie,
	Daniel Vetter, Lucas Stach, Russell King, Christian Gmeiner,
	Ben Skeggs, Dave Airlie, Gerd Hoffmann, Deucher, Alexander, Zhou,
	David(ChunMing),
	Emil Velikov, Eric Anholt, Thomas Zimmermann, Imre Deak, Sharma,
	Deepak, Eric Biggers, linux-kernel, etnaviv, linux-arm-msm,
	freedreno, nouveau, virtualization, spice-devel, amd-gfx

Am 16.07.19 um 23:37 schrieb Rob Clark:
> From: Rob Clark <robdclark@chromium.org>
>
> Needed in the following patch for cache operations.

Well have you seen that those callbacks are deprecated?
>   	 * Deprecated hook in favour of &drm_gem_object_funcs.pin.

>   	 * Deprecated hook in favour of &drm_gem_object_funcs.unpin.
>

I would rather say that if you want to extend something, it would be
better to switch over to the per-GEM-object functions first.

Regards,
Christian.

>   {
>   	struct radeon_bo *bo = gem_to_radeon_bo(obj);
>   	int ret = 0;
> @@ -101,7 +101,7 @@ int radeon_gem_prime_pin(struct drm_gem_object *obj)
>   	return ret;
>   }
>   
> -void radeon_gem_prime_unpin(struct drm_gem_object *obj)
> +void radeon_gem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
>   {
>   	struct radeon_bo *bo = gem_to_radeon_bo(obj);
>   	int ret = 0;
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index 76d95b5e289c..e7d12e93b1f0 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -307,7 +307,7 @@ static void vgem_unpin_pages(struct drm_vgem_gem_object *bo)
>   	mutex_unlock(&bo->pages_lock);
>   }
>   
> -static int vgem_prime_pin(struct drm_gem_object *obj)
> +static int vgem_prime_pin(struct drm_gem_object *obj, struct device *dev)
>   {
>   	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>   	long n_pages = obj->size >> PAGE_SHIFT;
> @@ -325,7 +325,7 @@ static int vgem_prime_pin(struct drm_gem_object *obj)
>   	return 0;
>   }
>   
> -static void vgem_prime_unpin(struct drm_gem_object *obj)
> +static void vgem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
>   {
>   	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>   
> diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
> index b33f2cee2099..2b3d79bde6e1 100644
> --- a/include/drm/drm_drv.h
> +++ b/include/drm/drm_drv.h
> @@ -592,20 +592,19 @@ struct drm_driver {
>   	 */
>   	struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
>   				struct dma_buf *dma_buf);
> -
>   	/**
>   	 * @gem_prime_pin:
>   	 *
>   	 * Deprecated hook in favour of &drm_gem_object_funcs.pin.
>   	 */
> -	int (*gem_prime_pin)(struct drm_gem_object *obj);
> +	int (*gem_prime_pin)(struct drm_gem_object *obj, struct device *dev);
>   
>   	/**
>   	 * @gem_prime_unpin:
>   	 *
>   	 * Deprecated hook in favour of &drm_gem_object_funcs.unpin.
>   	 */
> -	void (*gem_prime_unpin)(struct drm_gem_object *obj);
> +	void (*gem_prime_unpin)(struct drm_gem_object *obj, struct device *dev);
>   
>   
>   	/**
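
The hunks above only change the hook signatures; every driver ignores the new
argument for now.  As a hedged illustration of why the attaching device is
worth plumbing through (not part of this series; all `example_*` names are
hypothetical), a pin hook could use it to create a mapping with the
*importer's* dma_ops, which on arm performs the CPU cache maintenance a
virtual exporter cannot do itself:

```c
/*
 * Illustrative sketch only, not from this series: the signature matches
 * the new hook, but the body and all example_* helpers are hypothetical.
 */
static int example_prime_pin(struct drm_gem_object *obj, struct device *dev)
{
	struct example_bo *bo = to_example_bo(obj);	/* hypothetical */
	int ret;

	ret = example_get_pages(bo);			/* hypothetical */
	if (ret)
		return ret;

	/*
	 * Map for the attaching device: on arm this flushes CPU caches
	 * through that device's dma_ops, which the exporter may lack.
	 */
	if (!dma_map_sg(dev, bo->sgt->sgl, bo->sgt->orig_nents,
			DMA_BIDIRECTIONAL))
		return -ENOMEM;

	return 0;
}
```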



* Re: [PATCH v3 3/3] drm/vgem: use normal cached mmap'ings
  2019-07-17  0:13     ` Rob Clark
@ 2019-07-19  9:13       ` Daniel Vetter
  0 siblings, 0 replies; 8+ messages in thread
From: Daniel Vetter @ 2019-07-19  9:13 UTC (permalink / raw)
  To: Rob Clark
  Cc: Eric Anholt, dri-devel, Chris Wilson, Intel Graphics Development,
	Rob Clark, David Airlie, Daniel Vetter, Emil Velikov,
	Eric Biggers, Deepak Sharma, Linux Kernel Mailing List

On Tue, Jul 16, 2019 at 05:13:10PM -0700, Rob Clark wrote:
> On Tue, Jul 16, 2019 at 4:39 PM Eric Anholt <eric@anholt.net> wrote:
> >
> > Rob Clark <robdclark@gmail.com> writes:
> >
> > > From: Rob Clark <robdclark@chromium.org>
> > >
> > > Since there is no real device associated with VGEM, it is impossible to
> > > end up with appropriate dev->dma_ops, meaning that we have no way to
> > > invalidate the shmem pages allocated by VGEM.  So, at least on platforms
> > > without drm_cflush_pages(), we end up with corruption when cache lines
> > > from previous usage of VGEM bo pages get evicted to memory.
> > >
> > > The only sane option is to use cached mappings.
> >
> > This may be an improvement, but...
> >
> > pin/unpin is only on attaching/closing the dma-buf, right?  So, great,
> > you flushed the cached map once after exporting the vgem dma-buf to the
> > actual GPU device, but from then on you still have no interface for
> > getting coherent access through VGEM's mapping again, which still
> > exists.
> 
> In *theory* one would detach before doing further CPU access to the
> buffer, and then re-attach when passing it back to the GPU.
> 
> Ofc that isn't how actual drivers do things.  But maybe it is enough
> for vgem to serve its purpose (i.e. test code).
> 
> > I feel like this is papering over something that's really just broken,
> > and we should stop providing VGEM just because someone wants to write
> > dma-buf test code without driver-specific BO alloc ioctl code.
> 
> yup, it is vgem that is fundamentally broken (or maybe more
> specifically doesn't fit in w/ dma-mapping's view of how to do cache
> maintenance), and I'm just papering over it because people and CI systems
> want to be able to use it to do some dma-buf tests ;-)
> 
> I'm kinda wondering, at least for arm/dt based systems, if there is a
> way (other than in early boot) that we can inject a vgem device node
> into the dtb.  That isn't a thing drivers should normally do, but since
> vgem is really just test infrastructure, it could (if possible) be a way
> to make dma-mapping happily think vgem is a real device.

Or we just extend drm_cflush_pages with the cflushing we need (at least
for those arms where this is possible, let's ignore the others) and accept
for a few more years that dma-api doesn't fit?

Note this would need to be a full copypasta of what the arch code has
(since just exporting the function was shot down before), but I really
don't care about the resulting wailing if we do this.
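
A hedged sketch of what such copypasta might look like on arm64 (the
`dc civac` clean+invalidate-by-VA instruction is real ARMv8; the helper
itself is hypothetical and hardcodes a 64-byte cache line where the arch
code reads the line size from CTR_EL0):

```c
/*
 * Hedged sketch only: a per-page clean+invalidate loop of the kind an
 * extended drm_cflush_pages() would need on arm64.  The example_* name
 * is hypothetical; a real version must probe the cache line size.
 */
static void example_cflush_page(struct page *page)
{
	void *addr = kmap_atomic(page);
	unsigned long i;

	for (i = 0; i < PAGE_SIZE; i += 64)
		asm volatile("dc civac, %0" : : "r" (addr + i) : "memory");
	dsb(sy);	/* make the maintenance visible system-wide */

	kunmap_atomic(addr);
}
```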
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


end of thread, other threads:[~2019-07-19  9:13 UTC | newest]

Thread overview: 8+ messages
-- links below jump to the message on this page --
2019-07-16 21:37 [PATCH v3 1/3] drm/gem: don't force writecombine mmap'ing Rob Clark
2019-07-16 21:37 ` [PATCH v3 2/3] drm: plumb attaching dev thru to prime_pin/unpin Rob Clark
2019-07-17  6:59   ` Koenig, Christian
2019-07-16 21:37 ` [PATCH v3 3/3] drm/vgem: use normal cached mmap'ings Rob Clark
2019-07-16 23:39   ` Eric Anholt
2019-07-17  0:13     ` Rob Clark
2019-07-19  9:13       ` Daniel Vetter
2019-07-16 23:21 ` [Intel-gfx] [PATCH v3 1/3] drm/gem: don't force writecombine mmap'ing Eric Anholt
