* [PATCH v2 0/7] drm/vram-helper: Lock GEM BOs while they are mapped
@ 2020-12-03 14:02 ` Thomas Zimmermann
  0 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 14:02 UTC (permalink / raw)
  To: airlied, daniel, maarten.lankhorst, mripard, hdegoede,
	christian.koenig, sumit.semwal
  Cc: dri-devel, linux-media, linaro-mm-sig, Thomas Zimmermann

GEM VRAM helpers used to pin the BO in their implementation of vmap, so
that it could not be relocated. In a recent discussion [1], it became
clear that this is incorrect and that vmap should instead rely on the
reservation lock to prevent relocation. This patchset addresses the issue.

Version 2 of the patchset is a significant rework. In particular, the
helper function drm_gem_vram_vmap_unlocked() is gone and importers now
acquire the reservation locks by themselves. I carried over Christian's
Acked-bys only for the patches that were not affected.

Patches 1 and 2 prepare the ast cursor code for the later changes.

Patch 3 pushes the BO locking into callers of vmap and vunmap. Besides
the VRAM helpers, this affects ast, vboxvideo and the generic fbdev
emulation.

Patches 4 to 6 clean up afterwards. BO pinning is not required any longer
and the VRAM-internal helpers are not needed.

Patch 7 adds documentation to vmap and vunmap in struct dma_buf_ops. It
hopefully reflects the discussion on the patchset's version 1.

Tested on ast with GEM VRAM and also on mgag200 to verify that the fbdev
change does not interfere with GEM SHMEM.

v2:
	* make importers acquire resv locks by themselves
	* document dma-buf vmap/vunmap ops

Thomas Zimmermann (7):
  drm/ast: Don't pin cursor source BO explicitly during update
  drm/ast: Only map cursor BOs during updates
  drm/vram-helper: Move BO locking from vmap code into callers
  drm/vram-helper: Remove pinning from drm_gem_vram_{vmap,vunmap}()
  drm/vram-helper: Remove vmap reference counting
  drm/vram-helper: Simplify vmap implementation
  dma-buf: Write down some rules for vmap usage

 Documentation/gpu/todo.rst            | 15 +++++
 drivers/gpu/drm/ast/ast_cursor.c      | 70 +++++++++++++---------
 drivers/gpu/drm/ast/ast_drv.h         |  2 -
 drivers/gpu/drm/drm_client.c          | 31 ++++++++++
 drivers/gpu/drm/drm_fb_helper.c       | 10 +++-
 drivers/gpu/drm/drm_gem_vram_helper.c | 85 ++++++---------------------
 drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 ++--
 include/drm/drm_client.h              |  2 +
 include/drm/drm_gem.h                 |  4 ++
 include/drm/drm_gem_vram_helper.h     | 17 +-----
 include/linux/dma-buf.h               | 45 ++++++++++++++
 11 files changed, 175 insertions(+), 117 deletions(-)

--
2.29.2


^ permalink raw reply	[flat|nested] 36+ messages in thread


* [PATCH v2 1/7] drm/ast: Don't pin cursor source BO explicitly during update
  2020-12-03 14:02 ` Thomas Zimmermann
@ 2020-12-03 14:02   ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 14:02 UTC (permalink / raw)
  To: airlied, daniel, maarten.lankhorst, mripard, hdegoede,
	christian.koenig, sumit.semwal
  Cc: dri-devel, linux-media, linaro-mm-sig, Thomas Zimmermann

Vmapping the cursor source BO already includes an implicit pin operation,
so there's no need to pin it manually.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/ast/ast_cursor.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index 742d43a7edf4..68bf3d33f1ed 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -180,12 +180,9 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 
 	gbo = drm_gem_vram_of_gem(fb->obj[0]);
 
-	ret = drm_gem_vram_pin(gbo, 0);
-	if (ret)
-		return ret;
 	ret = drm_gem_vram_vmap(gbo, &map);
 	if (ret)
-		goto err_drm_gem_vram_unpin;
+		return ret;
 	src = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
@@ -194,13 +191,8 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 	update_cursor_image(dst, src, fb->width, fb->height);
 
 	drm_gem_vram_vunmap(gbo, &map);
-	drm_gem_vram_unpin(gbo);
 
 	return 0;
-
-err_drm_gem_vram_unpin:
-	drm_gem_vram_unpin(gbo);
-	return ret;
 }
 
 static void ast_cursor_set_base(struct ast_private *ast, u64 address)
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread


* [PATCH v2 2/7] drm/ast: Only map cursor BOs during updates
  2020-12-03 14:02 ` Thomas Zimmermann
@ 2020-12-03 14:02   ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 14:02 UTC (permalink / raw)
  To: airlied, daniel, maarten.lankhorst, mripard, hdegoede,
	christian.koenig, sumit.semwal
  Cc: dri-devel, linux-media, linaro-mm-sig, Thomas Zimmermann

The HW cursor's BO used to be mapped permanently into the kernel's
address space. GEM's vmap operation will be protected by locks, and
we don't want to lock the BOs for an indefinite period of time.

Change the cursor code to map the HW BOs only during updates. The
vmap operation in VRAM helpers is cheap, as an established mapping
is reused until the BO actually moves. As the HW cursor BOs are
permanently pinned, they never move at all.

v2:
	* fix typos in commit description

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/ast/ast_cursor.c | 51 ++++++++++++++++++--------------
 drivers/gpu/drm/ast/ast_drv.h    |  2 --
 2 files changed, 28 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index 68bf3d33f1ed..fac1ee79c372 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,6 @@ static void ast_cursor_fini(struct ast_private *ast)
 
 	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -53,14 +52,13 @@ static void ast_cursor_release(struct drm_device *dev, void *ptr)
 }
 
 /*
- * Allocate cursor BOs and pins them at the end of VRAM.
+ * Allocate cursor BOs and pin them at the end of VRAM.
  */
 int ast_cursor_init(struct ast_private *ast)
 {
 	struct drm_device *dev = &ast->base;
 	size_t size, i;
 	struct drm_gem_vram_object *gbo;
-	struct dma_buf_map map;
 	int ret;
 
 	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,15 +75,7 @@ int ast_cursor_init(struct ast_private *ast)
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
-		ret = drm_gem_vram_vmap(gbo, &map);
-		if (ret) {
-			drm_gem_vram_unpin(gbo);
-			drm_gem_vram_put(gbo);
-			goto err_drm_gem_vram_put;
-		}
-
 		ast->cursor.gbo[i] = gbo;
-		ast->cursor.map[i] = map;
 	}
 
 	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -94,7 +84,6 @@ int ast_cursor_init(struct ast_private *ast)
 	while (i) {
 		--i;
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -168,31 +157,38 @@ static void update_cursor_image(u8 __iomem *dst, const u8 *src, int width, int h
 int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 {
 	struct drm_device *dev = &ast->base;
-	struct drm_gem_vram_object *gbo;
-	struct dma_buf_map map;
-	int ret;
-	void *src;
+	struct drm_gem_vram_object *dst_gbo = ast->cursor.gbo[ast->cursor.next_index];
+	struct drm_gem_vram_object *src_gbo = drm_gem_vram_of_gem(fb->obj[0]);
+	struct dma_buf_map src_map, dst_map;
 	void __iomem *dst;
+	void *src;
+	int ret;
 
 	if (drm_WARN_ON_ONCE(dev, fb->width > AST_MAX_HWC_WIDTH) ||
 	    drm_WARN_ON_ONCE(dev, fb->height > AST_MAX_HWC_HEIGHT))
 		return -EINVAL;
 
-	gbo = drm_gem_vram_of_gem(fb->obj[0]);
-
-	ret = drm_gem_vram_vmap(gbo, &map);
+	ret = drm_gem_vram_vmap(src_gbo, &src_map);
 	if (ret)
 		return ret;
-	src = map.vaddr; /* TODO: Use mapping abstraction properly */
+	src = src_map.vaddr; /* TODO: Use mapping abstraction properly */
 
-	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
+	ret = drm_gem_vram_vmap(dst_gbo, &dst_map);
+	if (ret)
+		goto err_drm_gem_vram_vunmap;
+	dst = dst_map.vaddr_iomem; /* TODO: Use mapping abstraction properly */
 
 	/* do data transfer to cursor BO */
 	update_cursor_image(dst, src, fb->width, fb->height);
 
-	drm_gem_vram_vunmap(gbo, &map);
+	drm_gem_vram_vunmap(dst_gbo, &dst_map);
+	drm_gem_vram_vunmap(src_gbo, &src_map);
 
 	return 0;
+
+err_drm_gem_vram_vunmap:
+	drm_gem_vram_vunmap(src_gbo, &src_map);
+	return ret;
 }
 
 static void ast_cursor_set_base(struct ast_private *ast, u64 address)
@@ -243,17 +239,26 @@ static void ast_cursor_set_location(struct ast_private *ast, u16 x, u16 y,
 void ast_cursor_show(struct ast_private *ast, int x, int y,
 		     unsigned int offset_x, unsigned int offset_y)
 {
+	struct drm_device *dev = &ast->base;
+	struct drm_gem_vram_object *gbo = ast->cursor.gbo[ast->cursor.next_index];
+	struct dma_buf_map map;
 	u8 x_offset, y_offset;
 	u8 __iomem *dst;
 	u8 __iomem *sig;
 	u8 jreg;
+	int ret;
 
-	dst = ast->cursor.map[ast->cursor.next_index].vaddr;
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (drm_WARN_ONCE(dev, ret, "drm_gem_vram_vmap() failed, ret=%d\n", ret))
+		return;
+	dst = map.vaddr_iomem; /* TODO: Use mapping abstraction properly */
 
 	sig = dst + AST_HWC_SIZE;
 	writel(x, sig + AST_HWC_SIGNATURE_X);
 	writel(y, sig + AST_HWC_SIGNATURE_Y);
 
+	drm_gem_vram_vunmap(gbo, &map);
+
 	if (x < 0) {
 		x_offset = (-x) + offset_x;
 		x = 0;
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index ccaff81924ee..f871fc36c2f7 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,7 +28,6 @@
 #ifndef __AST_DRV_H__
 #define __AST_DRV_H__
 
-#include <linux/dma-buf-map.h>
 #include <linux/i2c.h>
 #include <linux/i2c-algo-bit.h>
 #include <linux/io.h>
@@ -133,7 +132,6 @@ struct ast_private {
 
 	struct {
 		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
-		struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
 		unsigned int next_index;
 	} cursor;
 
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread


* [PATCH v2 3/7] drm/vram-helper: Move BO locking from vmap code into callers
  2020-12-03 14:02 ` Thomas Zimmermann
@ 2020-12-03 14:02   ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 14:02 UTC (permalink / raw)
  To: airlied, daniel, maarten.lankhorst, mripard, hdegoede,
	christian.koenig, sumit.semwal
  Cc: dri-devel, linux-media, linaro-mm-sig, Thomas Zimmermann

Implementations of the vmap/vunmap callbacks may expect that the caller
holds the reservation lock. Therefore push the locking from vmap and
vunmap into the callers.

This affects fbdev emulation and cursor updates in ast and vboxvideo.
The ast and vboxvideo drivers acquire the BO's reservation lock directly.

Fbdev emulation uses DRM client helpers for locking. This is done solely
for consistency with the rest of the interface, as fbdev emulation tries
to avoid calling GEM interfaces directly.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/ast/ast_cursor.c      | 21 ++++++++++++++++--
 drivers/gpu/drm/drm_client.c          | 31 +++++++++++++++++++++++++++
 drivers/gpu/drm/drm_fb_helper.c       | 10 +++++++--
 drivers/gpu/drm/drm_gem_vram_helper.c | 18 +++-------------
 drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 ++++++----
 include/drm/drm_client.h              |  2 ++
 6 files changed, 70 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index fac1ee79c372..15e5c4fd301d 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -159,6 +159,8 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 	struct drm_device *dev = &ast->base;
 	struct drm_gem_vram_object *dst_gbo = ast->cursor.gbo[ast->cursor.next_index];
 	struct drm_gem_vram_object *src_gbo = drm_gem_vram_of_gem(fb->obj[0]);
+	struct drm_gem_object *objs[] = {&src_gbo->bo.base, &dst_gbo->bo.base};
+	struct ww_acquire_ctx ctx;
 	struct dma_buf_map src_map, dst_map;
 	void __iomem *dst;
 	void *src;
@@ -168,9 +170,13 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 	    drm_WARN_ON_ONCE(dev, fb->height > AST_MAX_HWC_HEIGHT))
 		return -EINVAL;
 
-	ret = drm_gem_vram_vmap(src_gbo, &src_map);
+	ret = drm_gem_lock_reservations(objs, ARRAY_SIZE(objs), &ctx);
 	if (ret)
 		return ret;
+
+	ret = drm_gem_vram_vmap(src_gbo, &src_map);
+	if (ret)
+		goto err_drm_gem_unlock_reservations;
 	src = src_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	ret = drm_gem_vram_vmap(dst_gbo, &dst_map);
@@ -184,10 +190,14 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 	drm_gem_vram_vunmap(dst_gbo, &dst_map);
 	drm_gem_vram_vunmap(src_gbo, &src_map);
 
+	drm_gem_unlock_reservations(objs, ARRAY_SIZE(objs), &ctx);
+
 	return 0;
 
 err_drm_gem_vram_vunmap:
 	drm_gem_vram_vunmap(src_gbo, &src_map);
+err_drm_gem_unlock_reservations:
+	drm_gem_unlock_reservations(objs, ARRAY_SIZE(objs), &ctx);
 	return ret;
 }
 
@@ -241,6 +251,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
 {
 	struct drm_device *dev = &ast->base;
 	struct drm_gem_vram_object *gbo = ast->cursor.gbo[ast->cursor.next_index];
+	struct drm_gem_object *obj = &gbo->bo.base;
 	struct dma_buf_map map;
 	u8 x_offset, y_offset;
 	u8 __iomem *dst;
@@ -248,9 +259,14 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
 	u8 jreg;
 	int ret;
 
+	ret = dma_resv_lock(obj->resv, NULL);
+	if (ret)
+		return;
 	ret = drm_gem_vram_vmap(gbo, &map);
-	if (drm_WARN_ONCE(dev, ret, "drm_gem_vram_vmap() failed, ret=%d\n", ret))
+	if (drm_WARN_ONCE(dev, ret, "drm_gem_vram_vmap() failed, ret=%d\n", ret)) {
+		dma_resv_unlock(obj->resv);
 		return;
+	}
 	dst = map.vaddr_iomem; /* TODO: Use mapping abstraction properly */
 
 	sig = dst + AST_HWC_SIZE;
@@ -258,6 +274,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
 	writel(y, sig + AST_HWC_SIGNATURE_Y);
 
 	drm_gem_vram_vunmap(gbo, &map);
+	dma_resv_unlock(obj->resv);
 
 	if (x < 0) {
 		x_offset = (-x) + offset_x;
diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index ce45e380f4a2..82453ca0b3ec 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -288,6 +288,37 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
 	return ERR_PTR(ret);
 }
 
+/**
+ * drm_client_buffer_lock - Locks the DRM client buffer
+ * @buffer: DRM client buffer
+ *
+ * This function locks the client buffer by acquiring the buffer
+ * object's reservation lock.
+ *
+ * Unlock the buffer with drm_client_buffer_unlock().
+ *
+ * Returns:
+ *	0 on success, or a negative errno code otherwise.
+ */
+int
+drm_client_buffer_lock(struct drm_client_buffer *buffer)
+{
+	return dma_resv_lock(buffer->gem->resv, NULL);
+}
+EXPORT_SYMBOL(drm_client_buffer_lock);
+
+/**
+ * drm_client_buffer_unlock - Unlock DRM client buffer
+ * @buffer: DRM client buffer
+ *
+ * Unlocks a client buffer. See drm_client_buffer_lock().
+ */
+void drm_client_buffer_unlock(struct drm_client_buffer *buffer)
+{
+	dma_resv_unlock(buffer->gem->resv);
+}
+EXPORT_SYMBOL(drm_client_buffer_unlock);
+
 /**
  * drm_client_buffer_vmap - Map DRM client buffer into address space
  * @buffer: DRM client buffer
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 4b8119510687..97856d9194de 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -411,16 +411,22 @@ static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper,
 	 */
 	mutex_lock(&fb_helper->lock);
 
+	ret = drm_client_buffer_lock(buffer);
+	if (ret)
+		goto out_mutex_unlock;
+
 	ret = drm_client_buffer_vmap(buffer, &map);
 	if (ret)
-		goto out;
+		goto out_drm_client_buffer_unlock;
 
 	dst = map;
 	drm_fb_helper_damage_blit_real(fb_helper, clip, &dst);
 
 	drm_client_buffer_vunmap(buffer);
 
-out:
+out_drm_client_buffer_unlock:
+	drm_client_buffer_unlock(buffer);
+out_mutex_unlock:
 	mutex_unlock(&fb_helper->lock);
 
 	return ret;
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 02ca22e90290..35a30dafccce 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -440,25 +440,19 @@ int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
 
-	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
-	if (ret)
-		return ret;
+	dma_resv_assert_held(gbo->bo.base.resv);
 
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
-		goto err_ttm_bo_unreserve;
+		return ret;
 	ret = drm_gem_vram_kmap_locked(gbo, map);
 	if (ret)
 		goto err_drm_gem_vram_unpin_locked;
 
-	ttm_bo_unreserve(&gbo->bo);
-
 	return 0;
 
 err_drm_gem_vram_unpin_locked:
 	drm_gem_vram_unpin_locked(gbo);
-err_ttm_bo_unreserve:
-	ttm_bo_unreserve(&gbo->bo);
 	return ret;
 }
 EXPORT_SYMBOL(drm_gem_vram_vmap);
@@ -473,16 +467,10 @@ EXPORT_SYMBOL(drm_gem_vram_vmap);
  */
 void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
-	int ret;
-
-	ret = ttm_bo_reserve(&gbo->bo, false, false, NULL);
-	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
-		return;
+	dma_resv_assert_held(gbo->bo.base.resv);
 
 	drm_gem_vram_kunmap_locked(gbo, map);
 	drm_gem_vram_unpin_locked(gbo);
-
-	ttm_bo_unreserve(&gbo->bo);
 }
 EXPORT_SYMBOL(drm_gem_vram_vunmap);
 
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index dbc0dd53c69e..8b1a8522144e 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -381,7 +381,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 		container_of(plane->dev, struct vbox_private, ddev);
 	struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc);
 	struct drm_framebuffer *fb = plane->state->fb;
-	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]);
+	struct drm_gem_object *obj = fb->obj[0];
+	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj);
 	u32 width = plane->state->crtc_w;
 	u32 height = plane->state->crtc_h;
 	size_t data_size, mask_size;
@@ -401,11 +402,12 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 
 	vbox_crtc->cursor_enabled = true;
 
+	ret = dma_resv_lock(obj->resv, NULL);
+	if (ret)
+		return;
 	ret = drm_gem_vram_vmap(gbo, &map);
 	if (ret) {
-		/*
-		 * BUG: we should have pinned the BO in prepare_fb().
-		 */
+		dma_resv_unlock(obj->resv);
 		mutex_unlock(&vbox->hw_mutex);
 		DRM_WARN("Could not map cursor bo, skipping update\n");
 		return;
@@ -422,6 +424,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 
 	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
 	drm_gem_vram_vunmap(gbo, &map);
+	dma_resv_unlock(obj->resv);
 
 	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
 		VBOX_MOUSE_POINTER_ALPHA;
diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
index f07f2fb02e75..1cf811471fc4 100644
--- a/include/drm/drm_client.h
+++ b/include/drm/drm_client.h
@@ -156,6 +156,8 @@ struct drm_client_buffer *
 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
 void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
 int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
+int drm_client_buffer_lock(struct drm_client_buffer *buffer);
+void drm_client_buffer_unlock(struct drm_client_buffer *buffer);
 int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
 
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 4/7] drm/vram-helper: Remove pinning from drm_gem_vram_{vmap,vunmap}()
  2020-12-03 14:02 ` Thomas Zimmermann
@ 2020-12-03 14:02   ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 14:02 UTC (permalink / raw)
  To: airlied, daniel, maarten.lankhorst, mripard, hdegoede,
	christian.koenig, sumit.semwal
  Cc: dri-devel, linux-media, linaro-mm-sig, Thomas Zimmermann

BO pinning was never meant to be part of a GEM object's vmap operation.
Remove it from the related code in VRAM helpers.
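
With the implicit pin gone, a caller that wants a stable mapping has to hold
the reservation lock for the lifetime of the mapping, or pin the BO itself.
A minimal sketch of the resulting pattern (helper names as used in this
series; the example function itself is hypothetical, error handling
abbreviated):

```c
/*
 * Sketch only, not part of the patch. If the mapping must stay valid
 * after unlocking, the caller also has to pin the BO explicitly, e.g.
 * with drm_gem_vram_pin().
 */
static int example_access_vram_bo(struct drm_gem_vram_object *gbo)
{
	struct dma_buf_map map;
	int ret;

	ret = dma_resv_lock(gbo->bo.base.resv, NULL); /* caller locks the BO */
	if (ret)
		return ret;

	ret = drm_gem_vram_vmap(gbo, &map); /* no implicit pinning any longer */
	if (ret)
		goto out_unlock;

	/* ... access map.vaddr or map.vaddr_iomem while the lock is held ... */

	drm_gem_vram_vunmap(gbo, &map);
out_unlock:
	dma_resv_unlock(gbo->bo.base.resv);
	return ret;
}
```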

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_vram_helper.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 35a30dafccce..760d77c6c3c0 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -438,22 +438,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
  */
 int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
-	int ret;
-
 	dma_resv_assert_held(gbo->bo.base.resv);
 
-	ret = drm_gem_vram_pin_locked(gbo, 0);
-	if (ret)
-		return ret;
-	ret = drm_gem_vram_kmap_locked(gbo, map);
-	if (ret)
-		goto err_drm_gem_vram_unpin_locked;
-
-	return 0;
-
-err_drm_gem_vram_unpin_locked:
-	drm_gem_vram_unpin_locked(gbo);
-	return ret;
+	return drm_gem_vram_kmap_locked(gbo, map);
 }
 EXPORT_SYMBOL(drm_gem_vram_vmap);
 
@@ -470,7 +457,6 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *ma
 	dma_resv_assert_held(gbo->bo.base.resv);
 
 	drm_gem_vram_kunmap_locked(gbo, map);
-	drm_gem_vram_unpin_locked(gbo);
 }
 EXPORT_SYMBOL(drm_gem_vram_vunmap);
 
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 5/7] drm/vram-helper: Remove vmap reference counting
  2020-12-03 14:02 ` Thomas Zimmermann
@ 2020-12-03 14:02   ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 14:02 UTC (permalink / raw)
  To: airlied, daniel, maarten.lankhorst, mripard, hdegoede,
	christian.koenig, sumit.semwal
  Cc: dri-devel, linux-media, linaro-mm-sig, Thomas Zimmermann

Overlapping or nested mappings of the same BO are not allowed by the
semantics of the GEM vmap/vunmap operations. Concurrent access to the
GEM object is prevented by reservation locks.

So we don't need the reference counter in the GEM VRAM object. Remove
it.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_vram_helper.c | 19 ++++---------------
 include/drm/drm_gem_vram_helper.h     | 17 +++--------------
 2 files changed, 7 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 760d77c6c3c0..276e8f8ea663 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -113,7 +113,6 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
 	 * up; only release the GEM object.
 	 */
 
-	WARN_ON(gbo->vmap_use_count);
 	WARN_ON(dma_buf_map_is_set(&gbo->map));
 
 	drm_gem_object_release(&gbo->bo.base);
@@ -384,15 +383,10 @@ static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
 {
 	int ret;
 
-	if (gbo->vmap_use_count > 0)
-		goto out;
-
 	ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
 	if (ret)
 		return ret;
 
-out:
-	++gbo->vmap_use_count;
 	*map = gbo->map;
 
 	return 0;
@@ -403,15 +397,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
 {
 	struct drm_device *dev = gbo->bo.base.dev;
 
-	if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
-		return;
-
 	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
 		return; /* BUG: map not mapped from this BO */
 
-	if (--gbo->vmap_use_count > 0)
-		return;
-
 	/*
 	 * Permanently mapping and unmapping buffers adds overhead from
 	 * updating the page tables and creates debugging output. Therefore,
@@ -545,12 +533,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
 					       struct ttm_resource *new_mem)
 {
 	struct ttm_buffer_object *bo = &gbo->bo;
-	struct drm_device *dev = bo->base.dev;
+	struct dma_buf_map *map = &gbo->map;
 
-	if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
+	if (dma_buf_map_is_null(map))
 		return;
 
-	ttm_bo_vunmap(bo, &gbo->map);
+	ttm_bo_vunmap(bo, map);
+	dma_buf_map_clear(map);
 }
 
 static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index a4bac02249c2..48af238b5ca9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -41,25 +41,14 @@ struct vm_area_struct;
  * dedicated memory. The buffer object can be evicted to system memory if
  * video memory becomes scarce.
  *
- * GEM VRAM objects perform reference counting for pin and mapping
- * operations. So a buffer object that has been pinned N times with
- * drm_gem_vram_pin() must be unpinned N times with
- * drm_gem_vram_unpin(). The same applies to pairs of
- * drm_gem_vram_kmap() and drm_gem_vram_kunmap(), as well as pairs of
- * drm_gem_vram_vmap() and drm_gem_vram_vunmap().
+ * GEM VRAM objects perform reference counting for pin operations. So a
+ * buffer object that has been pinned N times with drm_gem_vram_pin() must
+ * be unpinned N times with drm_gem_vram_unpin().
  */
 struct drm_gem_vram_object {
 	struct ttm_buffer_object bo;
 	struct dma_buf_map map;
 
-	/**
-	 * @vmap_use_count:
-	 *
-	 * Reference count on the virtual address.
-	 * The address are un-mapped when the count reaches zero.
-	 */
-	unsigned int vmap_use_count;
-
 	/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
 	struct ttm_placement placement;
 	struct ttm_place placements[2];
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 6/7] drm/vram-helper: Simplify vmap implementation
  2020-12-03 14:02 ` Thomas Zimmermann
@ 2020-12-03 14:02   ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 14:02 UTC (permalink / raw)
  To: airlied, daniel, maarten.lankhorst, mripard, hdegoede,
	christian.koenig, sumit.semwal
  Cc: dri-devel, linux-media, linaro-mm-sig, Thomas Zimmermann

After removing the pinning operations, the vmap/vunmap code has been
reduced to what used to be an internal helper. Inline the helper to
simplify the implementation.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/drm_gem_vram_helper.c | 52 +++++++++++----------------
 1 file changed, 20 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 276e8f8ea663..6159f5dc8f1f 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -378,36 +378,6 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
-				    struct dma_buf_map *map)
-{
-	int ret;
-
-	ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
-	if (ret)
-		return ret;
-
-	*map = gbo->map;
-
-	return 0;
-}
-
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
-				       struct dma_buf_map *map)
-{
-	struct drm_device *dev = gbo->bo.base.dev;
-
-	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
-		return; /* BUG: map not mapped from this BO */
-
-	/*
-	 * Permanently mapping and unmapping buffers adds overhead from
-	 * updating the page tables and creates debugging output. Therefore,
-	 * we delay the actual unmap operation until the BO gets evicted
-	 * from memory. See drm_gem_vram_bo_driver_move_notify().
-	 */
-}
-
 /**
  * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
  *                       space
@@ -426,9 +396,17 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
  */
 int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
+	int ret;
+
 	dma_resv_assert_held(gbo->bo.base.resv);
 
-	return drm_gem_vram_kmap_locked(gbo, map);
+	ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
+	if (ret)
+		return ret;
+
+	*map = gbo->map;
+
+	return 0;
 }
 EXPORT_SYMBOL(drm_gem_vram_vmap);
 
@@ -442,9 +420,19 @@ EXPORT_SYMBOL(drm_gem_vram_vmap);
  */
 void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
+	struct drm_device *dev = gbo->bo.base.dev;
+
 	dma_resv_assert_held(gbo->bo.base.resv);
 
-	drm_gem_vram_kunmap_locked(gbo, map);
+	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
+		return; /* BUG: map not mapped from this BO */
+
+	/*
+	 * Permanently mapping and unmapping buffers adds overhead from
+	 * updating the page tables and creates debugging output. Therefore,
+	 * we delay the actual unmap operation until the BO gets evicted
+	 * from memory. See drm_gem_vram_bo_driver_move_notify().
+	 */
 }
 EXPORT_SYMBOL(drm_gem_vram_vunmap);
 
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
  2020-12-03 14:02 ` Thomas Zimmermann
@ 2020-12-03 14:02   ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 14:02 UTC (permalink / raw)
  To: airlied, daniel, maarten.lankhorst, mripard, hdegoede,
	christian.koenig, sumit.semwal
  Cc: dri-devel, linux-media, linaro-mm-sig, Thomas Zimmermann

Dma-buf's vmap and vunmap callbacks are undocumented and various
exporters currently have slightly different semantics for them. Add
documentation on how to implement and use these interfaces correctly.

v2:
	* document vmap semantics in struct dma_buf_ops
	* add TODO item for reviewing and maybe fixing dma-buf exporters
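
On the importer side, the documented rules translate into roughly the
following pattern (a sketch, not taken from this patchset; the example
function is hypothetical, and it assumes the importer only uses the mapping
while the reservation lock is held, or otherwise installs a move-notify
callback to learn about relocation):

```c
/* Sketch only; error handling abbreviated. */
static int example_import_vmap(struct dma_buf *dmabuf)
{
	struct dma_buf_map map;
	int ret;

	ret = dma_resv_lock(dmabuf->resv, NULL); /* callers must hold resv */
	if (ret)
		return ret;

	ret = dma_buf_vmap(dmabuf, &map); /* exporter may rely on resv being held */
	if (ret)
		goto out_unlock;

	/* ... use the mapping; the lock prevents relocation ... */

	dma_buf_vunmap(dmabuf, &map);
out_unlock:
	dma_resv_unlock(dmabuf->resv);
	return ret;
}
```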

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 Documentation/gpu/todo.rst | 15 +++++++++++++
 include/drm/drm_gem.h      |  4 ++++
 include/linux/dma-buf.h    | 45 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 64 insertions(+)

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 009d8e6c7e3c..32bb797a84fc 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -505,6 +505,21 @@ Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vette
 Level: Intermediate
 
 
+Enforce rules for dma-buf vmap and pin ops
+------------------------------------------
+
+Exporter implementations of vmap and pin in struct dma_buf_ops (and consequently
+struct drm_gem_object_funcs) use a variety of locking semantics. Some rely on
+the caller holding the dma-buf's reservation lock, some do their own locking,
+some don't require any locking. VRAM helpers even used to pin as part of vmap.
+
+We need to review each exporter and enforce the documented rules.
+
+Contact: Christian König, Daniel Vetter, Thomas Zimmermann <tzimmermann@suse.de>
+
+Level: Advanced
+
+
 Core refactorings
 =================
 
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 5e6daa1c982f..1864c6a721b1 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
 	 * drm_gem_dmabuf_vmap() helper.
 	 *
 	 * This callback is optional.
+	 *
+	 * See also struct dma_buf_ops.vmap
 	 */
 	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
@@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
 	 * drm_gem_dmabuf_vunmap() helper.
 	 *
 	 * This callback is optional.
+	 *
+	 * See also struct dma_buf_ops.vunmap
 	 */
 	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index cf72699cb2bc..dc81fdc01dda 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -267,7 +267,52 @@ struct dma_buf_ops {
 	 */
 	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
 
+	/**
+	 * @vmap:
+	 *
+	 * Returns a virtual address for the buffer.
+	 *
+	 * Notes to callers:
+	 *
+	 * - Callers must hold the struct dma_buf.resv lock before calling
+	 *   this interface.
+	 *
+	 * - Callers must provide means to prevent the mappings from going
+	 *   stale, such as holding the reservation lock or providing a
+	 *   move-notify callback to the exporter.
+	 *
+	 * Notes to implementors:
+	 *
+	 * - Implementations must expect pairs of @vmap and @vunmap to be
+	 *   called frequently and should optimize for this case.
+	 *
+	 * - Implementations should avoid additional operations, such as
+	 *   pinning.
+	 *
+	 * - Implementations may expect the caller to hold the dma-buf's
+	 *   reservation lock to protect against concurrent calls and
+	 *   relocation.
+	 *
+	 * - Implementations may provide additional guarantees, such as working
+	 *   without holding the reservation lock.
+	 *
+	 * This callback is optional.
+	 *
+	 * Returns:
+	 *
+	 * 0 on success or a negative error code on failure.
+	 */
 	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
+
+	/**
+	 * @vunmap:
+	 *
+	 * Releases the address previously returned by @vmap.
+	 *
+	 * This callback is optional.
+	 *
+	 * See also @vmap()
+	 */
 	void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
 };
 
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 0/7] drm/vram-helper: Lock GEM BOs while they are mapped
  2020-12-03 14:02 ` Thomas Zimmermann
@ 2020-12-03 14:05   ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 14:05 UTC (permalink / raw)
  To: airlied, daniel, maarten.lankhorst, mripard, hdegoede,
	christian.koenig, sumit.semwal
  Cc: dri-devel, linux-media, linaro-mm-sig





Am 03.12.20 um 15:02 schrieb Thomas Zimmermann:
> GEM VRAM helpers used to pin the BO in their implementation of vmap, so
> that they could not be relocated. In a recent discussion, [1] it became

Grrr, [1] was again supposed to point to the discussion at

   https://patchwork.freedesktop.org/patch/400054/?series=83765&rev=1

> clear that this is incorrect and that vmap should rather depend on the
> reservation lock to prevent relocation. This patchset addresses the issue.
> 
> Version 2 of the patchset is a significant rework. In particular, the
> helper function drm_gem_vram_vmap_unlocked() is gone and importers now
> acquire the reservation locks by themselves. I took Christian's A-bs only
> for patches that were not affected.
> 
> Patches 1 and 2 prepare the ast cursor code for the later changes.
> 
> Patch 3 pushes the BO locking into callers of vmap and vunmap. Besides
> the VRAM helpers, this affects ast, vboxvideo and the generic fbdev
> emulation.
> 
> Patches 4 to 6 clean up afterwards. BO pinning is not required any longer
> and the VRAM-internal helpers are not needed.
> 
> Patch 7 adds documentation to vmap and vunmap in struct dma_buf_ops. It
> hopefully reflects the discussion on the patchset's version 1.
> 
> Tested on ast with GEM VRAM and also on mgag200 to verify that the fbdev
> change does not interfere with GEM SHMEM.
> 
> v2:
> 	* make importers acquire resv locks by themselves
> 	* document dma-buf vmap/vunmap ops
> 
> Thomas Zimmermann (7):
>    drm/ast: Don't pin cursor source BO explicitly during update
>    drm/ast: Only map cursor BOs during updates
>    drm/vram-helper: Move BO locking from vmap code into callers
>    drm/vram-helper: Remove pinning from drm_gem_vram_{vmap,vunmap}()
>    drm/vram-helper: Remove vmap reference counting
>    drm/vram-helper: Simplify vmap implementation
>    dma-buf: Write down some rules for vmap usage
> 
>   Documentation/gpu/todo.rst            | 15 +++++
>   drivers/gpu/drm/ast/ast_cursor.c      | 70 +++++++++++++---------
>   drivers/gpu/drm/ast/ast_drv.h         |  2 -
>   drivers/gpu/drm/drm_client.c          | 31 ++++++++++
>   drivers/gpu/drm/drm_fb_helper.c       | 10 +++-
>   drivers/gpu/drm/drm_gem_vram_helper.c | 85 ++++++---------------------
>   drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 ++--
>   include/drm/drm_client.h              |  2 +
>   include/drm/drm_gem.h                 |  4 ++
>   include/drm/drm_gem_vram_helper.h     | 17 +-----
>   include/linux/dma-buf.h               | 45 ++++++++++++++
>   11 files changed, 175 insertions(+), 117 deletions(-)
> 
> --
> 2.29.2
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer




* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
  2020-12-03 14:02   ` Thomas Zimmermann
@ 2020-12-03 15:26     ` Daniel Vetter
  -1 siblings, 0 replies; 36+ messages in thread
From: Daniel Vetter @ 2020-12-03 15:26 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: airlied, daniel, maarten.lankhorst, mripard, hdegoede,
	christian.koenig, sumit.semwal, dri-devel, linux-media,
	linaro-mm-sig

On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
> Dma-buf's vmap and vunmap callbacks are undocumented and various
> exporters currently have slightly different semantics for them. Add
> documentation on how to implement and use these interfaces correctly.
> 
> v2:
> 	* document vmap semantics in struct dma_buf_ops
> 	* add TODO item for reviewing and maybe fixing dma-buf exporters
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

I still don't think this works, we're breaking dma_buf_vmap for everyone
else here.

> ---
>  Documentation/gpu/todo.rst | 15 +++++++++++++
>  include/drm/drm_gem.h      |  4 ++++
>  include/linux/dma-buf.h    | 45 ++++++++++++++++++++++++++++++++++++++
>  3 files changed, 64 insertions(+)
> 
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 009d8e6c7e3c..32bb797a84fc 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vette
>  Level: Intermediate
>  
>  
> +Enforce rules for dma-buf vmap and pin ops
> +------------------------------------------
> +
> +Exporter implementations of vmap and pin in struct dma_buf_ops (and consequently
> +struct drm_gem_object_funcs) use a variety of locking semantics. Some rely on
> +the caller holding the dma-buf's reservation lock, some do their own locking,
> +some don't require any locking. VRAM helpers even used to pin as part of vmap.
> +
> +We need to review each exporter and enforce the documented rules.
> +
> +Contact: Christian König, Daniel Vetter, Thomas Zimmermann <tzimmermann@suse.de>
> +
> +Level: Advanced
> +
> +
>  Core refactorings
>  =================
>  
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index 5e6daa1c982f..1864c6a721b1 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
>  	 * drm_gem_dmabuf_vmap() helper.
>  	 *
>  	 * This callback is optional.
> +	 *
> +	 * See also struct dma_buf_ops.vmap
>  	 */
>  	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
>  	 * drm_gem_dmabuf_vunmap() helper.
>  	 *
>  	 * This callback is optional.
> +	 *
> +	 * See also struct dma_buf_ops.vunmap
>  	 */
>  	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index cf72699cb2bc..dc81fdc01dda 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -267,7 +267,52 @@ struct dma_buf_ops {
>  	 */
>  	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>  
> +	/**
> +	 * @vmap:

There's already a @vmap and @vunmap kerneldoc at the top comment, that
needs to be removed.
-Daniel

> +	 *
> +	 * Returns a virtual address for the buffer.
> +	 *
> +	 * Notes to callers:
> +	 *
> +	 * - Callers must hold the struct dma_buf.resv lock before calling
> +	 *   this interface.
> +	 *
> +	 * - Callers must provide means to prevent the mappings from going
> +	 *   stale, such as holding the reservation lock or providing a
> +	 *   move-notify callback to the exporter.
> +	 *
> +	 * Notes to implementors:
> +	 *
> +	 * - Implementations must expect pairs of @vmap and @vunmap to be
> +	 *   called frequently and should optimize for this case.
> +	 *
> +	 * - Implementations should avoid additional operations, such as
> +	 *   pinning.
> +	 *
> +	 * - Implementations may expect the caller to hold the dma-buf's
> +	 *   reservation lock to protect against concurrent calls and
> +	 *   relocation.
> +	 *
> +	 * - Implementations may provide additional guarantees, such as working
> +	 *   without holding the reservation lock.
> +	 *
> +	 * This callback is optional.
> +	 *
> +	 * Returns:
> +	 *
> +	 * 0 on success or a negative error code on failure.
> +	 */
>  	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
> +
> +	/**
> +	 * @vunmap:
> +	 *
> +	 * Releases the address previously returned by @vmap.
> +	 *
> +	 * This callback is optional.
> +	 *
> +	 * See also @vmap()
> +	 */
>  	void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>  };
>  
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

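For readers unfamiliar with the kerneldoc convention Daniel refers to: a struct member can be documented either via `@member:` lines in the struct's top comment or with a per-member `/** ... */` comment inside the struct body, and documenting it in both places may make `scripts/kernel-doc` complain about duplicates. A minimal illustration of the fix he asks for (hypothetical struct, not the real dma_buf_ops):

```c
/* Before: @vmap documented twice, in the top comment and inline. */

/**
 * struct toy_ops_before - example operations table
 * @vmap: maps the buffer (duplicate of the inline doc below)
 */
struct toy_ops_before {
	/**
	 * @vmap:
	 *
	 * Returns a virtual address for the buffer.
	 */
	int (*vmap)(void *obj);
};

/* After: the top-comment entry is dropped; only the inline doc remains. */

/**
 * struct toy_ops_after - example operations table
 */
struct toy_ops_after {
	/**
	 * @vmap:
	 *
	 * Returns a virtual address for the buffer.
	 */
	int (*vmap)(void *obj);
};
```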

* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
  2020-12-03 15:26     ` Daniel Vetter
@ 2020-12-03 18:59       ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 18:59 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: linaro-mm-sig, hdegoede, dri-devel, airlied, christian.koenig,
	linux-media



Hi

Am 03.12.20 um 16:26 schrieb Daniel Vetter:
> On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
>> Dma-buf's vmap and vunmap callbacks are undocumented and various
>> exporters currently have slightly different semantics for them. Add
>> documentation on how to implement and use these interfaces correctly.
>>
>> v2:
>> 	* document vmap semantics in struct dma_buf_ops
>> 	* add TODO item for reviewing and maybe fixing dma-buf exporters
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> 
> I still don't think this works, we're breaking dma_buf_vmap for everyone
> else here.

I removed the text on the importer. These notes for callers in the docs 
are more or less a consequence of the exporter semantics.

I thought we at least agreed on the exporter semantics in the other 
thread, didn't we?

What I'm trying to do is to write down some rules for exporters, even if
not all exporters follow them.

Given the circumstances, we should leave out this patch from the patchset.

Best regards
Thomas

> 
>> ---
>>   Documentation/gpu/todo.rst | 15 +++++++++++++
>>   include/drm/drm_gem.h      |  4 ++++
>>   include/linux/dma-buf.h    | 45 ++++++++++++++++++++++++++++++++++++++
>>   3 files changed, 64 insertions(+)
>>
>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
>> index 009d8e6c7e3c..32bb797a84fc 100644
>> --- a/Documentation/gpu/todo.rst
>> +++ b/Documentation/gpu/todo.rst
>> @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vette
>>   Level: Intermediate
>>   
>>   
>> +Enforce rules for dma-buf vmap and pin ops
>> +------------------------------------------
>> +
>> +Exporter implementations of vmap and pin in struct dma_buf_ops (and consequently
>> +struct drm_gem_object_funcs) use a variety of locking semantics. Some rely on
>> +the caller holding the dma-buf's reservation lock, some do their own locking,
>> +some don't require any locking. VRAM helpers even used to pin as part of vmap.
>> +
>> +We need to review each exporter and enforce the documented rules.
>> +
>> +Contact: Christian König, Daniel Vetter, Thomas Zimmermann <tzimmermann@suse.de>
>> +
>> +Level: Advanced
>> +
>> +
>>   Core refactorings
>>   =================
>>   
>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>> index 5e6daa1c982f..1864c6a721b1 100644
>> --- a/include/drm/drm_gem.h
>> +++ b/include/drm/drm_gem.h
>> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
>>   	 * drm_gem_dmabuf_vmap() helper.
>>   	 *
>>   	 * This callback is optional.
>> +	 *
>> +	 * See also struct dma_buf_ops.vmap
>>   	 */
>>   	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>>   
>> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
>>   	 * drm_gem_dmabuf_vunmap() helper.
>>   	 *
>>   	 * This callback is optional.
>> +	 *
>> +	 * See also struct dma_buf_ops.vunmap
>>   	 */
>>   	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>>   
>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>> index cf72699cb2bc..dc81fdc01dda 100644
>> --- a/include/linux/dma-buf.h
>> +++ b/include/linux/dma-buf.h
>> @@ -267,7 +267,52 @@ struct dma_buf_ops {
>>   	 */
>>   	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>>   
>> +	/**
>> +	 * @vmap:
> 
> There's already a @vmap and @vunmap kerneldoc at the top comment, that
> needs to be removed.
> -Daniel
> 
>> +	 *
>> +	 * Returns a virtual address for the buffer.
>> +	 *
>> +	 * Notes to callers:
>> +	 *
>> +	 * - Callers must hold the struct dma_buf.resv lock before calling
>> +	 *   this interface.
>> +	 *
>> +	 * - Callers must provide means to prevent the mappings from going
>> +	 *   stale, such as holding the reservation lock or providing a
>> +	 *   move-notify callback to the exporter.
>> +	 *
>> +	 * Notes to implementors:
>> +	 *
>> +	 * - Implementations must expect pairs of @vmap and @vunmap to be
>> +	 *   called frequently and should optimize for this case.
>> +	 *
>> +	 * - Implementations should avoid additional operations, such as
>> +	 *   pinning.
>> +	 *
>> +	 * - Implementations may expect the caller to hold the dma-buf's
>> +	 *   reservation lock to protect against concurrent calls and
>> +	 *   relocation.
>> +	 *
>> +	 * - Implementations may provide additional guarantees, such as working
>> +	 *   without holding the reservation lock.
>> +	 *
>> +	 * This callback is optional.
>> +	 *
>> +	 * Returns:
>> +	 *
>> +	 * 0 on success or a negative error code on failure.
>> +	 */
>>   	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>> +
>> +	/**
>> +	 * @vunmap:
>> +	 *
>> +	 * Releases the address previously returned by @vmap.
>> +	 *
>> +	 * This callback is optional.
>> +	 *
>> +	 * See also @vmap()
>> +	 */
>>   	void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>   };
>>   
>> -- 
>> 2.29.2
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer




* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
@ 2020-12-03 18:59       ` Thomas Zimmermann
  0 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-03 18:59 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: dri-devel, linaro-mm-sig, hdegoede, airlied, christian.koenig,
	linux-media


[-- Attachment #1.1.1: Type: text/plain, Size: 5257 bytes --]

Hi

Am 03.12.20 um 16:26 schrieb Daniel Vetter:
> On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
>> Dma-buf's vmap and vunmap callbacks are undocumented and various
>> exporters currently have slightly different semantics for them. Add
>> documentation on how to implement and use these interfaces correctly.
>>
>> v2:
>> 	* document vmap semantics in struct dma_buf_ops
>> 	* add TODO item for reviewing and maybe fixing dma-buf exporters
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> 
> I still don't think this works, we're breaking dma_buf_vmap for everyone
> else here.

I removed the text on the importer. These notes for callers in the docs 
are more or less a consequence of the exporter semantics.

I thought we at least agreed on the exporter semantics in the other 
thread, didn't we?

What I'm trying to do is to write dome some rules for exporters, even if 
not all exporters follow them.

Given the circumstances, we should leave out this patch from the patchset.

Best regards
Thomas

> 
>> ---
>>   Documentation/gpu/todo.rst | 15 +++++++++++++
>>   include/drm/drm_gem.h      |  4 ++++
>>   include/linux/dma-buf.h    | 45 ++++++++++++++++++++++++++++++++++++++
>>   3 files changed, 64 insertions(+)
>>
>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
>> index 009d8e6c7e3c..32bb797a84fc 100644
>> --- a/Documentation/gpu/todo.rst
>> +++ b/Documentation/gpu/todo.rst
>> @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vette
>>   Level: Intermediate
>>   
>>   
>> +Enforce rules for dma-buf vmap and pin ops
>> +------------------------------------------
>> +
>> +Exporter implementations of vmap and pin in struct dma_buf_ops (and consequently
>> +struct drm_gem_object_funcs) use a variety of locking semantics. Some rely on
>> +the caller holding the dma-buf's reservation lock, some do their own locking,
>> +some don't require any locking. VRAM helpers even used to pin as part of vmap.
>> +
>> +We need to review each exporter and enforce the documented rules.
>> +
>> +Contact: Christian König, Daniel Vetter, Thomas Zimmermann <tzimmermann@suse.de>
>> +
>> +Level: Advanced
>> +
>> +
>>   Core refactorings
>>   =================
>>   
>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>> index 5e6daa1c982f..1864c6a721b1 100644
>> --- a/include/drm/drm_gem.h
>> +++ b/include/drm/drm_gem.h
>> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
>>   	 * drm_gem_dmabuf_vmap() helper.
>>   	 *
>>   	 * This callback is optional.
>> +	 *
>> +	 * See also struct dma_buf_ops.vmap
>>   	 */
>>   	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>>   
>> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
>>   	 * drm_gem_dmabuf_vunmap() helper.
>>   	 *
>>   	 * This callback is optional.
>> +	 *
>> +	 * See also struct dma_buf_ops.vunmap
>>   	 */
>>   	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>>   
>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>> index cf72699cb2bc..dc81fdc01dda 100644
>> --- a/include/linux/dma-buf.h
>> +++ b/include/linux/dma-buf.h
>> @@ -267,7 +267,52 @@ struct dma_buf_ops {
>>   	 */
>>   	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>>   
>> +	/**
>> +	 * @vmap:
> 
> There's already a @vmap and @vunamp kerneldoc at the top comment, that
> needs to be removed.
> -Daniel
> 
>> +	 *
>> +	 * Returns a virtual address for the buffer.
>> +	 *
>> +	 * Notes to callers:
>> +	 *
>> +	 * - Callers must hold the struct dma_buf.resv lock before calling
>> +	 *   this interface.
>> +	 *
>> +	 * - Callers must provide means to prevent the mappings from going
>> +	 *   stale, such as holding the reservation lock or providing a
>> +	 *   move-notify callback to the exporter.
>> +	 *
>> +	 * Notes to implementors:
>> +	 *
>> +	 * - Implementations must expect pairs of @vmap and @vunmap to be
>> +	 *   called frequently and should optimize for this case.
>> +	 *
>> +	 * - Implementations should avoid additional operations, such as
>> +	 *   pinning.
>> +	 *
>> +	 * - Implementations may expect the caller to hold the dma-buf's
>> +	 *   reservation lock to protect against concurrent calls and
>> +	 *   relocation.
>> +	 *
>> +	 * - Implementations may provide additional guarantees, such as working
>> +	 *   without holding the reservation lock.
>> +	 *
>> +	 * This callback is optional.
>> +	 *
>> +	 * Returns:
>> +	 *
>> +	 * 0 on success or a negative error code on failure.
>> +	 */
>>   	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>> +
>> +	/**
>> +	 * @vunmap:
>> +	 *
>> +	 * Releases the address previously returned by @vmap.
>> +	 *
>> +	 * This callback is optional.
>> +	 *
>> +	 * See also @vmap()
>> +	 */
>>   	void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>   };
>>   
>> -- 
>> 2.29.2
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
  2020-12-03 18:59       ` Thomas Zimmermann
@ 2020-12-03 20:41         ` Daniel Vetter
  -1 siblings, 0 replies; 36+ messages in thread
From: Daniel Vetter @ 2020-12-03 20:41 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Daniel Vetter, linaro-mm-sig, hdegoede, dri-devel, airlied,
	christian.koenig, linux-media

On Thu, Dec 03, 2020 at 07:59:04PM +0100, Thomas Zimmermann wrote:
> Hi
> 
> Am 03.12.20 um 16:26 schrieb Daniel Vetter:
> > On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
> > > Dma-buf's vmap and vunmap callbacks are undocumented and various
> > > exporters currently have slightly different semantics for them. Add
> > > documentation on how to implement and use these interfaces correctly.
> > > 
> > > v2:
> > > 	* document vmap semantics in struct dma_buf_ops
> > > 	* add TODO item for reviewing and maybe fixing dma-buf exporters
> > > 
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > 
> > I still don't think this works, we're breaking dma_buf_vmap for everyone
> > else here.
> 
> I removed the text on the importer. These notes for callers in the docs are
> more or less a consequence of the exporter semantics.

Callers are importers, so I'm not seeing how that fixes anything.

> I thought we at least agreed on the exporter semantics in the other thread,
> didn't we?
> 
> What I'm trying to do is to write down some rules for exporters, even if not
> all exporters follow them.

This is a standard interface, everyone needs to follow the same rules. And
if they change, we need to make sure nothing breaks and we're not creating
issues.

In the past the rule was that dma_buf_vmap was allowed to take the
dma_resv_lock, and that the buffer should be pinned. Now some ttm drivers
never bothered with the pinning, and mostly got away with that because the
drm_prime helpers pin by default at attach time, and most users do call
dma_buf_attach.

But if you look through the dma-buf docs, nothing ever said you have to do a
dummy attachment before you call dma_buf_vmap; that's just slightly crappy
implementations that haven't blown up yet.

> Given the circumstances, we should leave out this patch from the patchset.

So the de facto rules are already a big mess, but that's not a good excuse
to make it worse.

What I had in mind is that we split dma_buf_vmap up into two variants:

- The current one, which should guarantee that the buffer is pinned.
  Because that's what all current callers wanted, before the fbdev code
  started allowing non-pinned buffers.

- The new one, which allows vmapping with just dma_resv locked, and should
  have some caching in exporters.

Breaking code and then adding todos about that is kinda not so cool an
approach here imo.

Also I guess ttm_bo_vmap should have a check that either the buffer is
pinned, or dma_resv_lock is held.

Cheers, Daniel



> 
> Best regards
> Thomas
> 
> > 
> > > ---
> > >   Documentation/gpu/todo.rst | 15 +++++++++++++
> > >   include/drm/drm_gem.h      |  4 ++++
> > >   include/linux/dma-buf.h    | 45 ++++++++++++++++++++++++++++++++++++++
> > >   3 files changed, 64 insertions(+)
> > > 
> > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > > index 009d8e6c7e3c..32bb797a84fc 100644
> > > --- a/Documentation/gpu/todo.rst
> > > +++ b/Documentation/gpu/todo.rst
> > > @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vette
> > >   Level: Intermediate
> > > +Enforce rules for dma-buf vmap and pin ops
> > > +------------------------------------------
> > > +
> > > +Exporter implementations of vmap and pin in struct dma_buf_ops (and consequently
> > > +struct drm_gem_object_funcs) use a variety of locking semantics. Some rely on
> > > +the caller holding the dma-buf's reservation lock, some do their own locking,
> > > +some don't require any locking. VRAM helpers even used to pin as part of vmap.
> > > +
> > > +We need to review each exporter and enforce the documented rules.
> > > +
> > > +Contact: Christian König, Daniel Vetter, Thomas Zimmermann <tzimmermann@suse.de>
> > > +
> > > +Level: Advanced
> > > +
> > > +
> > >   Core refactorings
> > >   =================
> > > diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> > > index 5e6daa1c982f..1864c6a721b1 100644
> > > --- a/include/drm/drm_gem.h
> > > +++ b/include/drm/drm_gem.h
> > > @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
> > >   	 * drm_gem_dmabuf_vmap() helper.
> > >   	 *
> > >   	 * This callback is optional.
> > > +	 *
> > > +	 * See also struct dma_buf_ops.vmap
> > >   	 */
> > >   	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
> > > @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
> > >   	 * drm_gem_dmabuf_vunmap() helper.
> > >   	 *
> > >   	 * This callback is optional.
> > > +	 *
> > > +	 * See also struct dma_buf_ops.vunmap
> > >   	 */
> > >   	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
> > > diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> > > index cf72699cb2bc..dc81fdc01dda 100644
> > > --- a/include/linux/dma-buf.h
> > > +++ b/include/linux/dma-buf.h
> > > @@ -267,7 +267,52 @@ struct dma_buf_ops {
> > >   	 */
> > >   	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
> > > +	/**
> > > +	 * @vmap:
> > 
> > There's already a @vmap and @vunmap kerneldoc at the top comment, that
> > needs to be removed.
> > -Daniel
> > 
> > > +	 *
> > > +	 * Returns a virtual address for the buffer.
> > > +	 *
> > > +	 * Notes to callers:
> > > +	 *
> > > +	 * - Callers must hold the struct dma_buf.resv lock before calling
> > > +	 *   this interface.
> > > +	 *
> > > +	 * - Callers must provide means to prevent the mappings from going
> > > +	 *   stale, such as holding the reservation lock or providing a
> > > +	 *   move-notify callback to the exporter.
> > > +	 *
> > > +	 * Notes to implementors:
> > > +	 *
> > > +	 * - Implementations must expect pairs of @vmap and @vunmap to be
> > > +	 *   called frequently and should optimize for this case.
> > > +	 *
> > > +	 * - Implementations should avoid additional operations, such as
> > > +	 *   pinning.
> > > +	 *
> > > +	 * - Implementations may expect the caller to hold the dma-buf's
> > > +	 *   reservation lock to protect against concurrent calls and
> > > +	 *   relocation.
> > > +	 *
> > > +	 * - Implementations may provide additional guarantees, such as working
> > > +	 *   without holding the reservation lock.
> > > +	 *
> > > +	 * This callback is optional.
> > > +	 *
> > > +	 * Returns:
> > > +	 *
> > > +	 * 0 on success or a negative error code on failure.
> > > +	 */
> > >   	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
> > > +
> > > +	/**
> > > +	 * @vunmap:
> > > +	 *
> > > +	 * Releases the address previously returned by @vmap.
> > > +	 *
> > > +	 * This callback is optional.
> > > +	 *
> > > +	 * See also @vmap()
> > > +	 */
> > >   	void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
> > >   };
> > > -- 
> > > 2.29.2
> > > 
> > 
> 
> -- 
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
> 




-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
  2020-12-03 20:41         ` Daniel Vetter
@ 2020-12-04  8:32           ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-04  8:32 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: dri-devel, linaro-mm-sig, hdegoede, airlied, christian.koenig,
	linux-media



Hi

Am 03.12.20 um 21:41 schrieb Daniel Vetter:
> On Thu, Dec 03, 2020 at 07:59:04PM +0100, Thomas Zimmermann wrote:
>> Hi
>>
>> Am 03.12.20 um 16:26 schrieb Daniel Vetter:
>>> On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
>>>> Dma-buf's vmap and vunmap callbacks are undocumented and various
>>>> exporters currently have slightly different semantics for them. Add
>>>> documentation on how to implement and use these interfaces correctly.
>>>>
>>>> v2:
>>>> 	* document vmap semantics in struct dma_buf_ops
>>>> 	* add TODO item for reviewing and maybe fixing dma-buf exporters
>>>>
>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>
>>> I still don't think this works, we're breaking dma_buf_vmap for everyone
>>> else here.
>>
>> I removed the text on the importer. These notes for callers in the docs are
>> more or less a consequence of the exporter semantics.
> 
> Callers are importers, so I'm not seeing how that fixes anything.
> 
>> I thought we at least agreed on the exporter semantics in the other thread,
>> didn't we?
>>
>> What I'm trying to do is to write down some rules for exporters, even if not
>> all exporters follow them.
> 
> This is a standard interface, everyone needs to follow the same rules. And
> if they change, we need to make sure nothing breaks and we're not creating
> issues.
> 
> In the past the rule was that dma_buf_vmap was allowed to take the
> dma_resv_lock, and that the buffer should be pinned. Now some ttm drivers
> didn't ever bother with the pinning, and mostly got away with that because
> drm_prime helpers do the pinning by default at attach time, and most users
> do call dma_buf_attach.
> 
> But if you look through dma-buf docs nothing ever said you have to do a
> dummy attachment before you call dma_buf_vmap, that's just slightly crappy
> implementations that didn't blow up yet.

I had a patch for adding pin to radeon's implementation of vmap. [1] 
Christian told me to not do this; instead just get the lock in the fbdev 
code. His advice seems almost the opposite of what you're telling me here.

For the GEM VRAM helpers, that implicit pin in vmap gave me headaches. 
Because scanouts can only be done from VRAM, which is badly suited for 
exporting. So I ended up with an implicit pin that pins the buffer to 
whatever domain it is currently in. I got away with it because GEM VRAM BOs 
are not sharable among devices; fbdev is the only user of that 
functionality and only pins for short periods of time.

I suspect that fixing TTM-based drivers by adding an implicit pin would 
result in a similar situation. Whatever domain it ends up pinning to, some 
functionality might not be compatible with that.

> 
>> Given the circumstances, we should leave out this patch from the patchset.
> 
> So the de facto rules are already a big mess, but that's not a good excuse
> to make it worse.
> 
> What I had in mind is that we split dma_buf_vmap up into two variants:
> 
> - The current one, which should guarantee that the buffer is pinned.
>    Because that's what all current callers wanted, before the fbdev code
>    started allowing non-pinned buffers.

Can we add an explicit pin operation to dma_buf_vmap() to enforce the 
semantics?

Best regards
Thomas

[1] https://patchwork.freedesktop.org/patch/400054/?series=83765&rev=1

> 
> - The new one, which allows vmapping with just dma_resv locked, and should
>    have some caching in exporters.
> 
> Breaking code and then adding todos about that is kinda not so cool
> approach here imo.
> 
> Also I guess ttm_bo_vmap should have a check that either the buffer is
> pinned, or dma_resv_lock is held.
> 
> Cheers, Daniel
> 
> 
> 
>>
>> Best regards
>> Thomas
>>
>>>
>>>> ---
>>>>    Documentation/gpu/todo.rst | 15 +++++++++++++
>>>>    include/drm/drm_gem.h      |  4 ++++
>>>>    include/linux/dma-buf.h    | 45 ++++++++++++++++++++++++++++++++++++++
>>>>    3 files changed, 64 insertions(+)
>>>>
>>>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
>>>> index 009d8e6c7e3c..32bb797a84fc 100644
>>>> --- a/Documentation/gpu/todo.rst
>>>> +++ b/Documentation/gpu/todo.rst
>>>> @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vette
>>>>    Level: Intermediate
>>>> +Enforce rules for dma-buf vmap and pin ops
>>>> +------------------------------------------
>>>> +
>>>> +Exporter implementations of vmap and pin in struct dma_buf_ops (and consequently
>>>> +struct drm_gem_object_funcs) use a variety of locking semantics. Some rely on
>>>> +the caller holding the dma-buf's reservation lock, some do their own locking,
>>>> +some don't require any locking. VRAM helpers even used to pin as part of vmap.
>>>> +
>>>> +We need to review each exporter and enforce the documented rules.
>>>> +
>>>> +Contact: Christian König, Daniel Vetter, Thomas Zimmermann <tzimmermann@suse.de>
>>>> +
>>>> +Level: Advanced
>>>> +
>>>> +
>>>>    Core refactorings
>>>>    =================
>>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>>> index 5e6daa1c982f..1864c6a721b1 100644
>>>> --- a/include/drm/drm_gem.h
>>>> +++ b/include/drm/drm_gem.h
>>>> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
>>>>    	 * drm_gem_dmabuf_vmap() helper.
>>>>    	 *
>>>>    	 * This callback is optional.
>>>> +	 *
>>>> +	 * See also struct dma_buf_ops.vmap
>>>>    	 */
>>>>    	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>>>> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
>>>>    	 * drm_gem_dmabuf_vunmap() helper.
>>>>    	 *
>>>>    	 * This callback is optional.
>>>> +	 *
>>>> +	 * See also struct dma_buf_ops.vunmap
>>>>    	 */
>>>>    	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>>>> index cf72699cb2bc..dc81fdc01dda 100644
>>>> --- a/include/linux/dma-buf.h
>>>> +++ b/include/linux/dma-buf.h
>>>> @@ -267,7 +267,52 @@ struct dma_buf_ops {
>>>>    	 */
>>>>    	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>>>> +	/**
>>>> +	 * @vmap:
>>>
>>> There's already a @vmap and @vunamp kerneldoc at the top comment, that
>>> needs to be removed.
>>> -Daniel
>>>
>>>> +	 *
>>>> +	 * Returns a virtual address for the buffer.
>>>> +	 *
>>>> +	 * Notes to callers:
>>>> +	 *
>>>> +	 * - Callers must hold the struct dma_buf.resv lock before calling
>>>> +	 *   this interface.
>>>> +	 *
>>>> +	 * - Callers must provide means to prevent the mappings from going
>>>> +	 *   stale, such as holding the reservation lock or providing a
>>>> +	 *   move-notify callback to the exporter.
>>>> +	 *
>>>> +	 * Notes to implementors:
>>>> +	 *
>>>> +	 * - Implementations must expect pairs of @vmap and @vunmap to be
>>>> +	 *   called frequently and should optimize for this case.
>>>> +	 *
>>>> +	 * - Implementations should avoid additional operations, such as
>>>> +	 *   pinning.
>>>> +	 *
>>>> +	 * - Implementations may expect the caller to hold the dma-buf's
>>>> +	 *   reservation lock to protect against concurrent calls and
>>>> +	 *   relocation.
>>>> +	 *
>>>> +	 * - Implementations may provide additional guarantees, such as working
>>>> +	 *   without holding the reservation lock.
>>>> +	 *
>>>> +	 * This callback is optional.
>>>> +	 *
>>>> +	 * Returns:
>>>> +	 *
>>>> +	 * 0 on success or a negative error code on failure.
>>>> +	 */
>>>>    	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>> +
>>>> +	/**
>>>> +	 * @vunmap:
>>>> +	 *
>>>> +	 * Releases the address previously returned by @vmap.
>>>> +	 *
>>>> +	 * This callback is optional.
>>>> +	 *
>>>> +	 * See also @vmap()
>>>> +	 */
>>>>    	void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>>    };
>>>> -- 
>>>> 2.29.2
>>>>
>>>
>>
>> -- 
>> Thomas Zimmermann
>> Graphics Driver Developer
>> SUSE Software Solutions Germany GmbH
>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>> (HRB 36809, AG Nürnberg)
>> Geschäftsführer: Felix Imendörffer
>>
> 
> 
> 
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
@ 2020-12-04  8:32           ` Thomas Zimmermann
  0 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-04  8:32 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: dri-devel, linaro-mm-sig, hdegoede, airlied, christian.koenig,
	linux-media


[-- Attachment #1.1.1: Type: text/plain, Size: 8432 bytes --]

Hi

Am 03.12.20 um 21:41 schrieb Daniel Vetter:
> On Thu, Dec 03, 2020 at 07:59:04PM +0100, Thomas Zimmermann wrote:
>> Hi
>>
>> Am 03.12.20 um 16:26 schrieb Daniel Vetter:
>>> On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
>>>> Dma-buf's vmap and vunmap callbacks are undocumented and various
>>>> exporters currently have slightly different semantics for them. Add
>>>> documentation on how to implement and use these interfaces correctly.
>>>>
>>>> v2:
>>>> 	* document vmap semantics in struct dma_buf_ops
>>>> 	* add TODO item for reviewing and maybe fixing dma-buf exporters
>>>>
>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>
>>> I still don't think this works, we're breaking dma_buf_vmap for everyone
>>> else here.
>>
>> I removed the text on the importer. These notes for callers in the docs are
>> more or less a consequence of the exporter semantics.
> 
> Callers are importers, so I'm not seeing how that fixes anything.
> 
>> I thought we at least agreed on the exporter semantics in the other thread,
>> didn't we?
>>
>> What I'm trying to do is to write dome some rules for exporters, even if not
>> all exporters follow them.
> 
> This is a standard interface, everyone needs to follow the same rules. And
> if they change, we need to make sure nothing breaks and we're not creating
> issues.
> 
> In the past the rule was the dma_buf_vmap was allowed to take the
> dma_resv_lock, and that the buffer should be pinned. Now some ttm drivers
> didn't ever bother with the pinning, and mostly got away with that because
> drm_prime helpers do the pinning by default at attach time, and most users
> do call dma_buf_attach.
> 
> But if you look through dma-buf docs nothing ever said you have to do a
> dummy attachment before you call dma_buf_vmap, that's just slightly crappy
> implementations that didn't blow up yet.

I had a patch for adding pin to radeon's implementation of vmap. [1] 
Christian told me to not do this; instead just get the lock in the fbdev 
code. His advise almost seems the opposite of what you're telling me here.

For the GEM VRAM helpers, that implicit pin in vmap gave me headaches. 
Because scanouts can only be done from VRAM, which is badly suited for 
exporting. So I ended up with an implicit pin that pins the buffer to 
whatever domain it currently is. I got away with it because GEM VRAM BOs 
are not sharable among devices; fbdev is the only user of that 
functionality and only pins for short periods of time.

I suspect that fixing TTM-based drivers by adding an implicit pin would 
result in a similar situation. Whatever domain it ends up pinning, some 
functionality might not be compatible with that.

> 
>> Given the circumstances, we should leave out this patch from the patchset.
> 
> So the defacto rules are already a big mess, but that's not a good excuse
> to make it worse.
> 
> What I had in mind is that we split dma_buf_vmap up into two variants:
> 
> - The current one, which should guarantee that the buffer is pinned.
>    Because that's what all current callers wanted, before the fbdev code
>    started allowing non-pinned buffers.

Can we add an explicit pin operation to dma_buf_vmap() to enforce the 
semantics?

Best regards
Thomas

[1] https://patchwork.freedesktop.org/patch/400054/?series=83765&rev=1

> 
> - The new one, which allows vmapping with just dma_resv locked, and should
>    have some caching in exporters.
> 
> Breaking code and then adding todos about that is kinda not so cool
> approach here imo.
> 
> Also I guess ttm_bo_vmap should have a check that either the buffer is
> pinned, or dma_resv_lock is held.
> 
> Cheers, Daniel
> 
> 
> 
>>
>> Best regards
>> Thomas
>>
>>>
>>>> ---
>>>>    Documentation/gpu/todo.rst | 15 +++++++++++++
>>>>    include/drm/drm_gem.h      |  4 ++++
>>>>    include/linux/dma-buf.h    | 45 ++++++++++++++++++++++++++++++++++++++
>>>>    3 files changed, 64 insertions(+)
>>>>
>>>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
>>>> index 009d8e6c7e3c..32bb797a84fc 100644
>>>> --- a/Documentation/gpu/todo.rst
>>>> +++ b/Documentation/gpu/todo.rst
>>>> @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vette
>>>>    Level: Intermediate
>>>> +Enforce rules for dma-buf vmap and pin ops
>>>> +------------------------------------------
>>>> +
>>>> +Exporter implementations of vmap and pin in struct dma_buf_ops (and consequently
>>>> +struct drm_gem_object_funcs) use a variety of locking semantics. Some rely on
>>>> +the caller holding the dma-buf's reservation lock, some do their own locking,
>>>> +some don't require any locking. VRAM helpers even used to pin as part of vmap.
>>>> +
>>>> +We need to review each exporter and enforce the documented rules.
>>>> +
>>>> +Contact: Christian König, Daniel Vetter, Thomas Zimmermann <tzimmermann@suse.de>
>>>> +
>>>> +Level: Advanced
>>>> +
>>>> +
>>>>    Core refactorings
>>>>    =================
>>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>>> index 5e6daa1c982f..1864c6a721b1 100644
>>>> --- a/include/drm/drm_gem.h
>>>> +++ b/include/drm/drm_gem.h
>>>> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
>>>>    	 * drm_gem_dmabuf_vmap() helper.
>>>>    	 *
>>>>    	 * This callback is optional.
>>>> +	 *
>>>> +	 * See also struct dma_buf_ops.vmap
>>>>    	 */
>>>>    	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>>>> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
>>>>    	 * drm_gem_dmabuf_vunmap() helper.
>>>>    	 *
>>>>    	 * This callback is optional.
>>>> +	 *
>>>> +	 * See also struct dma_buf_ops.vunmap
>>>>    	 */
>>>>    	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>>>> index cf72699cb2bc..dc81fdc01dda 100644
>>>> --- a/include/linux/dma-buf.h
>>>> +++ b/include/linux/dma-buf.h
>>>> @@ -267,7 +267,52 @@ struct dma_buf_ops {
>>>>    	 */
>>>>    	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>>>> +	/**
>>>> +	 * @vmap:
>>>
>>> There's already a @vmap and @vunamp kerneldoc at the top comment, that
>>> needs to be removed.
>>> -Daniel
>>>
>>>> +	 *
>>>> +	 * Returns a virtual address for the buffer.
>>>> +	 *
>>>> +	 * Notes to callers:
>>>> +	 *
>>>> +	 * - Callers must hold the struct dma_buf.resv lock before calling
>>>> +	 *   this interface.
>>>> +	 *
>>>> +	 * - Callers must provide means to prevent the mappings from going
>>>> +	 *   stale, such as holding the reservation lock or providing a
>>>> +	 *   move-notify callback to the exporter.
>>>> +	 *
>>>> +	 * Notes to implementors:
>>>> +	 *
>>>> +	 * - Implementations must expect pairs of @vmap and @vunmap to be
>>>> +	 *   called frequently and should optimize for this case.
>>>> +	 *
>>>> +	 * - Implementations should avoid additional operations, such as
>>>> +	 *   pinning.
>>>> +	 *
>>>> +	 * - Implementations may expect the caller to hold the dma-buf's
>>>> +	 *   reservation lock to protect against concurrent calls and
>>>> +	 *   relocation.
>>>> +	 *
>>>> +	 * - Implementations may provide additional guarantees, such as working
>>>> +	 *   without holding the reservation lock.
>>>> +	 *
>>>> +	 * This callback is optional.
>>>> +	 *
>>>> +	 * Returns:
>>>> +	 *
>>>> +	 * 0 on success or a negative error code on failure.
>>>> +	 */
>>>>    	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>> +
>>>> +	/**
>>>> +	 * @vunmap:
>>>> +	 *
>>>> +	 * Releases the address previously returned by @vmap.
>>>> +	 *
>>>> +	 * This callback is optional.
>>>> +	 *
>>>> +	 * See also @vmap()
>>>> +	 */
>>>>    	void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>>    };
>>>> -- 
>>>> 2.29.2
>>>>
>>>
>>
>> -- 
>> Thomas Zimmermann
>> Graphics Driver Developer
>> SUSE Software Solutions Germany GmbH
>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>> (HRB 36809, AG Nürnberg)
>> Geschäftsführer: Felix Imendörffer
>>
> 
> 
> 
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
  2020-12-04  8:32           ` Thomas Zimmermann
@ 2020-12-04  8:47             ` Christian König
  -1 siblings, 0 replies; 36+ messages in thread
From: Christian König @ 2020-12-04  8:47 UTC (permalink / raw)
  To: Thomas Zimmermann, Daniel Vetter
  Cc: dri-devel, linaro-mm-sig, hdegoede, airlied, linux-media

Am 04.12.20 um 09:32 schrieb Thomas Zimmermann:
> Hi
>
> Am 03.12.20 um 21:41 schrieb Daniel Vetter:
>> On Thu, Dec 03, 2020 at 07:59:04PM +0100, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> Am 03.12.20 um 16:26 schrieb Daniel Vetter:
>>>> On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
>>>>> Dma-buf's vmap and vunmap callbacks are undocumented and various
>>>>> exporters currently have slightly different semantics for them. Add
>>>>> documentation on how to implement and use these interfaces correctly.
>>>>>
>>>>> v2:
>>>>>     * document vmap semantics in struct dma_buf_ops
>>>>>     * add TODO item for reviewing and maybe fixing dma-buf exporters
>>>>>
>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>
>>>> I still don't think this works, we're breaking dma_buf_vmap for 
>>>> everyone
>>>> else here.
>>>
>>> I removed the text on the importer. These notes for callers in the 
>>> docs are
>>> more or less a consequence of the exporter semantics.
>>
>> Callers are importers, so I'm not seeing how that fixes anything.
>>
>>> I thought we at least agreed on the exporter semantics in the other 
>>> thread,
>>> didn't we?
>>>
>>> What I'm trying to do is to write dome some rules for exporters, 
>>> even if not
>>> all exporters follow them.
>>
>> This is a standard interface, everyone needs to follow the same 
>> rules. And
>> if they change, we need to make sure nothing breaks and we're not 
>> creating
>> issues.
>>
>> In the past the rule was the dma_buf_vmap was allowed to take the
>> dma_resv_lock, and that the buffer should be pinned. Now some ttm 
>> drivers
>> didn't ever bother with the pinning, and mostly got away with that 
>> because
>> drm_prime helpers do the pinning by default at attach time, and most 
>> users
>> do call dma_buf_attach.
>>
>> But if you look through dma-buf docs nothing ever said you have to do a
>> dummy attachment before you call dma_buf_vmap, that's just slightly 
>> crappy
>> implementations that didn't blow up yet.
>
> I had a patch for adding pin to radeon's implementation of vmap. [1] 
> Christian told me to not do this; instead just get the lock in the 
> fbdev code. His advise almost seems the opposite of what you're 
> telling me here.

I think what Daniel suggests here is that we need to smoothly transition 
the code from making assumptions to having a straight interface where 
importers explicitly say when stuff is locked and when stuff is pinned.

I've started this with the attach interface by adding a new dynamic 
approach to that, but you probably need to carry on here with that for 
vmap as well.

When that is done we can migrate every exporter over to the new dynamic 
approach.
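
In code, the explicit-locking side of such an interface could look roughly like the
sketch below. This is only an illustration of the direction discussed here, not an
existing helper: the calls follow the dma-buf/dma-resv API of the time
(dma_resv_lock(), dma_buf_vmap() taking a struct dma_buf_map), and the eventual
interface may differ.

```c
/* Sketch: the importer makes the locking explicit instead of relying on
 * an implicit pin inside the exporter's vmap implementation. The mapping
 * is only guaranteed to stay valid while the reservation lock is held.
 */
struct dma_buf_map map;
int ret;

ret = dma_resv_lock(dmabuf->resv, NULL);
if (ret)
	return ret;

ret = dma_buf_vmap(dmabuf, &map); /* exporter may rely on resv being held */
if (ret) {
	dma_resv_unlock(dmabuf->resv);
	return ret;
}

/* ... access the mapping; the BO cannot be relocated while resv is held ... */

dma_buf_vunmap(dmabuf, &map);
dma_resv_unlock(dmabuf->resv);
```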

>
> For the GEM VRAM helpers, that implicit pin in vmap gave me headaches. 
> Because scanouts can only be done from VRAM, which is badly suited for 
> exporting. So I ended up with an implicit pin that pins the buffer to 
> whatever domain it currently is. I got away with it because GEM VRAM 
> BOs are not sharable among devices; fbdev is the only user of that 
> functionality and only pins for short periods of time.
>
> I suspect that fixing TTM-based drivers by adding an implicit pin 
> would result in a similar situation. Whatever domain it ends up 
> pinning, some functionality might not be compatible with that.

Correct, exactly that's the problem.

>
>>
>>> Given the circumstances, we should leave out this patch from the 
>>> patchset.
>>
>> So the defacto rules are already a big mess, but that's not a good 
>> excuse
>> to make it worse.
>>
>> What I had in mind is that we split dma_buf_vmap up into two variants:
>>
>> - The current one, which should guarantee that the buffer is pinned.
>>    Because that's what all current callers wanted, before the fbdev code
>>    started allowing non-pinned buffers.
>
> Can we add an explicit pin operation to dma_buf_vmap() to enforce the 
> semantics?

At least I would be fine with that. For now amdgpu is the only exporter 
who implements the explicit pin/unpin semantics anyway.

Regards,
Christian.

>
> Best regards
> Thomas
>
> [1] https://patchwork.freedesktop.org/patch/400054/?series=83765&rev=1
>
>>
>> - The new one, which allows vmapping with just dma_resv locked, and 
>> should
>>    have some caching in exporters.
>>
>> Breaking code and then adding todos about that is kinda not so cool
>> approach here imo.
>>
>> Also I guess ttm_bo_vmap should have a check that either the buffer is
>> pinned, or dma_resv_lock is held.
>>
>> Cheers, Daniel
>>
>>
>>
>>>
>>> Best regards
>>> Thomas
>>>
>>>>
>>>>> ---
>>>>>    Documentation/gpu/todo.rst | 15 +++++++++++++
>>>>>    include/drm/drm_gem.h      |  4 ++++
>>>>>    include/linux/dma-buf.h    | 45 
>>>>> ++++++++++++++++++++++++++++++++++++++
>>>>>    3 files changed, 64 insertions(+)
>>>>>
>>>>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
>>>>> index 009d8e6c7e3c..32bb797a84fc 100644
>>>>> --- a/Documentation/gpu/todo.rst
>>>>> +++ b/Documentation/gpu/todo.rst
>>>>> @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann 
>>>>> <tzimmermann@suse.de>, Christian König, Daniel Vette
>>>>>    Level: Intermediate
>>>>> +Enforce rules for dma-buf vmap and pin ops
>>>>> +------------------------------------------
>>>>> +
>>>>> +Exporter implementations of vmap and pin in struct dma_buf_ops 
>>>>> (and consequently
>>>>> +struct drm_gem_object_funcs) use a variety of locking semantics. 
>>>>> Some rely on
>>>>> +the caller holding the dma-buf's reservation lock, some do their 
>>>>> own locking,
>>>>> +some don't require any locking. VRAM helpers even used to pin as 
>>>>> part of vmap.
>>>>> +
>>>>> +We need to review each exporter and enforce the documented rules.
>>>>> +
>>>>> +Contact: Christian König, Daniel Vetter, Thomas Zimmermann 
>>>>> <tzimmermann@suse.de>
>>>>> +
>>>>> +Level: Advanced
>>>>> +
>>>>> +
>>>>>    Core refactorings
>>>>>    =================
>>>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>>>> index 5e6daa1c982f..1864c6a721b1 100644
>>>>> --- a/include/drm/drm_gem.h
>>>>> +++ b/include/drm/drm_gem.h
>>>>> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
>>>>>         * drm_gem_dmabuf_vmap() helper.
>>>>>         *
>>>>>         * This callback is optional.
>>>>> +     *
>>>>> +     * See also struct dma_buf_ops.vmap
>>>>>         */
>>>>>        int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map 
>>>>> *map);
>>>>> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
>>>>>         * drm_gem_dmabuf_vunmap() helper.
>>>>>         *
>>>>>         * This callback is optional.
>>>>> +     *
>>>>> +     * See also struct dma_buf_ops.vunmap
>>>>>         */
>>>>>        void (*vunmap)(struct drm_gem_object *obj, struct 
>>>>> dma_buf_map *map);
>>>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>>>>> index cf72699cb2bc..dc81fdc01dda 100644
>>>>> --- a/include/linux/dma-buf.h
>>>>> +++ b/include/linux/dma-buf.h
>>>>> @@ -267,7 +267,52 @@ struct dma_buf_ops {
>>>>>         */
>>>>>        int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>>>>> +    /**
>>>>> +     * @vmap:
>>>>
>>>> There's already a @vmap and @vunamp kerneldoc at the top comment, that
>>>> needs to be removed.
>>>> -Daniel
>>>>
>>>>> +     *
>>>>> +     * Returns a virtual address for the buffer.
>>>>> +     *
>>>>> +     * Notes to callers:
>>>>> +     *
>>>>> +     * - Callers must hold the struct dma_buf.resv lock before 
>>>>> calling
>>>>> +     *   this interface.
>>>>> +     *
>>>>> +     * - Callers must provide means to prevent the mappings from 
>>>>> going
>>>>> +     *   stale, such as holding the reservation lock or providing a
>>>>> +     *   move-notify callback to the exporter.
>>>>> +     *
>>>>> +     * Notes to implementors:
>>>>> +     *
>>>>> +     * - Implementations must expect pairs of @vmap and @vunmap 
>>>>> to be
>>>>> +     *   called frequently and should optimize for this case.
>>>>> +     *
>>>>> +     * - Implementations should avoid additional operations, such as
>>>>> +     *   pinning.
>>>>> +     *
>>>>> +     * - Implementations may expect the caller to hold the dma-buf's
>>>>> +     *   reservation lock to protect against concurrent calls and
>>>>> +     *   relocation.
>>>>> +     *
>>>>> +     * - Implementations may provide additional guarantees, such 
>>>>> as working
>>>>> +     *   without holding the reservation lock.
>>>>> +     *
>>>>> +     * This callback is optional.
>>>>> +     *
>>>>> +     * Returns:
>>>>> +     *
>>>>> +     * 0 on success or a negative error code on failure.
>>>>> +     */
>>>>>        int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>>> +
>>>>> +    /**
>>>>> +     * @vunmap:
>>>>> +     *
>>>>> +     * Releases the address previously returned by @vmap.
>>>>> +     *
>>>>> +     * This callback is optional.
>>>>> +     *
>>>>> +     * See also @vmap()
>>>>> +     */
>>>>>        void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map 
>>>>> *map);
>>>>>    };
>>>>> -- 
>>>>> 2.29.2
>>>>>
>>>>
>>>
>>> -- 
>>> Thomas Zimmermann
>>> Graphics Driver Developer
>>> SUSE Software Solutions Germany GmbH
>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>> (HRB 36809, AG Nürnberg)
>>> Geschäftsführer: Felix Imendörffer
>>>
>>
>>
>>
>>
>



* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
  2020-12-04  8:47             ` Christian König
@ 2020-12-09  0:13               ` Daniel Vetter
  -1 siblings, 0 replies; 36+ messages in thread
From: Daniel Vetter @ 2020-12-09  0:13 UTC (permalink / raw)
  To: Christian König
  Cc: Thomas Zimmermann, Daniel Vetter, dri-devel, linaro-mm-sig,
	hdegoede, airlied, linux-media

On Fri, Dec 04, 2020 at 09:47:08AM +0100, Christian König wrote:
> Am 04.12.20 um 09:32 schrieb Thomas Zimmermann:
> > Hi
> > 
> > Am 03.12.20 um 21:41 schrieb Daniel Vetter:
> > > On Thu, Dec 03, 2020 at 07:59:04PM +0100, Thomas Zimmermann wrote:
> > > > Hi
> > > > 
> > > > Am 03.12.20 um 16:26 schrieb Daniel Vetter:
> > > > > On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
> > > > > > Dma-buf's vmap and vunmap callbacks are undocumented and various
> > > > > > exporters currently have slightly different semantics for them. Add
> > > > > > documentation on how to implement and use these interfaces correctly.
> > > > > > 
> > > > > > v2:
> > > > > >     * document vmap semantics in struct dma_buf_ops
> > > > > >     * add TODO item for reviewing and maybe fixing dma-buf exporters
> > > > > > 
> > > > > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > > > 
> > > > > I still don't think this works, we're breaking dma_buf_vmap
> > > > > for everyone
> > > > > else here.
> > > > 
> > > > I removed the text on the importer. These notes for callers in
> > > > the docs are
> > > > more or less a consequence of the exporter semantics.
> > > 
> > > Callers are importers, so I'm not seeing how that fixes anything.
> > > 
> > > > I thought we at least agreed on the exporter semantics in the
> > > > other thread,
> > > > didn't we?
> > > > 
> > > > What I'm trying to do is to write dome some rules for exporters,
> > > > even if not
> > > > all exporters follow them.
> > > 
> > > This is a standard interface, everyone needs to follow the same
> > > rules. And
> > > if they change, we need to make sure nothing breaks and we're not
> > > creating
> > > issues.
> > > 
> > > In the past the rule was the dma_buf_vmap was allowed to take the
> > > dma_resv_lock, and that the buffer should be pinned. Now some ttm
> > > drivers
> > > didn't ever bother with the pinning, and mostly got away with that
> > > because
> > > drm_prime helpers do the pinning by default at attach time, and most
> > > users
> > > do call dma_buf_attach.
> > > 
> > > But if you look through dma-buf docs nothing ever said you have to do a
> > > dummy attachment before you call dma_buf_vmap, that's just slightly
> > > crappy
> > > implementations that didn't blow up yet.
> > 
> > I had a patch for adding pin to radeon's implementation of vmap. [1]
> > Christian told me to not do this; instead just get the lock in the fbdev
> > code. His advise almost seems the opposite of what you're telling me
> > here.
> 
> I think what Daniel suggests here is that we need to smoothly transition the
> code from making assumptions to having a straight interface where importers
> explicitly say when stuff is locked and when stuff is pinned.
> 
> I've started this with the attach interface by adding a new dynamic approach
> to that, but you probably need to carry on here with that for vmap as well.
> 
> When that is done we can migrate every exporter over to the new dynamic
> approach.
> 
> > 
> > For the GEM VRAM helpers, that implicit pin in vmap gave me headaches.
> > Because scanouts can only be done from VRAM, which is badly suited for
> > exporting. So I ended up with an implicit pin that pins the buffer to
> > whatever domain it currently is. I got away with it because GEM VRAM BOs
> > are not sharable among devices; fbdev is the only user of that
> > functionality and only pins for short periods of time.
> > 
> > I suspect that fixing TTM-based drivers by adding an implicit pin would
> > result in a similar situation. Whatever domain it ends up pinning, some
> > functionality might not be compatible with that.
> 
> Correct, exactly that's the problem.
> 
> > 
> > > 
> > > > Given the circumstances, we should leave out this patch from the
> > > > patchset.
> > > 
> > > So the defacto rules are already a big mess, but that's not a good
> > > excuse
> > > to make it worse.
> > > 
> > > What I had in mind is that we split dma_buf_vmap up into two variants:
> > > 
> > > - The current one, which should guarantee that the buffer is pinned.
> > >    Because that's what all current callers wanted, before the fbdev code
> > >    started allowing non-pinned buffers.
> > 
> > Can we add an explicit pin operation to dma_buf_vmap() to enforce the
> > semantics?
> 
> At least I would be fine with that. For now amdgpu is the only exporter who
> implements the explicit pin/unpin semantics anyway.

Yup, I think that makes sense (if it works). Maybe we could use something
like:

a) dma_buf pin exists, driver is dynamic. This means dma_buf_vmap needs to
first pin, then call ->vmap. dma_buf_vmap_local otoh can directly call
->vmap since the exporter relies on either a pin or dma_resv_lock.

b) dma_buf pin not implemented, driver is a legacy pile. dma_buf_vmap will
pin (somewhere at least, or rely on some implicit pin), dma_buf_vmap_local
doesn't work and should fail.

I think for less transition work fbdev helpers could first try
dma_resv_lock + dma_buf_vmap_local, if that fails, drop the dma_resv_lock
and do the pinning dma_buf_vmap. That way we don't have to convert shmem
helpers over to dma_resv locking, which should help.
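
That fbdev fallback could be sketched roughly as follows. Note this is
hypothetical: dma_buf_vmap_local() does not exist at this point, its name and
exact semantics are exactly what is being proposed in this thread, and the
helper shown here is an illustration only.

```c
/* Hypothetical sketch of the fbdev transition path: prefer the proposed
 * lock-based dma_buf_vmap_local(), and fall back to the pinning
 * dma_buf_vmap() for exporters that do not support it.
 */
static int fbdev_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map,
			  bool *is_local)
{
	int ret;

	ret = dma_resv_lock(dmabuf->resv, NULL);
	if (ret)
		return ret;

	ret = dma_buf_vmap_local(dmabuf, map); /* proposed interface */
	if (!ret) {
		*is_local = true; /* valid only while resv stays held */
		return 0;
	}

	/* Exporter cannot vmap under the resv lock alone; drop the lock
	 * and take the legacy path, which pins the buffer.
	 */
	dma_resv_unlock(dmabuf->resv);
	*is_local = false;

	return dma_buf_vmap(dmabuf, map);
}
```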

And ttm drivers would do the new clean interface, so at least everyone
using dma_resv today is all fine. Intel's conversion to dma_resv lock is
in-flight, but that needs a conversion to the dynamic interface anyway,
the current code splats. And dynamic means explicit pin/unpin
callbacks, so should be good too.
-Daniel

> 
> Regards,
> Christian.
> 
> > 
> > Best regards
> > Thomas
> > 
> > [1] https://patchwork.freedesktop.org/patch/400054/?series=83765&rev=1
> > 
> > > 
> > > - The new one, which allows vmapping with just dma_resv locked, and
> > > should
> > >    have some caching in exporters.
> > > 
> > > Breaking code and then adding todos about that is kinda not so cool
> > > approach here imo.
> > > 
> > > Also I guess ttm_bo_vmap should have a check that either the buffer is
> > > pinned, or dma_resv_lock is held.
> > > 
> > > Cheers, Daniel
> > > 
> > > 
> > > 
> > > > 
> > > > Best regards
> > > > Thomas
> > > > 
> > > > > 
> > > > > > ---
> > > > > >    Documentation/gpu/todo.rst | 15 +++++++++++++
> > > > > >    include/drm/drm_gem.h      |  4 ++++
> > > > > >    include/linux/dma-buf.h    | 45
> > > > > > ++++++++++++++++++++++++++++++++++++++
> > > > > >    3 files changed, 64 insertions(+)
> > > > > > 
> > > > > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > > > > > index 009d8e6c7e3c..32bb797a84fc 100644
> > > > > > --- a/Documentation/gpu/todo.rst
> > > > > > +++ b/Documentation/gpu/todo.rst
> > > > > > @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann
> > > > > > <tzimmermann@suse.de>, Christian König, Daniel Vette
> > > > > >    Level: Intermediate
> > > > > > +Enforce rules for dma-buf vmap and pin ops
> > > > > > +------------------------------------------
> > > > > > +
> > > > > > +Exporter implementations of vmap and pin in struct
> > > > > > dma_buf_ops (and consequently
> > > > > > +struct drm_gem_object_funcs) use a variety of locking
> > > > > > semantics. Some rely on
> > > > > > +the caller holding the dma-buf's reservation lock, some
> > > > > > do their own locking,
> > > > > > +some don't require any locking. VRAM helpers even used
> > > > > > to pin as part of vmap.
> > > > > > +
> > > > > > +We need to review each exporter and enforce the documented rules.
> > > > > > +
> > > > > > +Contact: Christian König, Daniel Vetter, Thomas
> > > > > > Zimmermann <tzimmermann@suse.de>
> > > > > > +
> > > > > > +Level: Advanced
> > > > > > +
> > > > > > +
> > > > > >    Core refactorings
> > > > > >    =================
> > > > > > diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> > > > > > index 5e6daa1c982f..1864c6a721b1 100644
> > > > > > --- a/include/drm/drm_gem.h
> > > > > > +++ b/include/drm/drm_gem.h
> > > > > > @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
> > > > > >         * drm_gem_dmabuf_vmap() helper.
> > > > > >         *
> > > > > >         * This callback is optional.
> > > > > > +     *
> > > > > > +     * See also struct dma_buf_ops.vmap
> > > > > >         */
> > > > > >        int (*vmap)(struct drm_gem_object *obj, struct
> > > > > > dma_buf_map *map);
> > > > > > @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
> > > > > >         * drm_gem_dmabuf_vunmap() helper.
> > > > > >         *
> > > > > >         * This callback is optional.
> > > > > > +     *
> > > > > > +     * See also struct dma_buf_ops.vunmap
> > > > > >         */
> > > > > >        void (*vunmap)(struct drm_gem_object *obj, struct
> > > > > > dma_buf_map *map);
> > > > > > diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> > > > > > index cf72699cb2bc..dc81fdc01dda 100644
> > > > > > --- a/include/linux/dma-buf.h
> > > > > > +++ b/include/linux/dma-buf.h
> > > > > > @@ -267,7 +267,52 @@ struct dma_buf_ops {
> > > > > >         */
> > > > > >        int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
> > > > > > +    /**
> > > > > > +     * @vmap:
> > > > > 
> > > > > There's already a @vmap and @vunamp kerneldoc at the top comment, that
> > > > > needs to be removed.
> > > > > -Daniel
> > > > > 
> > > > > > +     *
> > > > > > +     * Returns a virtual address for the buffer.
> > > > > > +     *
> > > > > > +     * Notes to callers:
> > > > > > +     *
> > > > > > +     * - Callers must hold the struct dma_buf.resv lock
> > > > > > before calling
> > > > > > +     *   this interface.
> > > > > > +     *
> > > > > > +     * - Callers must provide means to prevent the
> > > > > > mappings from going
> > > > > > +     *   stale, such as holding the reservation lock or providing a
> > > > > > +     *   move-notify callback to the exporter.
> > > > > > +     *
> > > > > > +     * Notes to implementors:
> > > > > > +     *
> > > > > > +     * - Implementations must expect pairs of @vmap and
> > > > > > @vunmap to be
> > > > > > +     *   called frequently and should optimize for this case.
> > > > > > +     *
> > > > > > +     * - Implementations should avoid additional operations, such as
> > > > > > +     *   pinning.
> > > > > > +     *
> > > > > > +     * - Implementations may expect the caller to hold the dma-buf's
> > > > > > +     *   reservation lock to protect against concurrent calls and
> > > > > > +     *   relocation.
> > > > > > +     *
> > > > > > +     * - Implementations may provide additional
> > > > > > guarantees, such as working
> > > > > > +     *   without holding the reservation lock.
> > > > > > +     *
> > > > > > +     * This callback is optional.
> > > > > > +     *
> > > > > > +     * Returns:
> > > > > > +     *
> > > > > > +     * 0 on success or a negative error code on failure.
> > > > > > +     */
> > > > > >        int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
> > > > > > +
> > > > > > +    /**
> > > > > > +     * @vunmap:
> > > > > > +     *
> > > > > > +     * Releases the address previously returned by @vmap.
> > > > > > +     *
> > > > > > +     * This callback is optional.
> > > > > > +     *
> > > > > > +     * See also @vmap()
> > > > > > +     */
> > > > > >        void (*vunmap)(struct dma_buf *dmabuf, struct
> > > > > > dma_buf_map *map);
> > > > > >    };
> > > > > > -- 
> > > > > > 2.29.2
> > > > > > 
> > > > > 
> > > > 
> > > > -- 
> > > > Thomas Zimmermann
> > > > Graphics Driver Developer
> > > > SUSE Software Solutions Germany GmbH
> > > > Maxfeldstr. 5, 90409 Nürnberg, Germany
> > > > (HRB 36809, AG Nürnberg)
> > > > Geschäftsführer: Felix Imendörffer
> > > > 
> > > 
> > > 
> > > 
> > > 
> > 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
@ 2020-12-09  0:13               ` Daniel Vetter
  0 siblings, 0 replies; 36+ messages in thread
From: Daniel Vetter @ 2020-12-09  0:13 UTC (permalink / raw)
  To: Christian König
  Cc: Thomas Zimmermann, dri-devel, linaro-mm-sig, hdegoede, airlied,
	linux-media

On Fri, Dec 04, 2020 at 09:47:08AM +0100, Christian König wrote:
> Am 04.12.20 um 09:32 schrieb Thomas Zimmermann:
> > Hi
> > 
> > Am 03.12.20 um 21:41 schrieb Daniel Vetter:
> > > On Thu, Dec 03, 2020 at 07:59:04PM +0100, Thomas Zimmermann wrote:
> > > > Hi
> > > > 
> > > > Am 03.12.20 um 16:26 schrieb Daniel Vetter:
> > > > > On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
> > > > > > Dma-buf's vmap and vunmap callbacks are undocumented and various
> > > > > > exporters currently have slightly different semantics for them. Add
> > > > > > documentation on how to implement and use these interfaces correctly.
> > > > > > 
> > > > > > v2:
> > > > > >     * document vmap semantics in struct dma_buf_ops
> > > > > >     * add TODO item for reviewing and maybe fixing dma-buf exporters
> > > > > > 
> > > > > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > > > 
> > > > > I still don't think this works, we're breaking dma_buf_vmap
> > > > > for everyone
> > > > > else here.
> > > > 
> > > > I removed the text on the importer. These notes for callers in
> > > > the docs are
> > > > more or less a consequence of the exporter semantics.
> > > 
> > > Callers are importers, so I'm not seeing how that fixes anything.
> > > 
> > > > I thought we at least agreed on the exporter semantics in the
> > > > other thread,
> > > > didn't we?
> > > > 
> > > > What I'm trying to do is to write down some rules for exporters,
> > > > even if not
> > > > all exporters follow them.
> > > 
> > > This is a standard interface, everyone needs to follow the same
> > > rules. And
> > > if they change, we need to make sure nothing breaks and we're not
> > > creating
> > > issues.
> > > 
> > > In the past the rule was that dma_buf_vmap was allowed to take the
> > > dma_resv_lock, and that the buffer should be pinned. Now some ttm
> > > drivers
> > > didn't ever bother with the pinning, and mostly got away with that
> > > because
> > > drm_prime helpers do the pinning by default at attach time, and most
> > > users
> > > do call dma_buf_attach.
> > > 
> > > But if you look through dma-buf docs nothing ever said you have to do a
> > > dummy attachment before you call dma_buf_vmap, that's just slightly
> > > crappy
> > > implementations that didn't blow up yet.
> > 
> > I had a patch for adding pin to radeon's implementation of vmap. [1]
> > Christian told me to not do this; instead just get the lock in the fbdev
> > code. His advice almost seems the opposite of what you're telling me
> > here.
> 
> I think what Daniel suggests here is that we need to smoothly transition the
> code from making assumptions to having a straight interface where importers
> explicitly say when stuff is locked and when stuff is pinned.
> 
> I've started this with the attach interface by adding a new dynamic approach
> to that, but you probably need to carry on here with that for vmap as well.
> 
> When that is done we can migrate every exporter over to the new dynamic
> approach.
> 
> > 
> > For the GEM VRAM helpers, that implicit pin in vmap gave me headaches.
> > Because scanouts can only be done from VRAM, which is badly suited for
> > exporting. So I ended up with an implicit pin that pins the buffer to
> > whatever domain it currently is. I got away with it because GEM VRAM BOs
> > are not sharable among devices; fbdev is the only user of that
> > functionality and only pins for short periods of time.
> > 
> > I suspect that fixing TTM-based drivers by adding an implicit pin would
> > result in a similar situation. Whatever domain it ends up pinning, some
> > functionality might not be compatible with that.
> 
> Correct, exactly that's the problem.
> 
> > 
> > > 
> > > > Given the circumstances, we should leave out this patch from the
> > > > patchset.
> > > 
> > > So the defacto rules are already a big mess, but that's not a good
> > > excuse
> > > to make it worse.
> > > 
> > > What I had in mind is that we split dma_buf_vmap up into two variants:
> > > 
> > > - The current one, which should guarantee that the buffer is pinned.
> > >    Because that's what all current callers wanted, before the fbdev code
> > >    started allowing non-pinned buffers.
> > 
> > Can we add an explicit pin operation to dma_buf_vmap() to enforce the
> > semantics?
> 
> At least I would be fine with that. For now amdgpu is the only exporter who
> implements the explicit pin/unpin semantics anyway.

Yup, I think that makes sense (if it works). Maybe we could use something
like:

a) dma_buf pin exists, driver is dynamic. This means dma_buf_vmap needs to
first pin, then call ->vmap. dma_buf_vmap_local otoh can directly call
->vmap since the exporter relies on either a pin or dma_resv_lock.

b) dma_buf pin is not implemented, the driver is a legacy pile. dma_buf_vmap
will pin (somewhere at least, or rely on some implicit pin), and
dma_buf_vmap_local doesn't work and should fail.
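
The a)/b) dispatch could be sketched roughly as below. This is a toy
user-space model of the proposed rules, not kernel code: all types and
helpers are stand-ins for struct dma_buf, struct dma_buf_ops and struct
dma_buf_map, dma_resv locking is elided, and dma_buf_vmap_local does not
exist upstream at this point.

```c
#include <errno.h>

/* Stand-ins for the kernel types; dma_resv locking is elided. */
struct dma_buf;

struct dma_buf_map {
	void *vaddr;
};

struct dma_buf_ops {
	int (*pin)(struct dma_buf *dmabuf);	/* only dynamic exporters */
	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
};

struct dma_buf {
	const struct dma_buf_ops *ops;
	int pin_count;
	char payload[16];	/* toy backing storage */
};

/* Case a): a dynamic exporter implements ->pin, so dma_buf_vmap pins
 * first and then maps; case b) legacy exporters are expected to pin
 * implicitly somewhere inside ->vmap. */
static int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
{
	int ret;

	if (dmabuf->ops->pin) {
		ret = dmabuf->ops->pin(dmabuf);
		if (ret)
			return ret;
	}
	return dmabuf->ops->vmap(dmabuf, map);
}

/* The _local variant relies on the caller holding the reservation lock
 * instead of a pin, so it can only work for dynamic exporters and must
 * fail for legacy ones. */
static int dma_buf_vmap_local(struct dma_buf *dmabuf, struct dma_buf_map *map)
{
	if (!dmabuf->ops->pin)
		return -EINVAL;	/* case b): legacy pile, no _local */
	return dmabuf->ops->vmap(dmabuf, map);
}

/* Example exporters exercising both cases. */
static int dynamic_pin(struct dma_buf *dmabuf)
{
	dmabuf->pin_count++;
	return 0;
}

static int simple_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
{
	map->vaddr = dmabuf->payload;
	return 0;
}

static const struct dma_buf_ops dynamic_ops = {
	.pin = dynamic_pin,
	.vmap = simple_vmap,
};

static const struct dma_buf_ops legacy_ops = {
	.vmap = simple_vmap,
};
```

With these stand-ins, mapping a dynamic buffer through dma_buf_vmap bumps
the pin count before calling ->vmap, while dma_buf_vmap_local on a legacy
buffer fails with -EINVAL, which is the behaviour described above.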

I think for less transition work fbdev helpers could first try
dma_resv_lock + dma_buf_vmap_local, if that fails, drop the dma_resv_lock
and do the pinning dma_buf_vmap. That way we don't have to convert shmem
helpers over to dma_resv locking, which should help.
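
The fbdev fallback described here might look like the following sketch.
Again a toy model under stated assumptions: fbdev_vmap, fb_buf and the
boolean lock/pin state are invented for illustration, and the real code
would use dma_resv_lock/dma_resv_unlock on the buffer's reservation
object rather than flags.

```c
#include <errno.h>
#include <stdbool.h>

/* Toy model of the proposed fbdev strategy: try the resv-locked
 * vmap_local path first, then fall back to the pinning dma_buf_vmap. */
struct fb_buf {
	bool dynamic;	/* exporter implements explicit pin/unpin */
	bool locked;	/* models the dma_resv lock being held */
	bool pinned;
	void *vaddr;
	char payload[16];
};

static void dma_resv_lock(struct fb_buf *buf)   { buf->locked = true; }
static void dma_resv_unlock(struct fb_buf *buf) { buf->locked = false; }

/* Only works for dynamic exporters; relies on the held resv lock. */
static int dma_buf_vmap_local(struct fb_buf *buf, void **vaddr)
{
	if (!buf->dynamic)
		return -EINVAL;
	*vaddr = buf->payload;
	return 0;
}

/* Legacy path: pins the buffer (implicitly or explicitly). */
static int dma_buf_vmap(struct fb_buf *buf, void **vaddr)
{
	buf->pinned = true;
	*vaddr = buf->payload;
	return 0;
}

/* fbdev helper: prefer vmap_local under the resv lock; if the exporter
 * can't do that, drop the lock and use the pinning vmap instead. */
static int fbdev_vmap(struct fb_buf *buf)
{
	int ret;

	dma_resv_lock(buf);
	ret = dma_buf_vmap_local(buf, &buf->vaddr);
	if (ret == 0)
		return 0;	/* mapping stays valid while the lock is held */

	dma_resv_unlock(buf);
	return dma_buf_vmap(buf, &buf->vaddr);
}
```

A dynamic buffer ends up mapped with the resv lock held and no pin; a
legacy buffer ends up pinned with the lock dropped, so shmem helpers
would not need a dma_resv conversion for this to work.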

And ttm drivers would do the new clean interface, so at least everyone
using dma_resv today is all fine. Intel's conversion to dma_resv lock is
in-flight, but that needs a conversion to the dynamic interface anyway,
the current code splats. And dynamic means explicit pin/unpin
callbacks, so should be good too.
-Daniel

> 
> Regards,
> Christian.
> 
> > 
> > Best regards
> > Thomas
> > 
> > [1] https://patchwork.freedesktop.org/patch/400054/?series=83765&rev=1
> > 
> > > 
> > > - The new one, which allows vmapping with just dma_resv locked, and
> > > should
> > >    have some caching in exporters.
> > > 
> > > Breaking code and then adding todos about that is kinda not so cool
> > > approach here imo.
> > > 
> > > Also I guess ttm_bo_vmap should have a check that either the buffer is
> > > pinned, or dma_resv_lock is held.
> > > 
> > > Cheers, Daniel
> > > 
> > > 
> > > 
> > > > 
> > > > Best regards
> > > > Thomas
> > > > 
> > > > > 
> > > > > > ---
> > > > > >    Documentation/gpu/todo.rst | 15 +++++++++++++
> > > > > >    include/drm/drm_gem.h      |  4 ++++
> > > > > >    include/linux/dma-buf.h    | 45
> > > > > > ++++++++++++++++++++++++++++++++++++++
> > > > > >    3 files changed, 64 insertions(+)
> > > > > > 
> > > > > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > > > > > index 009d8e6c7e3c..32bb797a84fc 100644
> > > > > > --- a/Documentation/gpu/todo.rst
> > > > > > +++ b/Documentation/gpu/todo.rst
> > > > > > @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann
> > > > > > <tzimmermann@suse.de>, Christian König, Daniel Vette
> > > > > >    Level: Intermediate
> > > > > > +Enforce rules for dma-buf vmap and pin ops
> > > > > > +------------------------------------------
> > > > > > +
> > > > > > +Exporter implementations of vmap and pin in struct
> > > > > > dma_buf_ops (and consequently
> > > > > > +struct drm_gem_object_funcs) use a variety of locking
> > > > > > semantics. Some rely on
> > > > > > +the caller holding the dma-buf's reservation lock, some
> > > > > > do their own locking,
> > > > > > +some don't require any locking. VRAM helpers even used
> > > > > > to pin as part of vmap.
> > > > > > +
> > > > > > +We need to review each exporter and enforce the documented rules.
> > > > > > +
> > > > > > +Contact: Christian König, Daniel Vetter, Thomas
> > > > > > Zimmermann <tzimmermann@suse.de>
> > > > > > +
> > > > > > +Level: Advanced
> > > > > > +
> > > > > > +
> > > > > >    Core refactorings
> > > > > >    =================
> > > > > > diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> > > > > > index 5e6daa1c982f..1864c6a721b1 100644
> > > > > > --- a/include/drm/drm_gem.h
> > > > > > +++ b/include/drm/drm_gem.h
> > > > > > @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
> > > > > >         * drm_gem_dmabuf_vmap() helper.
> > > > > >         *
> > > > > >         * This callback is optional.
> > > > > > +     *
> > > > > > +     * See also struct dma_buf_ops.vmap
> > > > > >         */
> > > > > >        int (*vmap)(struct drm_gem_object *obj, struct
> > > > > > dma_buf_map *map);
> > > > > > @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
> > > > > >         * drm_gem_dmabuf_vunmap() helper.
> > > > > >         *
> > > > > >         * This callback is optional.
> > > > > > +     *
> > > > > > +     * See also struct dma_buf_ops.vunmap
> > > > > >         */
> > > > > >        void (*vunmap)(struct drm_gem_object *obj, struct
> > > > > > dma_buf_map *map);
> > > > > > diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> > > > > > index cf72699cb2bc..dc81fdc01dda 100644
> > > > > > --- a/include/linux/dma-buf.h
> > > > > > +++ b/include/linux/dma-buf.h
> > > > > > @@ -267,7 +267,52 @@ struct dma_buf_ops {
> > > > > >         */
> > > > > >        int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
> > > > > > +    /**
> > > > > > +     * @vmap:
> > > > > 
> > > > > There's already a @vmap and @vunmap kerneldoc at the top comment, that
> > > > > needs to be removed.
> > > > > -Daniel
> > > > > 
> > > > > > +     *
> > > > > > +     * Returns a virtual address for the buffer.
> > > > > > +     *
> > > > > > +     * Notes to callers:
> > > > > > +     *
> > > > > > +     * - Callers must hold the struct dma_buf.resv lock
> > > > > > before calling
> > > > > > +     *   this interface.
> > > > > > +     *
> > > > > > +     * - Callers must provide means to prevent the
> > > > > > mappings from going
> > > > > > +     *   stale, such as holding the reservation lock or providing a
> > > > > > +     *   move-notify callback to the exporter.
> > > > > > +     *
> > > > > > +     * Notes to implementors:
> > > > > > +     *
> > > > > > +     * - Implementations must expect pairs of @vmap and
> > > > > > @vunmap to be
> > > > > > +     *   called frequently and should optimize for this case.
> > > > > > +     *
> > > > > > +     * - Implementations should avoid additional operations, such as
> > > > > > +     *   pinning.
> > > > > > +     *
> > > > > > +     * - Implementations may expect the caller to hold the dma-buf's
> > > > > > +     *   reservation lock to protect against concurrent calls and
> > > > > > +     *   relocation.
> > > > > > +     *
> > > > > > +     * - Implementations may provide additional
> > > > > > guarantees, such as working
> > > > > > +     *   without holding the reservation lock.
> > > > > > +     *
> > > > > > +     * This callback is optional.
> > > > > > +     *
> > > > > > +     * Returns:
> > > > > > +     *
> > > > > > +     * 0 on success or a negative error code on failure.
> > > > > > +     */
> > > > > >        int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
> > > > > > +
> > > > > > +    /**
> > > > > > +     * @vunmap:
> > > > > > +     *
> > > > > > +     * Releases the address previously returned by @vmap.
> > > > > > +     *
> > > > > > +     * This callback is optional.
> > > > > > +     *
> > > > > > +     * See also @vmap()
> > > > > > +     */
> > > > > >        void (*vunmap)(struct dma_buf *dmabuf, struct
> > > > > > dma_buf_map *map);
> > > > > >    };
> > > > > > -- 
> > > > > > 2.29.2
> > > > > > 
> > > > > 
> > > > 
> > > > -- 
> > > > Thomas Zimmermann
> > > > Graphics Driver Developer
> > > > SUSE Software Solutions Germany GmbH
> > > > Maxfeldstr. 5, 90409 Nürnberg, Germany
> > > > (HRB 36809, AG Nürnberg)
> > > > Geschäftsführer: Felix Imendörffer
> > > > 
> > > 
> > > 
> > > 
> > > 
> > 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
  2020-12-09  0:13               ` Daniel Vetter
@ 2020-12-09  9:32                 ` Thomas Zimmermann
  -1 siblings, 0 replies; 36+ messages in thread
From: Thomas Zimmermann @ 2020-12-09  9:32 UTC (permalink / raw)
  To: Daniel Vetter, Christian König
  Cc: dri-devel, linaro-mm-sig, hdegoede, airlied, linux-media


[-- Attachment #1.1: Type: text/plain, Size: 12393 bytes --]

Hi

Am 09.12.20 um 01:13 schrieb Daniel Vetter:
> On Fri, Dec 04, 2020 at 09:47:08AM +0100, Christian König wrote:
>> Am 04.12.20 um 09:32 schrieb Thomas Zimmermann:
>>> Hi
>>>
>>> Am 03.12.20 um 21:41 schrieb Daniel Vetter:
>>>> On Thu, Dec 03, 2020 at 07:59:04PM +0100, Thomas Zimmermann wrote:
>>>>> Hi
>>>>>
>>>>> Am 03.12.20 um 16:26 schrieb Daniel Vetter:
>>>>>> On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
>>>>>>> Dma-buf's vmap and vunmap callbacks are undocumented and various
>>>>>>> exporters currently have slightly different semantics for them. Add
>>>>>>> documentation on how to implement and use these interfaces correctly.
>>>>>>>
>>>>>>> v2:
>>>>>>>      * document vmap semantics in struct dma_buf_ops
>>>>>>>      * add TODO item for reviewing and maybe fixing dma-buf exporters
>>>>>>>
>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>
>>>>>> I still don't think this works, we're breaking dma_buf_vmap
>>>>>> for everyone
>>>>>> else here.
>>>>>
>>>>> I removed the text on the importer. These notes for callers in
>>>>> the docs are
>>>>> more or less a consequence of the exporter semantics.
>>>>
>>>> Callers are importers, so I'm not seeing how that fixes anything.
>>>>
>>>>> I thought we at least agreed on the exporter semantics in the
>>>>> other thread,
>>>>> didn't we?
>>>>>
>>>>> What I'm trying to do is to write down some rules for exporters,
>>>>> even if not
>>>>> all exporters follow them.
>>>>
>>>> This is a standard interface, everyone needs to follow the same
>>>> rules. And
>>>> if they change, we need to make sure nothing breaks and we're not
>>>> creating
>>>> issues.
>>>>
>>>> In the past the rule was that dma_buf_vmap was allowed to take the
>>>> dma_resv_lock, and that the buffer should be pinned. Now some ttm
>>>> drivers
>>>> didn't ever bother with the pinning, and mostly got away with that
>>>> because
>>>> drm_prime helpers do the pinning by default at attach time, and most
>>>> users
>>>> do call dma_buf_attach.
>>>>
>>>> But if you look through dma-buf docs nothing ever said you have to do a
>>>> dummy attachment before you call dma_buf_vmap, that's just slightly
>>>> crappy
>>>> implementations that didn't blow up yet.
>>>
>>> I had a patch for adding pin to radeon's implementation of vmap. [1]
>>> Christian told me to not do this; instead just get the lock in the fbdev
>>> code. His advice almost seems the opposite of what you're telling me
>>> here.
>>
>> I think what Daniel suggests here is that we need to smoothly transition the
>> code from making assumptions to having a straight interface where importers
>> explicitly say when stuff is locked and when stuff is pinned.
>>
>> I've started this with the attach interface by adding a new dynamic approach
>> to that, but you probably need to carry on here with that for vmap as well.
>>
>> When that is done we can migrate every exporter over to the new dynamic
>> approach.
>>
>>>
>>> For the GEM VRAM helpers, that implicit pin in vmap gave me headaches.
>>> Because scanouts can only be done from VRAM, which is badly suited for
>>> exporting. So I ended up with an implicit pin that pins the buffer to
>>> whatever domain it currently is. I got away with it because GEM VRAM BOs
>>> are not sharable among devices; fbdev is the only user of that
>>> functionality and only pins for short periods of time.
>>>
>>> I suspect that fixing TTM-based drivers by adding an implicit pin would
>>> result in a similar situation. Whatever domain it ends up pinning, some
>>> functionality might not be compatible with that.
>>
>> Correct, exactly that's the problem.
>>
>>>
>>>>
>>>>> Given the circumstances, we should leave out this patch from the
>>>>> patchset.
>>>>
>>>> So the defacto rules are already a big mess, but that's not a good
>>>> excuse
>>>> to make it worse.
>>>>
>>>> What I had in mind is that we split dma_buf_vmap up into two variants:
>>>>
>>>> - The current one, which should guarantee that the buffer is pinned.
>>>>     Because that's what all current callers wanted, before the fbdev code
>>>>     started allowing non-pinned buffers.
>>>
>>> Can we add an explicit pin operation to dma_buf_vmap() to enforce the
>>> semantics?
>>
>> At least I would be fine with that. For now amdgpu is the only exporter who
>> implements the explicit pin/unpin semantics anyway.
> 
> Yup, I think that makes sense (if it works). Maybe we could use something
> like:
> 
> a) dma_buf pin exists, driver is dynamic. This means dma_buf_vmap needs to
> first pin, then call ->vmap. dma_buf_vmap_local otoh can directly call
> ->vmap since the exporter relies on either a pin or dma_resv_lock.
> 
> b) dma_buf pin is not implemented, the driver is a legacy pile. dma_buf_vmap
> will pin (somewhere at least, or rely on some implicit pin), and
> dma_buf_vmap_local doesn't work and should fail.

I think I read in the dma-buf documentation that pin is supposed to put 
the BO in a domain that is suitable for scanout. Now I don't really 
trust this to work. Amdgpu, radeon and nouveau put it into the GTT 
region. Qxl appears to pin it wherever it currently is.

> 
> I think for less transition work fbdev helpers could first try
> dma_resv_lock + dma_buf_vmap_local, if that fails, drop the dma_resv_lock
> and do the pinning dma_buf_vmap. That way we don't have to convert shmem
> helpers over to dma_resv locking, which should help.

I have meanwhile made a patchset that updates the helpers for cma, shmem 
and vram with vmap_local, and converts the fbdev emulation as well. It 
needs a bit more testing before being posted.

Best regards
Thomas

> 
> And ttm drivers would do the new clean interface, so at least everyone
> using dma_resv today is all fine. Intel's conversion to dma_resv lock is
> in-flight, but that needs a conversion to the dynamic interface anyway,
> the current code splats. And dynamic means explicit pin/unpin
> callbacks, so should be good too.
> -Daniel
> 
>>
>> Regards,
>> Christian.
>>
>>>
>>> Best regards
>>> Thomas
>>>
>>> [1] https://patchwork.freedesktop.org/patch/400054/?series=83765&rev=1
>>>
>>>>
>>>> - The new one, which allows vmapping with just dma_resv locked, and
>>>> should
>>>>     have some caching in exporters.
>>>>
>>>> Breaking code and then adding todos about that is kinda not so cool
>>>> approach here imo.
>>>>
>>>> Also I guess ttm_bo_vmap should have a check that either the buffer is
>>>> pinned, or dma_resv_lock is held.
>>>>
>>>> Cheers, Daniel
>>>>
>>>>
>>>>
>>>>>
>>>>> Best regards
>>>>> Thomas
>>>>>
>>>>>>
>>>>>>> ---
>>>>>>>     Documentation/gpu/todo.rst | 15 +++++++++++++
>>>>>>>     include/drm/drm_gem.h      |  4 ++++
>>>>>>>     include/linux/dma-buf.h    | 45
>>>>>>> ++++++++++++++++++++++++++++++++++++++
>>>>>>>     3 files changed, 64 insertions(+)
>>>>>>>
>>>>>>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
>>>>>>> index 009d8e6c7e3c..32bb797a84fc 100644
>>>>>>> --- a/Documentation/gpu/todo.rst
>>>>>>> +++ b/Documentation/gpu/todo.rst
>>>>>>> @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann
>>>>>>> <tzimmermann@suse.de>, Christian König, Daniel Vette
>>>>>>>     Level: Intermediate
>>>>>>> +Enforce rules for dma-buf vmap and pin ops
>>>>>>> +------------------------------------------
>>>>>>> +
>>>>>>> +Exporter implementations of vmap and pin in struct
>>>>>>> dma_buf_ops (and consequently
>>>>>>> +struct drm_gem_object_funcs) use a variety of locking
>>>>>>> semantics. Some rely on
>>>>>>> +the caller holding the dma-buf's reservation lock, some
>>>>>>> do their own locking,
>>>>>>> +some don't require any locking. VRAM helpers even used
>>>>>>> to pin as part of vmap.
>>>>>>> +
>>>>>>> +We need to review each exporter and enforce the documented rules.
>>>>>>> +
>>>>>>> +Contact: Christian König, Daniel Vetter, Thomas
>>>>>>> Zimmermann <tzimmermann@suse.de>
>>>>>>> +
>>>>>>> +Level: Advanced
>>>>>>> +
>>>>>>> +
>>>>>>>     Core refactorings
>>>>>>>     =================
>>>>>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>>>>>> index 5e6daa1c982f..1864c6a721b1 100644
>>>>>>> --- a/include/drm/drm_gem.h
>>>>>>> +++ b/include/drm/drm_gem.h
>>>>>>> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
>>>>>>>          * drm_gem_dmabuf_vmap() helper.
>>>>>>>          *
>>>>>>>          * This callback is optional.
>>>>>>> +     *
>>>>>>> +     * See also struct dma_buf_ops.vmap
>>>>>>>          */
>>>>>>>         int (*vmap)(struct drm_gem_object *obj, struct
>>>>>>> dma_buf_map *map);
>>>>>>> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
>>>>>>>          * drm_gem_dmabuf_vunmap() helper.
>>>>>>>          *
>>>>>>>          * This callback is optional.
>>>>>>> +     *
>>>>>>> +     * See also struct dma_buf_ops.vunmap
>>>>>>>          */
>>>>>>>         void (*vunmap)(struct drm_gem_object *obj, struct
>>>>>>> dma_buf_map *map);
>>>>>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>>>>>>> index cf72699cb2bc..dc81fdc01dda 100644
>>>>>>> --- a/include/linux/dma-buf.h
>>>>>>> +++ b/include/linux/dma-buf.h
>>>>>>> @@ -267,7 +267,52 @@ struct dma_buf_ops {
>>>>>>>          */
>>>>>>>         int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>>>>>>> +    /**
>>>>>>> +     * @vmap:
>>>>>>
>>>>>> There's already a @vmap and @vunmap kerneldoc at the top comment, that
>>>>>> needs to be removed.
>>>>>> -Daniel
>>>>>>
>>>>>>> +     *
>>>>>>> +     * Returns a virtual address for the buffer.
>>>>>>> +     *
>>>>>>> +     * Notes to callers:
>>>>>>> +     *
>>>>>>> +     * - Callers must hold the struct dma_buf.resv lock
>>>>>>> before calling
>>>>>>> +     *   this interface.
>>>>>>> +     *
>>>>>>> +     * - Callers must provide means to prevent the
>>>>>>> mappings from going
>>>>>>> +     *   stale, such as holding the reservation lock or providing a
>>>>>>> +     *   move-notify callback to the exporter.
>>>>>>> +     *
>>>>>>> +     * Notes to implementors:
>>>>>>> +     *
>>>>>>> +     * - Implementations must expect pairs of @vmap and
>>>>>>> @vunmap to be
>>>>>>> +     *   called frequently and should optimize for this case.
>>>>>>> +     *
>>>>>>> +     * - Implementations should avoid additional operations, such as
>>>>>>> +     *   pinning.
>>>>>>> +     *
>>>>>>> +     * - Implementations may expect the caller to hold the dma-buf's
>>>>>>> +     *   reservation lock to protect against concurrent calls and
>>>>>>> +     *   relocation.
>>>>>>> +     *
>>>>>>> +     * - Implementations may provide additional
>>>>>>> guarantees, such as working
>>>>>>> +     *   without holding the reservation lock.
>>>>>>> +     *
>>>>>>> +     * This callback is optional.
>>>>>>> +     *
>>>>>>> +     * Returns:
>>>>>>> +     *
>>>>>>> +     * 0 on success or a negative error code on failure.
>>>>>>> +     */
>>>>>>>         int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>>>>> +
>>>>>>> +    /**
>>>>>>> +     * @vunmap:
>>>>>>> +     *
>>>>>>> +     * Releases the address previously returned by @vmap.
>>>>>>> +     *
>>>>>>> +     * This callback is optional.
>>>>>>> +     *
>>>>>>> +     * See also @vmap()
>>>>>>> +     */
>>>>>>>         void (*vunmap)(struct dma_buf *dmabuf, struct
>>>>>>> dma_buf_map *map);
>>>>>>>     };
>>>>>>> -- 
>>>>>>> 2.29.2
>>>>>>>
>>>>>>
>>>>>
>>>>> -- 
>>>>> Thomas Zimmermann
>>>>> Graphics Driver Developer
>>>>> SUSE Software Solutions Germany GmbH
>>>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>>>> (HRB 36809, AG Nürnberg)
>>>>> Geschäftsführer: Felix Imendörffer
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

>>>>>>> index 5e6daa1c982f..1864c6a721b1 100644
>>>>>>> --- a/include/drm/drm_gem.h
>>>>>>> +++ b/include/drm/drm_gem.h
>>>>>>> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
>>>>>>>          * drm_gem_dmabuf_vmap() helper.
>>>>>>>          *
>>>>>>>          * This callback is optional.
>>>>>>> +     *
>>>>>>> +     * See also struct dma_buf_ops.vmap
>>>>>>>          */
>>>>>>>         int (*vmap)(struct drm_gem_object *obj, struct
>>>>>>> dma_buf_map *map);
>>>>>>> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
>>>>>>>          * drm_gem_dmabuf_vunmap() helper.
>>>>>>>          *
>>>>>>>          * This callback is optional.
>>>>>>> +     *
>>>>>>> +     * See also struct dma_buf_ops.vunmap
>>>>>>>          */
>>>>>>>         void (*vunmap)(struct drm_gem_object *obj, struct
>>>>>>> dma_buf_map *map);
>>>>>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>>>>>>> index cf72699cb2bc..dc81fdc01dda 100644
>>>>>>> --- a/include/linux/dma-buf.h
>>>>>>> +++ b/include/linux/dma-buf.h
>>>>>>> @@ -267,7 +267,52 @@ struct dma_buf_ops {
>>>>>>>          */
>>>>>>>         int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>>>>>>> +    /**
>>>>>>> +     * @vmap:
>>>>>>
>>>>>> There's already a @vmap and @vunmap kerneldoc at the top comment, that
>>>>>> needs to be removed.
>>>>>> -Daniel
>>>>>>
>>>>>>> +     *
>>>>>>> +     * Returns a virtual address for the buffer.
>>>>>>> +     *
>>>>>>> +     * Notes to callers:
>>>>>>> +     *
>>>>>>> +     * - Callers must hold the struct dma_buf.resv lock
>>>>>>> before calling
>>>>>>> +     *   this interface.
>>>>>>> +     *
>>>>>>> +     * - Callers must provide means to prevent the
>>>>>>> mappings from going
>>>>>>> +     *   stale, such as holding the reservation lock or providing a
>>>>>>> +     *   move-notify callback to the exporter.
>>>>>>> +     *
>>>>>>> +     * Notes to implementors:
>>>>>>> +     *
>>>>>>> +     * - Implementations must expect pairs of @vmap and
>>>>>>> @vunmap to be
>>>>>>> +     *   called frequently and should optimize for this case.
>>>>>>> +     *
>>>>>>> +     * - Implementations should avoid additional operations, such as
>>>>>>> +     *   pinning.
>>>>>>> +     *
>>>>>>> +     * - Implementations may expect the caller to hold the dma-buf's
>>>>>>> +     *   reservation lock to protect against concurrent calls and
>>>>>>> +     *   relocation.
>>>>>>> +     *
>>>>>>> +     * - Implementations may provide additional
>>>>>>> guarantees, such as working
>>>>>>> +     *   without holding the reservation lock.
>>>>>>> +     *
>>>>>>> +     * This callback is optional.
>>>>>>> +     *
>>>>>>> +     * Returns:
>>>>>>> +     *
>>>>>>> +     * 0 on success or a negative error code on failure.
>>>>>>> +     */
>>>>>>>         int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>>>>> +
>>>>>>> +    /**
>>>>>>> +     * @vunmap:
>>>>>>> +     *
>>>>>>> +     * Releases the address previously returned by @vmap.
>>>>>>> +     *
>>>>>>> +     * This callback is optional.
>>>>>>> +     *
>>>>>>> +     * See also @vmap()
>>>>>>> +     */
>>>>>>>         void (*vunmap)(struct dma_buf *dmabuf, struct
>>>>>>> dma_buf_map *map);
>>>>>>>     };
>>>>>>> -- 
>>>>>>> 2.29.2
>>>>>>>
>>>>>>
>>>>>
>>>>> -- 
>>>>> Thomas Zimmermann
>>>>> Graphics Driver Developer
>>>>> SUSE Software Solutions Germany GmbH
>>>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>>>> (HRB 36809, AG Nürnberg)
>>>>> Geschäftsführer: Felix Imendörffer
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
  2020-12-09  9:32                 ` Thomas Zimmermann
@ 2020-12-09 10:16                   ` Daniel Vetter
  -1 siblings, 0 replies; 36+ messages in thread
From: Daniel Vetter @ 2020-12-09 10:16 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Christian König, dri-devel,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Hans de Goede,
	Dave Airlie, open list:DMA BUFFER SHARING FRAMEWORK

On Wed, Dec 9, 2020 at 10:32 AM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> Hi
>
> Am 09.12.20 um 01:13 schrieb Daniel Vetter:
> > On Fri, Dec 04, 2020 at 09:47:08AM +0100, Christian König wrote:
> >> Am 04.12.20 um 09:32 schrieb Thomas Zimmermann:
> >>> Hi
> >>>
> >>> Am 03.12.20 um 21:41 schrieb Daniel Vetter:
> >>>> On Thu, Dec 03, 2020 at 07:59:04PM +0100, Thomas Zimmermann wrote:
> >>>>> Hi
> >>>>>
> >>>>> Am 03.12.20 um 16:26 schrieb Daniel Vetter:
> >>>>>> On Thu, Dec 03, 2020 at 03:02:59PM +0100, Thomas Zimmermann wrote:
> >>>>>>> Dma-buf's vmap and vunmap callbacks are undocumented and various
> >>>>>>> exporters currently have slightly different semantics for them. Add
> >>>>>>> documentation on how to implement and use these interfaces correctly.
> >>>>>>>
> >>>>>>> v2:
> >>>>>>>      * document vmap semantics in struct dma_buf_ops
> >>>>>>>      * add TODO item for reviewing and maybe fixing dma-buf exporters
> >>>>>>>
> >>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >>>>>>
> >>>>>> I still don't think this works, we're breaking dma_buf_vmap
> >>>>>> for everyone
> >>>>>> else here.
> >>>>>
> >>>>> I removed the text on the importer. These notes for callers in
> >>>>> the docs are
> >>>>> more or less a consequence of the exporter semantics.
> >>>>
> >>>> Callers are importers, so I'm not seeing how that fixes anything.
> >>>>
> >>>>> I thought we at least agreed on the exporter semantics in the
> >>>>> other thread,
> >>>>> didn't we?
> >>>>>
> >>>>> What I'm trying to do is to write down some rules for exporters,
> >>>>> even if not
> >>>>> all exporters follow them.
> >>>>
> >>>> This is a standard interface, everyone needs to follow the same
> >>>> rules. And
> >>>> if they change, we need to make sure nothing breaks and we're not
> >>>> creating
> >>>> issues.
> >>>>
> >>>> In the past the rule was that dma_buf_vmap was allowed to take the
> >>>> dma_resv_lock, and that the buffer should be pinned. Now some ttm
> >>>> drivers
> >>>> didn't ever bother with the pinning, and mostly got away with that
> >>>> because
> >>>> drm_prime helpers do the pinning by default at attach time, and most
> >>>> users
> >>>> do call dma_buf_attach.
> >>>>
> >>>> But if you look through dma-buf docs nothing ever said you have to do a
> >>>> dummy attachment before you call dma_buf_vmap, that's just slightly
> >>>> crappy
> >>>> implementations that didn't blow up yet.
> >>>
> >>> I had a patch for adding pin to radeon's implementation of vmap. [1]
> >>> Christian told me to not do this; instead just get the lock in the fbdev
> >>> code. His advice almost seems the opposite of what you're telling me
> >>> here.
> >>
> >> I think what Daniel suggests here is that we need to smoothly transition the
> >> code from making assumptions to having a straight interface where importers
> >> explicitly say when stuff is locked and when stuff is pinned.
> >>
> >> I've started this with the attach interface by adding a new dynamic approach
> >> to that, but you probably need to carry on here with that for vmap as well.
> >>
> >> When that is done we can migrate every exporter over to the new dynamic
> >> approach.
> >>
> >>>
> >>> For the GEM VRAM helpers, that implicit pin in vmap gave me headaches.
> >>> Because scanouts can only be done from VRAM, which is badly suited for
> >>> exporting. So I ended up with an implicit pin that pins the buffer to
> >>> whatever domain it currently is. I got away with it because GEM VRAM BOs
> >>> are not sharable among devices; fbdev is the only user of that
> >>> functionality and only pins for short periods of time.
> >>>
> >>> I suspect that fixing TTM-based drivers by adding an implicit pin would
> >>> result in a similar situation. Whatever domain it ends up pinning, some
> >>> functionality might not be compatible with that.
> >>
> >> Correct, exactly that's the problem.
> >>
> >>>
> >>>>
> >>>>> Given the circumstances, we should leave out this patch from the
> >>>>> patchset.
> >>>>
> >>>> So the defacto rules are already a big mess, but that's not a good
> >>>> excuse
> >>>> to make it worse.
> >>>>
> >>>> What I had in mind is that we split dma_buf_vmap up into two variants:
> >>>>
> >>>> - The current one, which should guarantee that the buffer is pinned.
> >>>>     Because that's what all current callers wanted, before the fbdev code
> >>>>     started allowing non-pinned buffers.
> >>>
> >>> Can we add an explicit pin operation to dma_buf_vmap() to enforce the
> >>> semantics?
> >>
> >> At least I would be fine with that. For now amdgpu is the only exporter that
> >> implements the explicit pin/unpin semantics anyway.
> >
> > Yup, I think that makes sense (if it works). Maybe we could use something
> > like:
> >
> > a) dma_buf pin exists, driver is dynamic. This means dma_buf_vmap needs to
> > first pin, then call ->vmap. dma_buf_vmap_local otoh can directly call
> > ->vmap since the exporter relies on either a pin or dma_resv_lock.
> >
> > b) dma_buf pin not implemented, driver is a legacy pile. dma_buf_vmap will
> > pin (somewhere at least, or rely on some implicit pin), dma_buf_vmap_local
> > doesn't work and should fail.
>
> I think I read in the dma-buf documentation that pin is supposed to put
> the BO in a domain that is suitable for scanout. Now I don't really
> trust this to work. Amdgpu, radeon and nouveau put it into the GTT
> region. Qxl appears to put it wherever it is.

Uh, that sounds wrong, or at least it's not the full picture. ->pin's
main use right now is to freeze the dma-buf into system memory when
there's a non-dynamic attachment for a dynamic exporter. There's also
a dma_buf_pin call in amdgpu, and I think that only works for scanout
on integrated gpu. Pinning to vram would break sharing for all
existing dma-buf users.

Christian can you perhaps explain when amdgpu uses dma_buf_pin()?

My take is the ->pin callback should clarify that it should pin into
system memory, for legacy (non-dynamic) dma-buf users. And
dma_buf_pin() should gain some kerneldoc which then states that "This
should only be used in limited use cases like scanout and not for
temporary pin operations." I think the problem with this kerneldoc is
simply that it tries to document the exporter and importer side of the
contract in one place and makes a mess of it, plus leaves the actual
importer side function undocumented. I think the kerneldoc also
predates the final patch version, and we didn't update it fully.
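To make the a)/b) split quoted above concrete, here is a rough sketch. All types and helpers below are mock stand-ins (not the real dma-buf interfaces), just to show the dispatch: dma_buf_vmap() pins first when the exporter implements pin, while dma_buf_vmap_local() relies on the caller-held resv lock and fails for legacy exporters.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

struct dma_buf; /* mock forward declaration */

/* Mock ops table; pin == NULL marks a legacy (non-dynamic) exporter. */
struct dma_buf_ops {
	int (*pin)(struct dma_buf *buf);
	void (*unpin)(struct dma_buf *buf);
	int (*vmap)(struct dma_buf *buf, void **vaddr);
};

struct dma_buf {
	const struct dma_buf_ops *ops;
	bool resv_locked; /* stands in for dma_resv_is_locked() */
	int pin_count;
	void *backing;
};

/* Rule a/b: the pinning variant pins dynamic exporters before ->vmap;
 * legacy exporters are assumed to pin implicitly somewhere. */
static int dma_buf_vmap(struct dma_buf *buf, void **vaddr)
{
	int ret;

	if (buf->ops->pin) {
		ret = buf->ops->pin(buf);
		if (ret)
			return ret;
	}
	return buf->ops->vmap(buf, vaddr);
}

/* Rule b: vmap_local requires the resv lock and a dynamic exporter. */
static int dma_buf_vmap_local(struct dma_buf *buf, void **vaddr)
{
	if (!buf->ops->pin)
		return -EOPNOTSUPP; /* legacy pile, should fail */
	assert(buf->resv_locked);   /* WARN_ON() in real code */
	return buf->ops->vmap(buf, vaddr);
}

/* A trivial mock exporter for demonstration. */
static int mock_pin(struct dma_buf *b) { b->pin_count++; return 0; }
static void mock_unpin(struct dma_buf *b) { b->pin_count--; }
static int mock_vmap(struct dma_buf *b, void **v) { *v = b->backing; return 0; }

static const struct dma_buf_ops dynamic_ops = {
	.pin = mock_pin, .unpin = mock_unpin, .vmap = mock_vmap,
};
static const struct dma_buf_ops legacy_ops = {
	.vmap = mock_vmap,
};
```

The point is only the dispatch shape; where exactly pin places the buffer (system memory vs. whatever domain) is the open question discussed above.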

> > I think for less transition work fbdev helpers could first try
> > dma_resv_lock + dma_buf_vmap_local, if that fails, drop the dma_resv_lock
> > and do the pinning dma_buf_vmap. That way we don't have to convert shmem
> > helpers over to dma_resv locking, which should help.
>
> I have meanwhile made a patchset that updates helpers for cma, shmem and
> vram with vmap_local; and converts fbdev emulation as well. It needs a
> bit more testing before being posted.

Nice, even better!
-Daniel

> Best regards
> Thomas
>
> >
> > And ttm drivers would do the new clean interface, so at least everyone
> > using dma_resv today is all fine. Intel's conversion to dma_resv lock is
> > in-flight, but that needs a conversion to the dynamic interface anyway,
> > the current code splats. And dynamic means explicit pin/unpin
> > callbacks, so should be good too.
> > -Daniel
> >
> >>
> >> Regards,
> >> Christian.
> >>
> >>>
> >>> Best regards
> >>> Thomas
> >>>
> >>> [1] https://patchwork.freedesktop.org/patch/400054/?series=83765&rev=1
> >>>
> >>>>
> >>>> - The new one, which allows vmapping with just dma_resv locked, and
> >>>> should
> >>>>     have some caching in exporters.
> >>>>
> >>>> Breaking code and then adding todos about that is kinda not so cool
> >>>> approach here imo.
> >>>>
> >>>> Also I guess ttm_bo_vmap should have a check that either the buffer is
> >>>> pinned, or dma_resv_lock is held.
> >>>>
> >>>> Cheers, Daniel
> >>>>
> >>>>
> >>>>
> >>>>>
> >>>>> Best regards
> >>>>> Thomas
> >>>>>
> >>>>>>
> >>>>>>> ---
> >>>>>>>     Documentation/gpu/todo.rst | 15 +++++++++++++
> >>>>>>>     include/drm/drm_gem.h      |  4 ++++
> >>>>>>>     include/linux/dma-buf.h    | 45
> >>>>>>> ++++++++++++++++++++++++++++++++++++++
> >>>>>>>     3 files changed, 64 insertions(+)
> >>>>>>>
> >>>>>>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> >>>>>>> index 009d8e6c7e3c..32bb797a84fc 100644
> >>>>>>> --- a/Documentation/gpu/todo.rst
> >>>>>>> +++ b/Documentation/gpu/todo.rst
> >>>>>>> @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann
> >>>>>>> <tzimmermann@suse.de>, Christian König, Daniel Vette
> >>>>>>>     Level: Intermediate
> >>>>>>> +Enforce rules for dma-buf vmap and pin ops
> >>>>>>> +------------------------------------------
> >>>>>>> +
> >>>>>>> +Exporter implementations of vmap and pin in struct
> >>>>>>> dma_buf_ops (and consequently
> >>>>>>> +struct drm_gem_object_funcs) use a variety of locking
> >>>>>>> semantics. Some rely on
> >>>>>>> +the caller holding the dma-buf's reservation lock, some
> >>>>>>> do their own locking,
> >>>>>>> +some don't require any locking. VRAM helpers even used
> >>>>>>> to pin as part of vmap.
> >>>>>>> +
> >>>>>>> +We need to review each exporter and enforce the documented rules.
> >>>>>>> +
> >>>>>>> +Contact: Christian König, Daniel Vetter, Thomas
> >>>>>>> Zimmermann <tzimmermann@suse.de>
> >>>>>>> +
> >>>>>>> +Level: Advanced
> >>>>>>> +
> >>>>>>> +
> >>>>>>>     Core refactorings
> >>>>>>>     =================
> >>>>>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> >>>>>>> index 5e6daa1c982f..1864c6a721b1 100644
> >>>>>>> --- a/include/drm/drm_gem.h
> >>>>>>> +++ b/include/drm/drm_gem.h
> >>>>>>> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
> >>>>>>>          * drm_gem_dmabuf_vmap() helper.
> >>>>>>>          *
> >>>>>>>          * This callback is optional.
> >>>>>>> +     *
> >>>>>>> +     * See also struct dma_buf_ops.vmap
> >>>>>>>          */
> >>>>>>>         int (*vmap)(struct drm_gem_object *obj, struct
> >>>>>>> dma_buf_map *map);
> >>>>>>> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
> >>>>>>>          * drm_gem_dmabuf_vunmap() helper.
> >>>>>>>          *
> >>>>>>>          * This callback is optional.
> >>>>>>> +     *
> >>>>>>> +     * See also struct dma_buf_ops.vunmap
> >>>>>>>          */
> >>>>>>>         void (*vunmap)(struct drm_gem_object *obj, struct
> >>>>>>> dma_buf_map *map);
> >>>>>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> >>>>>>> index cf72699cb2bc..dc81fdc01dda 100644
> >>>>>>> --- a/include/linux/dma-buf.h
> >>>>>>> +++ b/include/linux/dma-buf.h
> >>>>>>> @@ -267,7 +267,52 @@ struct dma_buf_ops {
> >>>>>>>          */
> >>>>>>>         int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
> >>>>>>> +    /**
> >>>>>>> +     * @vmap:
> >>>>>>
> >>>>>> There's already a @vmap and @vunmap kerneldoc at the top comment, that
> >>>>>> needs to be removed.
> >>>>>> -Daniel
> >>>>>>
> >>>>>>> +     *
> >>>>>>> +     * Returns a virtual address for the buffer.
> >>>>>>> +     *
> >>>>>>> +     * Notes to callers:
> >>>>>>> +     *
> >>>>>>> +     * - Callers must hold the struct dma_buf.resv lock
> >>>>>>> before calling
> >>>>>>> +     *   this interface.
> >>>>>>> +     *
> >>>>>>> +     * - Callers must provide means to prevent the
> >>>>>>> mappings from going
> >>>>>>> +     *   stale, such as holding the reservation lock or providing a
> >>>>>>> +     *   move-notify callback to the exporter.
> >>>>>>> +     *
> >>>>>>> +     * Notes to implementors:
> >>>>>>> +     *
> >>>>>>> +     * - Implementations must expect pairs of @vmap and
> >>>>>>> @vunmap to be
> >>>>>>> +     *   called frequently and should optimize for this case.
> >>>>>>> +     *
> >>>>>>> +     * - Implementations should avoid additional operations, such as
> >>>>>>> +     *   pinning.
> >>>>>>> +     *
> >>>>>>> +     * - Implementations may expect the caller to hold the dma-buf's
> >>>>>>> +     *   reservation lock to protect against concurrent calls and
> >>>>>>> +     *   relocation.
> >>>>>>> +     *
> >>>>>>> +     * - Implementations may provide additional
> >>>>>>> guarantees, such as working
> >>>>>>> +     *   without holding the reservation lock.
> >>>>>>> +     *
> >>>>>>> +     * This callback is optional.
> >>>>>>> +     *
> >>>>>>> +     * Returns:
> >>>>>>> +     *
> >>>>>>> +     * 0 on success or a negative error code on failure.
> >>>>>>> +     */
> >>>>>>>         int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
> >>>>>>> +
> >>>>>>> +    /**
> >>>>>>> +     * @vunmap:
> >>>>>>> +     *
> >>>>>>> +     * Releases the address previously returned by @vmap.
> >>>>>>> +     *
> >>>>>>> +     * This callback is optional.
> >>>>>>> +     *
> >>>>>>> +     * See also @vmap()
> >>>>>>> +     */
> >>>>>>>         void (*vunmap)(struct dma_buf *dmabuf, struct
> >>>>>>> dma_buf_map *map);
> >>>>>>>     };
> >>>>>>> --
> >>>>>>> 2.29.2
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>> --
> >>>>> Thomas Zimmermann
> >>>>> Graphics Driver Developer
> >>>>> SUSE Software Solutions Germany GmbH
> >>>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
> >>>>> (HRB 36809, AG Nürnberg)
> >>>>> Geschäftsführer: Felix Imendörffer
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 36+ messages in thread

>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage
  2020-12-09 10:16                   ` Daniel Vetter
@ 2020-12-09 11:06                     ` Christian König
  -1 siblings, 0 replies; 36+ messages in thread
From: Christian König @ 2020-12-09 11:06 UTC (permalink / raw)
  To: Daniel Vetter, Thomas Zimmermann
  Cc: dri-devel, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Hans de Goede, Dave Airlie,
	open list:DMA BUFFER SHARING FRAMEWORK

Am 09.12.20 um 11:16 schrieb Daniel Vetter:
> [SNIP]
>>>> At least I would be fine with that. For now amdgpu is the only exporter who
>>>> implements the explicit pin/unpin semantics anyway.
>>> Yup, I think that makes sense (if it works). Maybe we could use something
>>> like:
>>>
>>> a) dma_buf pin exists, driver is dynamic. This means dma_buf_vmap needs to
>>> first pin, then call ->vmap. dma_buf_vmap_local otoh can directly call
>>> ->vmap since the exporter relies on either a pin or dma_resv_lock.
>>>
>>> b) dma_buf pin not implement, driver is a legacy pile. dma_buf_vmap will
>>> pin (somewhere at least, or rely on some implicit pin), dma_buf_vmap_local
>>> doesn't work and should fail.
>> I think I read in the dma-buf documentation that pin is supposed to put
>> the BO in a domain that is suitable for scanout. Now I don't really
>> trust this to work. Amdgpu, radeon and nouveau put it into the GTT
>> region. Qxl appears to put it wherever it is.
> Uh that sounds wrong, or at least not the full complexity. ->pin's
> main use right now is to freeze the dma-buf into system memory when
> there's a non-dynamic attachment for a dynamic exporter. There's also
> a dma_buf_pin call in amdgpu, and I think that only works for scanout
> on integrated gpu. Pinning to vram would break sharing for all
> existing dma-buf users.
>
> Christian can you perhaps explain when amdgpu uses dma_buf_pin()?

You guys are writing mails faster than I can answer :)

dma_buf_pin() should be used when the buffer can't move any more, and it 
is preferable for it to be in a location which is accessible by all 
devices; that usually means system memory.

> My take is the ->pin callback should clarify that it should pin into
> system memory, for legacy (non-dynamic) dma-buf users.

I thought I had written that in the documentation for the pin callback, 
but it looks like I forgot to do so.

> And dma_buf_pin() should gain some kerneldoc which then states that "This
> should only be used in limited use cases like scanout and not for
> temporary pin operations." I think the problem with this kerneldoc is
> simply that it tries to document the exporter and importer side of the
> contract in one place and makes a mess of it, plus leaves the actual
> importer side function undocumented. I think the kerneldoc also
> predates the final patch version, and we didn't update it fully.

That's a really good point and I noticed this mess before as well.

How about this: we document the exporter side on the dma-buf function 
table (because the exporter should implement those), and the importer 
side on the dma-buf functions, which then use the exporter functions.

Regards,
Christian.

>>> I think for less transition work fbdev helpers could first try
>>> dma_resv_lock + dma_buf_vmap_local, if that fails, drop the dma_resv_lock
>>> and do the pinning dma_buf_vmap. That way we don't have to convert shmem
>>> helpers over to dma_resv locking, which should help.
>> I have meanwhile made a patchset that updates helpers for cma, shmem and
>> vram with vmap_local; and converts fbdev emulation as well. It needs a
>> bit more testing before being posted.
> Nice, even better!
> -Daniel
>
>> Best regards
>> Thomas
>>
>>> And ttm drivers would do the new clean interface, so at least everyone
>>> using dma_resv today is all fine. Intel's conversion to dma_resv lock is
>>> in-flight, but that needs a conversion to the dynamic interface anyway,
>>> the current code splats. And dynamic means explicit pin/unpin
>>> callbacks, so should be good too.
>>> -Daniel
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>> Best regards
>>>>> Thomas
>>>>>
>>>>> [1] https://patchwork.freedesktop.org/patch/400054/?series=83765&rev=1
>>>>>
>>>>>> - The new one, which allows vmapping with just dma_resv locked, and
>>>>>> should
>>>>>>      have some caching in exporters.
>>>>>>
>>>>>> Breaking code and then adding todos about that is kinda not so cool
>>>>>> approach here imo.
>>>>>>
>>>>>> Also I guess ttm_bo_vmap should have a check that either the buffer is
>>>>>> pinned, or dma_resv_lock is held.
>>>>>>
>>>>>> Cheers, Daniel
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Best regards
>>>>>>> Thomas
>>>>>>>
>>>>>>>>> ---
>>>>>>>>>      Documentation/gpu/todo.rst | 15 +++++++++++++
>>>>>>>>>      include/drm/drm_gem.h      |  4 ++++
>>>>>>>>>      include/linux/dma-buf.h    | 45
>>>>>>>>> ++++++++++++++++++++++++++++++++++++++
>>>>>>>>>      3 files changed, 64 insertions(+)
>>>>>>>>>
>>>>>>>>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
>>>>>>>>> index 009d8e6c7e3c..32bb797a84fc 100644
>>>>>>>>> --- a/Documentation/gpu/todo.rst
>>>>>>>>> +++ b/Documentation/gpu/todo.rst
>>>>>>>>> @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann
>>>>>>>>> <tzimmermann@suse.de>, Christian König, Daniel Vette
>>>>>>>>>      Level: Intermediate
>>>>>>>>> +Enforce rules for dma-buf vmap and pin ops
>>>>>>>>> +------------------------------------------
>>>>>>>>> +
>>>>>>>>> +Exporter implementations of vmap and pin in struct
>>>>>>>>> dma_buf_ops (and consequently
>>>>>>>>> +struct drm_gem_object_funcs) use a variety of locking
>>>>>>>>> semantics. Some rely on
>>>>>>>>> +the caller holding the dma-buf's reservation lock, some
>>>>>>>>> do their own locking,
>>>>>>>>> +some don't require any locking. VRAM helpers even used
>>>>>>>>> to pin as part of vmap.
>>>>>>>>> +
>>>>>>>>> +We need to review each exporter and enforce the documented rules.
>>>>>>>>> +
>>>>>>>>> +Contact: Christian König, Daniel Vetter, Thomas
>>>>>>>>> Zimmermann <tzimmermann@suse.de>
>>>>>>>>> +
>>>>>>>>> +Level: Advanced
>>>>>>>>> +
>>>>>>>>> +
>>>>>>>>>      Core refactorings
>>>>>>>>>      =================
>>>>>>>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>>>>>>>> index 5e6daa1c982f..1864c6a721b1 100644
>>>>>>>>> --- a/include/drm/drm_gem.h
>>>>>>>>> +++ b/include/drm/drm_gem.h
>>>>>>>>> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
>>>>>>>>>           * drm_gem_dmabuf_vmap() helper.
>>>>>>>>>           *
>>>>>>>>>           * This callback is optional.
>>>>>>>>> +     *
>>>>>>>>> +     * See also struct dma_buf_ops.vmap
>>>>>>>>>           */
>>>>>>>>>          int (*vmap)(struct drm_gem_object *obj, struct
>>>>>>>>> dma_buf_map *map);
>>>>>>>>> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
>>>>>>>>>           * drm_gem_dmabuf_vunmap() helper.
>>>>>>>>>           *
>>>>>>>>>           * This callback is optional.
>>>>>>>>> +     *
>>>>>>>>> +     * See also struct dma_buf_ops.vunmap
>>>>>>>>>           */
>>>>>>>>>          void (*vunmap)(struct drm_gem_object *obj, struct
>>>>>>>>> dma_buf_map *map);
>>>>>>>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>>>>>>>>> index cf72699cb2bc..dc81fdc01dda 100644
>>>>>>>>> --- a/include/linux/dma-buf.h
>>>>>>>>> +++ b/include/linux/dma-buf.h
>>>>>>>>> @@ -267,7 +267,52 @@ struct dma_buf_ops {
>>>>>>>>>           */
>>>>>>>>>          int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>>>>>>>>> +    /**
>>>>>>>>> +     * @vmap:
>>>>>>>> There's already a @vmap and @vunamp kerneldoc at the top comment, that
>>>>>>>> needs to be removed.
>>>>>>>> -Daniel
>>>>>>>>
>>>>>>>>> +     *
>>>>>>>>> +     * Returns a virtual address for the buffer.
>>>>>>>>> +     *
>>>>>>>>> +     * Notes to callers:
>>>>>>>>> +     *
>>>>>>>>> +     * - Callers must hold the struct dma_buf.resv lock
>>>>>>>>> before calling
>>>>>>>>> +     *   this interface.
>>>>>>>>> +     *
>>>>>>>>> +     * - Callers must provide means to prevent the
>>>>>>>>> mappings from going
>>>>>>>>> +     *   stale, such as holding the reservation lock or providing a
>>>>>>>>> +     *   move-notify callback to the exporter.
>>>>>>>>> +     *
>>>>>>>>> +     * Notes to implementors:
>>>>>>>>> +     *
>>>>>>>>> +     * - Implementations must expect pairs of @vmap and
>>>>>>>>> @vunmap to be
>>>>>>>>> +     *   called frequently and should optimize for this case.
>>>>>>>>> +     *
>>>>>>>>> +     * - Implementations should avoid additional operations, such as
>>>>>>>>> +     *   pinning.
>>>>>>>>> +     *
>>>>>>>>> +     * - Implementations may expect the caller to hold the dma-buf's
>>>>>>>>> +     *   reservation lock to protect against concurrent calls and
>>>>>>>>> +     *   relocation.
>>>>>>>>> +     *
>>>>>>>>> +     * - Implementations may provide additional
>>>>>>>>> guarantees, such as working
>>>>>>>>> +     *   without holding the reservation lock.
>>>>>>>>> +     *
>>>>>>>>> +     * This callback is optional.
>>>>>>>>> +     *
>>>>>>>>> +     * Returns:
>>>>>>>>> +     *
>>>>>>>>> +     * 0 on success or a negative error code on failure.
>>>>>>>>> +     */
>>>>>>>>>          int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>>>>>>> +
>>>>>>>>> +    /**
>>>>>>>>> +     * @vunmap:
>>>>>>>>> +     *
>>>>>>>>> +     * Releases the address previously returned by @vmap.
>>>>>>>>> +     *
>>>>>>>>> +     * This callback is optional.
>>>>>>>>> +     *
>>>>>>>>> +     * See also @vmap()
>>>>>>>>> +     */
>>>>>>>>>          void (*vunmap)(struct dma_buf *dmabuf, struct
>>>>>>>>> dma_buf_map *map);
>>>>>>>>>      };
>>>>>>>>> --
>>>>>>>>> 2.29.2
>>>>>>>>>
>>>>>>> --
>>>>>>> Thomas Zimmermann
>>>>>>> Graphics Driver Developer
>>>>>>> SUSE Software Solutions Germany GmbH
>>>>>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>>>>>> (HRB 36809, AG Nürnberg)
>>>>>>> Geschäftsführer: Felix Imendörffer
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>> --
>> Thomas Zimmermann
>> Graphics Driver Developer
>> SUSE Software Solutions Germany GmbH
>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>> (HRB 36809, AG Nürnberg)
>> Geschäftsführer: Felix Imendörffer
>>
>


^ permalink raw reply	[flat|nested] 36+ messages in thread


end of thread, other threads:[~2020-12-09 11:08 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-03 14:02 [PATCH v2 0/7] drm/vram-helper: Lock GEM BOs while they are mapped Thomas Zimmermann
2020-12-03 14:02 ` Thomas Zimmermann
2020-12-03 14:02 ` [PATCH v2 1/7] drm/ast: Don't pin cursor source BO explicitly during update Thomas Zimmermann
2020-12-03 14:02   ` Thomas Zimmermann
2020-12-03 14:02 ` [PATCH v2 2/7] drm/ast: Only map cursor BOs during updates Thomas Zimmermann
2020-12-03 14:02   ` Thomas Zimmermann
2020-12-03 14:02 ` [PATCH v2 3/7] drm/vram-helper: Move BO locking from vmap code into callers Thomas Zimmermann
2020-12-03 14:02   ` Thomas Zimmermann
2020-12-03 14:02 ` [PATCH v2 4/7] drm/vram-helper: Remove pinning from drm_gem_vram_{vmap,vunmap}() Thomas Zimmermann
2020-12-03 14:02   ` [PATCH v2 4/7] drm/vram-helper: Remove pinning from drm_gem_vram_{vmap, vunmap}() Thomas Zimmermann
2020-12-03 14:02 ` [PATCH v2 5/7] drm/vram-helper: Remove vmap reference counting Thomas Zimmermann
2020-12-03 14:02   ` Thomas Zimmermann
2020-12-03 14:02 ` [PATCH v2 6/7] drm/vram-helper: Simplify vmap implementation Thomas Zimmermann
2020-12-03 14:02   ` Thomas Zimmermann
2020-12-03 14:02 ` [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage Thomas Zimmermann
2020-12-03 14:02   ` Thomas Zimmermann
2020-12-03 15:26   ` Daniel Vetter
2020-12-03 15:26     ` Daniel Vetter
2020-12-03 18:59     ` Thomas Zimmermann
2020-12-03 18:59       ` Thomas Zimmermann
2020-12-03 20:41       ` Daniel Vetter
2020-12-03 20:41         ` Daniel Vetter
2020-12-04  8:32         ` Thomas Zimmermann
2020-12-04  8:32           ` Thomas Zimmermann
2020-12-04  8:47           ` Christian König
2020-12-04  8:47             ` Christian König
2020-12-09  0:13             ` Daniel Vetter
2020-12-09  0:13               ` Daniel Vetter
2020-12-09  9:32               ` Thomas Zimmermann
2020-12-09  9:32                 ` Thomas Zimmermann
2020-12-09 10:16                 ` Daniel Vetter
2020-12-09 10:16                   ` Daniel Vetter
2020-12-09 11:06                   ` Christian König
2020-12-09 11:06                     ` Christian König
2020-12-03 14:05 ` [PATCH v2 0/7] drm/vram-helper: Lock GEM BOs while they are mapped Thomas Zimmermann
2020-12-03 14:05   ` Thomas Zimmermann

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.