* [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
@ 2020-09-14 11:25 ` Thomas Zimmermann
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-14 11:25 UTC (permalink / raw)
  To: sumit.semwal, christian.koenig, daniel, airlied, sam,
	mark.cave-ayland, kraxel, davem, maarten.lankhorst, mripard,
	l.stach, linux+etnaviv, christian.gmeiner, jani.nikula,
	joonas.lahtinen, rodrigo.vivi, thierry.reding, jonathanh, pawel,
	m.szyprowski, kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom
  Cc: linux-media, dri-devel, linaro-mm-sig, etnaviv, intel-gfx,
	linux-tegra, sparclinux, Thomas Zimmermann

Dma-buf provides vmap() and vunmap() for retrieving and releasing mappings
of dma-buf memory in kernel address space. The functions operate on plain
addresses and assume that the memory can be accessed with regular load and
store operations. This is not the case on some architectures (e.g., sparc64),
where I/O memory can only be accessed with dedicated instructions.

This patchset introduces struct dma_buf_map, which contains the address of
a buffer and a flag that tells whether system- or I/O-memory instructions
are required.

Some background: updating the DRM framebuffer console on sparc64 makes the
kernel panic. This is because the framebuffer memory cannot be accessed with
system-memory instructions. We currently employ a workaround in DRM to
address this specific problem. [1]

To resolve the problem, we'd like to address it at the most common point,
which is the dma-buf framework. The dma-buf mapping ideally knows whether I/O
instructions are required and exports this information to its users. The
new structure struct dma_buf_map stores the buffer address and a flag that
signals I/O memory. Affected users of the buffer (e.g., drivers, frameworks)
can then access the memory accordingly.
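
For illustration, a minimal sketch of how a buffer user might act on the
flag; the field names follow struct dma_buf_map as introduced in patch 1,
while the helper itself is hypothetical and not part of this series:

	/* Hypothetical helper: copy out of a vmap'ed dma-buf while
	 * honoring the is_iomem flag of struct dma_buf_map.
	 */
	static void read_from_dma_buf_map(void *dst,
					  const struct dma_buf_map *map,
					  size_t len)
	{
		if (map->is_iomem)
			memcpy_fromio(dst, map->vaddr_iomem, len);
		else
			memcpy(dst, map->vaddr, len);
	}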

This patchset only introduces struct dma_buf_map and updates struct dma_buf
and its interfaces. Further patches can update dma-buf users. For example,
there's a prototype patchset for DRM that fixes the framebuffer problem. [2]

Further work: TTM, one of DRM's memory managers, already exports an
is_iomem flag of its own. It could later be switched over to exporting struct
dma_buf_map, thus simplifying some code. Several DRM drivers expect their
fbdev console to operate on I/O memory. These could possibly be switched over
to the generic fbdev emulation, as soon as the generic code uses struct
dma_buf_map.

[1] https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
[2] https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/

Thomas Zimmermann (3):
  dma-buf: Add struct dma-buf-map for storing struct dma_buf.vaddr_ptr
  dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
  dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces

 Documentation/driver-api/dma-buf.rst          |   3 +
 drivers/dma-buf/dma-buf.c                     |  40 +++---
 drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
 drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
 drivers/gpu/drm/drm_prime.c                   |  14 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
 drivers/gpu/drm/tegra/gem.c                   |  23 ++--
 .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
 .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
 .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
 include/drm/drm_prime.h                       |   5 +-
 include/linux/dma-buf-map.h                   | 126 ++++++++++++++++++
 include/linux/dma-buf.h                       |  11 +-
 15 files changed, 274 insertions(+), 82 deletions(-)
 create mode 100644 include/linux/dma-buf-map.h

--
2.28.0


* [PATCH 1/3] dma-buf: Add struct dma-buf-map for storing struct dma_buf.vaddr_ptr
@ 2020-09-14 11:25   ` Thomas Zimmermann
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-14 11:25 UTC (permalink / raw)
  To: sumit.semwal, christian.koenig, daniel, airlied, sam,
	mark.cave-ayland, kraxel, davem, maarten.lankhorst, mripard,
	l.stach, linux+etnaviv, christian.gmeiner, jani.nikula,
	joonas.lahtinen, rodrigo.vivi, thierry.reding, jonathanh, pawel,
	m.szyprowski, kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom
  Cc: linux-media, dri-devel, linaro-mm-sig, etnaviv, intel-gfx,
	linux-tegra, sparclinux, Thomas Zimmermann

The new type struct dma_buf_map represents a mapping of dma-buf memory
into kernel space. It contains a flag, is_iomem, that signals users to
access the mapped memory with I/O operations instead of regular loads
and stores.

It was assumed that DMA buffer memory can be accessed with regular load
and store operations. Some architectures, such as sparc64, require the
use of I/O operations to access dma-buf buffers that are located in I/O
memory. Providing struct dma_buf_map allows drivers to implement this.
This was specifically a problem when refreshing the graphics framebuffer
on such systems. [Link 1]

As the first step, struct dma_buf stores an instance of struct dma_buf_map
internally. Afterwards, dma-buf's vmap and vunmap interfaces will be
converted. Finally, affected drivers can be fixed.
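
A rough usage sketch of the new helpers (not part of this patch; the
do_access() call is a placeholder):

	struct dma_buf_map map = { .vaddr = ptr, .is_iomem = false };

	if (dma_buf_map_is_set(&map))	/* true: the address is non-NULL */
		do_access(map.vaddr);

	dma_buf_map_clear(&map);	/* back to the NULL default */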

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
---
 Documentation/driver-api/dma-buf.rst |  3 +
 drivers/dma-buf/dma-buf.c            | 14 ++---
 include/linux/dma-buf-map.h          | 87 ++++++++++++++++++++++++++++
 include/linux/dma-buf.h              |  3 +-
 4 files changed, 99 insertions(+), 8 deletions(-)
 create mode 100644 include/linux/dma-buf-map.h

diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 13ea0cc0a3fa..3244c600a9a1 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -115,6 +115,9 @@ Kernel Functions and Structures Reference
 .. kernel-doc:: include/linux/dma-buf.h
    :internal:
 
+.. kernel-doc:: include/linux/dma-buf-map.h
+   :internal:
+
 Reservation Objects
 -------------------
 
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 58564d82a3a2..5e849ca241a0 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1207,12 +1207,12 @@ void *dma_buf_vmap(struct dma_buf *dmabuf)
 	mutex_lock(&dmabuf->lock);
 	if (dmabuf->vmapping_counter) {
 		dmabuf->vmapping_counter++;
-		BUG_ON(!dmabuf->vmap_ptr);
-		ptr = dmabuf->vmap_ptr;
+		BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
+		ptr = dmabuf->vmap_ptr.vaddr;
 		goto out_unlock;
 	}
 
-	BUG_ON(dmabuf->vmap_ptr);
+	BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));
 
 	ptr = dmabuf->ops->vmap(dmabuf);
 	if (WARN_ON_ONCE(IS_ERR(ptr)))
@@ -1220,7 +1220,7 @@ void *dma_buf_vmap(struct dma_buf *dmabuf)
 	if (!ptr)
 		goto out_unlock;
 
-	dmabuf->vmap_ptr = ptr;
+	dmabuf->vmap_ptr.vaddr = ptr;
 	dmabuf->vmapping_counter = 1;
 
 out_unlock:
@@ -1239,15 +1239,15 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
 	if (WARN_ON(!dmabuf))
 		return;
 
-	BUG_ON(!dmabuf->vmap_ptr);
+	BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
 	BUG_ON(dmabuf->vmapping_counter == 0);
-	BUG_ON(dmabuf->vmap_ptr != vaddr);
+	BUG_ON(!dma_buf_map_is_vaddr(&dmabuf->vmap_ptr, vaddr));
 
 	mutex_lock(&dmabuf->lock);
 	if (--dmabuf->vmapping_counter == 0) {
 		if (dmabuf->ops->vunmap)
 			dmabuf->ops->vunmap(dmabuf, vaddr);
-		dmabuf->vmap_ptr = NULL;
+		dma_buf_map_clear(&dmabuf->vmap_ptr);
 	}
 	mutex_unlock(&dmabuf->lock);
 }
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
new file mode 100644
index 000000000000..d4b1bb3cc4b0
--- /dev/null
+++ b/include/linux/dma-buf-map.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Pointer to dma-buf-mapped memory, plus helpers.
+ */
+
+#ifndef __DMA_BUF_MAP_H__
+#define __DMA_BUF_MAP_H__
+
+#include <linux/io.h>
+
+/**
+ * struct dma_buf_map - Pointer to vmap'ed dma-buf memory.
+ * @vaddr_iomem:	The buffer's address if in I/O memory
+ * @vaddr:		The buffer's address if in system memory
+ * @is_iomem:		True if the dma-buf memory is located in I/O
+ *			memory, or false otherwise.
+ *
+ * Calling dma-buf's vmap operation returns a pointer to the buffer.
+ * Depending on the location of the buffer, users may have to access it
+ * with I/O operations or memory load/store operations. struct dma_buf_map
+ * stores the buffer address and a flag that signals the required access.
+ */
+struct dma_buf_map {
+	union {
+		void __iomem *vaddr_iomem;
+		void *vaddr;
+	};
+	bool is_iomem;
+};
+
+/* API transition helper */
+static inline bool dma_buf_map_is_vaddr(const struct dma_buf_map *map, const void *vaddr)
+{
+	return !map->is_iomem && (map->vaddr == vaddr);
+}
+
+/**
+ * dma_buf_map_is_null - Tests whether a dma-buf mapping is NULL
+ * @map:	The dma-buf mapping structure
+ *
+ * Depending on the state of struct dma_buf_map.is_iomem, tests if the
+ * mapping is NULL.
+ *
+ * Returns:
+ * True if the mapping is NULL, or false otherwise.
+ */
+static inline bool dma_buf_map_is_null(const struct dma_buf_map *map)
+{
+	if (map->is_iomem)
+		return map->vaddr_iomem == NULL;
+	return map->vaddr == NULL;
+}
+
+/**
+ * dma_buf_map_is_set - Tests if the dma-buf mapping has been set
+ * @map:	The dma-buf mapping structure
+ *
+ * Depending on the state of struct dma_buf_map.is_iomem, tests if the
+ * mapping has been set.
+ *
+ * Returns:
+ * True if the mapping has been set, or false otherwise.
+ */
+static inline bool dma_buf_map_is_set(const struct dma_buf_map *map)
+{
+	return !dma_buf_map_is_null(map);
+}
+
+/**
+ * dma_buf_map_clear - Clears a dma-buf mapping structure
+ * @map:	The dma-buf mapping structure
+ *
+ * Clears all fields to zero, including struct dma_buf_map.is_iomem, so
+ * mapping structures that pointed to I/O memory are reset to system
+ * memory. Pointers are cleared to NULL. This is the default.
+ */
+static inline void dma_buf_map_clear(struct dma_buf_map *map)
+{
+	if (map->is_iomem) {
+		map->vaddr_iomem = NULL;
+		map->is_iomem = false;
+	} else {
+		map->vaddr = NULL;
+	}
+}
+
+#endif /* __DMA_BUF_MAP_H__ */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 957b398d30e5..fcc2ddfb6d18 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -13,6 +13,7 @@
 #ifndef __DMA_BUF_H__
 #define __DMA_BUF_H__
 
+#include <linux/dma-buf-map.h>
 #include <linux/file.h>
 #include <linux/err.h>
 #include <linux/scatterlist.h>
@@ -309,7 +310,7 @@ struct dma_buf {
 	const struct dma_buf_ops *ops;
 	struct mutex lock;
 	unsigned vmapping_counter;
-	void *vmap_ptr;
+	struct dma_buf_map vmap_ptr;
 	const char *exp_name;
 	const char *name;
 	spinlock_t name_lock;
-- 
2.28.0


* [PATCH 2/3] dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
@ 2020-09-14 11:25   ` Thomas Zimmermann
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-14 11:25 UTC (permalink / raw)
  To: sumit.semwal, christian.koenig, daniel, airlied, sam,
	mark.cave-ayland, kraxel, davem, maarten.lankhorst, mripard,
	l.stach, linux+etnaviv, christian.gmeiner, jani.nikula,
	joonas.lahtinen, rodrigo.vivi, thierry.reding, jonathanh, pawel,
	m.szyprowski, kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom
  Cc: linux-media, dri-devel, linaro-mm-sig, etnaviv, intel-gfx,
	linux-tegra, sparclinux, Thomas Zimmermann

This patch updates dma_buf_vmap() and dma-buf's vmap callback to use
struct dma_buf_map.

The interfaces used to return the buffer address. This address is now
stored in an instance of the structure, which is passed as an additional
argument. The functions return an errno code on errors.

Users of the functions are updated accordingly. This is only an interface
change. It is currently expected that dma-buf memory can be accessed with
system-memory load/store operations.
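
Schematically, callers convert as follows (a sketch; error handling
abbreviated):

	/* before */
	void *vaddr = dma_buf_vmap(dmabuf);
	if (!vaddr)
		return -ENOMEM;

	/* after */
	struct dma_buf_map map;
	int ret = dma_buf_vmap(dmabuf, &map);
	if (ret)
		return ret;
	/* use map.vaddr; system memory is currently assumed */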

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/dma-buf/dma-buf.c                     | 26 ++++++++++---------
 drivers/gpu/drm/drm_gem_cma_helper.c          | 13 +++++-----
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 14 ++++++----
 drivers/gpu/drm/drm_prime.c                   |  8 +++---
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  8 +++++-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    | 11 ++++++--
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  | 12 ++++++---
 drivers/gpu/drm/tegra/gem.c                   | 18 ++++++++-----
 .../common/videobuf2/videobuf2-dma-contig.c   | 14 +++++++---
 .../media/common/videobuf2/videobuf2-dma-sg.c | 16 ++++++++----
 .../common/videobuf2/videobuf2-vmalloc.c      | 15 ++++++++---
 include/drm/drm_prime.h                       |  3 ++-
 include/linux/dma-buf-map.h                   | 13 ++++++++++
 include/linux/dma-buf.h                       |  6 ++---
 14 files changed, 122 insertions(+), 55 deletions(-)
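
The hunks below rely on a new helper, dma_buf_map_set_vaddr(); its hunk in
include/linux/dma-buf-map.h falls outside this excerpt, but it presumably
amounts to:

	static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map,
						 void *vaddr)
	{
		map->vaddr = vaddr;
		map->is_iomem = false;
	}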

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 5e849ca241a0..c99e3577aa2f 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1186,46 +1186,48 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
  * dma_buf_vmap - Create virtual mapping for the buffer object into kernel
  * address space. Same restrictions as for vmap and friends apply.
  * @dmabuf:	[in]	buffer to vmap
+ * @map:	[out]	returns the vmap pointer
  *
  * This call may fail due to lack of virtual mapping address space.
  * These calls are optional in drivers. The intended use for them
  * is for mapping objects linear in kernel space for high use objects.
  * Please attempt to use kmap/kunmap before thinking about these interfaces.
  *
- * Returns NULL on error.
+ * Returns 0 on success, or a negative errno code otherwise.
  */
-void *dma_buf_vmap(struct dma_buf *dmabuf)
+int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
 {
-	void *ptr;
+	struct dma_buf_map ptr;
+	int ret = 0;
 
 	if (WARN_ON(!dmabuf))
-		return NULL;
+		return -EINVAL;
 
 	if (!dmabuf->ops->vmap)
-		return NULL;
+		return -EINVAL;
 
 	mutex_lock(&dmabuf->lock);
 	if (dmabuf->vmapping_counter) {
 		dmabuf->vmapping_counter++;
 		BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
-		ptr = dmabuf->vmap_ptr.vaddr;
+		*map = dmabuf->vmap_ptr;
 		goto out_unlock;
 	}
 
 	BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));
 
-	ptr = dmabuf->ops->vmap(dmabuf);
-	if (WARN_ON_ONCE(IS_ERR(ptr)))
-		ptr = NULL;
-	if (!ptr)
+	ret = dmabuf->ops->vmap(dmabuf, &ptr);
+	if (WARN_ON_ONCE(ret))
 		goto out_unlock;
 
-	dmabuf->vmap_ptr.vaddr = ptr;
+	dmabuf->vmap_ptr = ptr;
 	dmabuf->vmapping_counter = 1;
 
+	*map = dmabuf->vmap_ptr;
+
 out_unlock:
 	mutex_unlock(&dmabuf->lock);
-	return ptr;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dma_buf_vmap);
 
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 822edeadbab3..062315c25c12 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -634,22 +634,23 @@ drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *dev,
 {
 	struct drm_gem_cma_object *cma_obj;
 	struct drm_gem_object *obj;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
-	vaddr = dma_buf_vmap(attach->dmabuf);
-	if (!vaddr) {
+	ret = dma_buf_vmap(attach->dmabuf, &map);
+	if (ret) {
 		DRM_ERROR("Failed to vmap PRIME buffer\n");
-		return ERR_PTR(-ENOMEM);
+		return ERR_PTR(ret);
 	}
 
 	obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
 	if (IS_ERR(obj)) {
-		dma_buf_vunmap(attach->dmabuf, vaddr);
+		dma_buf_vunmap(attach->dmabuf, map.vaddr);
 		return obj;
 	}
 
 	cma_obj = to_drm_gem_cma_obj(obj);
-	cma_obj->vaddr = vaddr;
+	cma_obj->vaddr = map.vaddr;
 
 	return obj;
 }
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 0a952f27c184..ad10a57cfece 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -261,13 +261,16 @@ EXPORT_SYMBOL(drm_gem_shmem_unpin);
 static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	int ret;
+	struct dma_buf_map map;
+	int ret = 0;
 
 	if (shmem->vmap_use_count++ > 0)
 		return shmem->vaddr;
 
 	if (obj->import_attach) {
-		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
+		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
+		if (!ret)
+			shmem->vaddr = map.vaddr;
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
@@ -279,11 +282,12 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 			prot = pgprot_writecombine(prot);
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
 				    VM_MAP, prot);
+		if (!shmem->vaddr)
+			ret = -ENOMEM;
 	}
 
-	if (!shmem->vaddr) {
-		DRM_DEBUG_KMS("Failed to vmap pages\n");
-		ret = -ENOMEM;
+	if (ret) {
+		DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret);
 		goto err_put_pages;
 	}
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 8a6a3c99b7d8..1b7d86c7842d 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -668,16 +668,18 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
  *
- * Returns the kernel virtual address or NULL on failure.
+ * Returns 0 on success, or a negative errno code otherwise.
  */
-void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
+int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 	void *vaddr;
 
 	vaddr = drm_gem_vmap(obj);
 	if (IS_ERR(vaddr))
-		vaddr = NULL;
+		return PTR_ERR(vaddr);
 
-	return vaddr;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 4aa3426a9ba4..80a9fc143bbb 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -85,9 +85,15 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
 
 static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
 {
+	struct dma_buf_map map;
+	int ret;
+
 	lockdep_assert_held(&etnaviv_obj->lock);
 
-	return dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf);
+	ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map);
+	if (ret)
+		return NULL;
+	return map.vaddr;
 }
 
 static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 27fddc22a7c6..77b363d3000b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -82,11 +82,18 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
 	i915_gem_object_unpin_pages(obj);
 }
 
-static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
+static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
+	void *vaddr;
 
-	return i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index 2a52b92586b9..f79ebc5329b7 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -82,6 +82,7 @@ static int igt_dmabuf_import(void *arg)
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
 	void *obj_map, *dma_map;
+	struct dma_buf_map map;
 	u32 pattern[] = { 0, 0xaa, 0xcc, 0x55, 0xff };
 	int err, i;
 
@@ -110,7 +111,8 @@ static int igt_dmabuf_import(void *arg)
 		goto out_obj;
 	}
 
-	dma_map = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	dma_map = err ? NULL : map.vaddr;
 	if (!dma_map) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
@@ -163,6 +165,7 @@ static int igt_dmabuf_import_ownership(void *arg)
 	struct drm_i915_private *i915 = arg;
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
+	struct dma_buf_map map;
 	void *ptr;
 	int err;
 
@@ -170,7 +173,8 @@ static int igt_dmabuf_import_ownership(void *arg)
 	if (IS_ERR(dmabuf))
 		return PTR_ERR(dmabuf);
 
-	ptr = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	ptr = err ? NULL : map.vaddr;
 	if (!ptr) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
@@ -212,6 +216,7 @@ static int igt_dmabuf_export_vmap(void *arg)
 	struct drm_i915_private *i915 = arg;
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
+	struct dma_buf_map map;
 	void *ptr;
 	int err;
 
@@ -228,7 +233,8 @@ static int igt_dmabuf_export_vmap(void *arg)
 	}
 	i915_gem_object_put(obj);
 
-	ptr = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	ptr = err ? NULL : map.vaddr;
 	if (!ptr) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 47e2935b8c68..81663036c701 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -132,14 +132,18 @@ static void tegra_bo_unpin(struct device *dev, struct sg_table *sgt)
 static void *tegra_bo_mmap(struct host1x_bo *bo)
 {
 	struct tegra_bo *obj = host1x_to_tegra_bo(bo);
+	struct dma_buf_map map;
+	int ret;
 
-	if (obj->vaddr)
+	if (obj->vaddr) {
 		return obj->vaddr;
-	else if (obj->gem.import_attach)
-		return dma_buf_vmap(obj->gem.import_attach->dmabuf);
-	else
+	} else if (obj->gem.import_attach) {
+		ret = dma_buf_vmap(obj->gem.import_attach->dmabuf, &map);
+		return ret ? NULL : map.vaddr;
+	} else {
 		return vmap(obj->pages, obj->num_pages, VM_MAP,
 			    pgprot_writecombine(PAGE_KERNEL));
+	}
 }
 
 static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
@@ -641,12 +645,14 @@ static int tegra_gem_prime_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
 	return __tegra_gem_mmap(gem, vma);
 }
 
-static void *tegra_gem_prime_vmap(struct dma_buf *buf)
+static int tegra_gem_prime_vmap(struct dma_buf *buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *gem = buf->priv;
 	struct tegra_bo *bo = to_tegra_bo(gem);
 
-	return bo->vaddr;
+	dma_buf_map_set_vaddr(map, bo->vaddr);
+
+	return 0;
 }
 
 static void tegra_gem_prime_vunmap(struct dma_buf *buf, void *vaddr)
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index ec3446cc45b8..11428287bdf3 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -81,9 +81,13 @@ static void *vb2_dc_cookie(void *buf_priv)
 static void *vb2_dc_vaddr(void *buf_priv)
 {
 	struct vb2_dc_buf *buf = buf_priv;
+	struct dma_buf_map map;
+	int ret;
 
-	if (!buf->vaddr && buf->db_attach)
-		buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
+	if (!buf->vaddr && buf->db_attach) {
+		ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
+		buf->vaddr = ret ? NULL : map.vaddr;
+	}
 
 	return buf->vaddr;
 }
@@ -365,11 +369,13 @@ vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
 	return 0;
 }
 
-static void *vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_dc_buf *buf = dbuf->priv;
 
-	return buf->vaddr;
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index 0a40e00f0d7e..c51170e9c1b9 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -300,14 +300,18 @@ static void vb2_dma_sg_put_userptr(void *buf_priv)
 static void *vb2_dma_sg_vaddr(void *buf_priv)
 {
 	struct vb2_dma_sg_buf *buf = buf_priv;
+	struct dma_buf_map map;
+	int ret;
 
 	BUG_ON(!buf);
 
 	if (!buf->vaddr) {
-		if (buf->db_attach)
-			buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
-		else
+		if (buf->db_attach) {
+			ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
+			buf->vaddr = ret ? NULL : map.vaddr;
+		} else {
 			buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1);
+		}
 	}
 
 	/* add offset in case userptr is not page-aligned */
@@ -489,11 +493,13 @@ vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
 	return 0;
 }
 
-static void *vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_dma_sg_buf *buf = dbuf->priv;
 
-	return vb2_dma_sg_vaddr(buf);
+	dma_buf_map_set_vaddr(map, vb2_dma_sg_vaddr(buf));
+
+	return 0;
 }
 
 static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf,
diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index c66fda4a65e4..7b68e2379c65 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -318,11 +318,13 @@ static void vb2_vmalloc_dmabuf_ops_release(struct dma_buf *dbuf)
 	vb2_vmalloc_put(dbuf->priv);
 }
 
-static void *vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_vmalloc_buf *buf = dbuf->priv;
 
-	return buf->vaddr;
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf,
@@ -374,10 +376,15 @@ static struct dma_buf *vb2_vmalloc_get_dmabuf(void *buf_priv, unsigned long flag
 static int vb2_vmalloc_map_dmabuf(void *mem_priv)
 {
 	struct vb2_vmalloc_buf *buf = mem_priv;
+	struct dma_buf_map map;
+	int ret;
 
-	buf->vaddr = dma_buf_vmap(buf->dbuf);
+	ret = dma_buf_vmap(buf->dbuf, &map);
+	if (ret)
+		return -EFAULT;
+	buf->vaddr = map.vaddr;
 
-	return buf->vaddr ? 0 : -EFAULT;
+	return 0;
 }
 
 static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index bf141e74a1c2..3ee22639ff77 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -54,6 +54,7 @@ struct device;
 struct dma_buf_export_info;
 struct dma_buf;
 struct dma_buf_attachment;
+struct dma_buf_map;
 
 enum dma_data_direction;
 
@@ -82,7 +83,7 @@ struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
 void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
 			   struct sg_table *sgt,
 			   enum dma_data_direction dir);
-void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf);
+int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map);
 void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
 
 int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index d4b1bb3cc4b0..6b4f6e0e8b5d 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -28,6 +28,19 @@ struct dma_buf_map {
 	bool is_iomem;
 };
 
+/**
+ * dma_buf_map_set_vaddr - Sets a dma-buf mapping structure to an address in system memory
+ * @map:	The dma-buf mapping structure
+ * @vaddr:	A system-memory address
+ *
+ * Sets the address and clears the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
+{
+	map->vaddr = vaddr;
+	map->is_iomem = false;
+}
+
 /* API transition helper */
 static inline bool dma_buf_map_is_vaddr(const struct dma_buf_map *map, const void *vaddr)
 {
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index fcc2ddfb6d18..7237997cfa38 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -266,7 +266,7 @@ struct dma_buf_ops {
 	 */
 	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
 
-	void *(*vmap)(struct dma_buf *);
+	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
 	void (*vunmap)(struct dma_buf *, void *vaddr);
 };
 
@@ -503,6 +503,6 @@ int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
 
 int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
 		 unsigned long);
-void *dma_buf_vmap(struct dma_buf *);
-void dma_buf_vunmap(struct dma_buf *, void *vaddr);
+int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
+void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr);
 #endif /* __DMA_BUF_H__ */
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread
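
For illustration, a minimal importer-side sketch of the calling
convention the patch above introduces. The helper name is hypothetical;
the is_iomem branch only anticipates later patches in the series, since
this patch still expects dma-buf memory to be accessible as system
memory, and dma_buf_vunmap() still takes a plain pointer until patch 3:

	#include <linux/dma-buf.h>
	#include <linux/dma-buf-map.h>
	#include <linux/io.h>
	#include <linux/string.h>

	/* Hypothetical helper: copy the first @len bytes out of a dma-buf. */
	static int example_read_dmabuf(struct dma_buf *dmabuf, void *dst, size_t len)
	{
		struct dma_buf_map map;
		int ret;

		/* Returns 0 on success, a negative errno code otherwise. */
		ret = dma_buf_vmap(dmabuf, &map);
		if (ret)
			return ret;

		if (map.is_iomem)	/* e.g., framebuffer in I/O memory on sparc64 */
			memcpy_fromio(dst, (void __iomem *)map.vaddr, len);
		else			/* plain system memory */
			memcpy(dst, map.vaddr, len);

		dma_buf_vunmap(dmabuf, map.vaddr);

		return 0;
	}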

* [PATCH 2/3] dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
@ 2020-09-14 11:25   ` Thomas Zimmermann
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-14 11:25 UTC (permalink / raw)
  To: sumit.semwal, christian.koenig, daniel, airlied, sam,
	mark.cave-ayland, kraxel, davem, maarten.lankhorst, mripard,
	l.stach, linux+etnaviv, christian.gmeiner, jani.nikula,
	joonas.lahtinen, rodrigo.vivi, thierry.reding, jonathanh, pawel,
	m.szyprowski, kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom
  Cc: intel-gfx, etnaviv, dri-devel, linaro-mm-sig, sparclinux,
	Thomas Zimmermann, linux-tegra, linux-media

This patch updates dma_buf_vmap() and dma-buf's vmap callback to use
struct dma_buf_map.

The interfaces used to return a buffer address. This address now gets
stored in an instance of the structure that is given as an additional
argument. The functions return an errno code on errors.

Users of the functions are updated accordingly. This is only an interface
change. It is currently expected that dma-buf memory can be accessed with
system memory load/store operations.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/dma-buf/dma-buf.c                     | 26 ++++++++++---------
 drivers/gpu/drm/drm_gem_cma_helper.c          | 13 +++++-----
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 14 ++++++----
 drivers/gpu/drm/drm_prime.c                   |  8 +++---
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  8 +++++-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    | 11 ++++++--
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  | 12 ++++++---
 drivers/gpu/drm/tegra/gem.c                   | 18 ++++++++-----
 .../common/videobuf2/videobuf2-dma-contig.c   | 14 +++++++---
 .../media/common/videobuf2/videobuf2-dma-sg.c | 16 ++++++++----
 .../common/videobuf2/videobuf2-vmalloc.c      | 15 ++++++++---
 include/drm/drm_prime.h                       |  3 ++-
 include/linux/dma-buf-map.h                   | 13 ++++++++++
 include/linux/dma-buf.h                       |  6 ++---
 14 files changed, 122 insertions(+), 55 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 5e849ca241a0..c99e3577aa2f 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1186,46 +1186,48 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
  * dma_buf_vmap - Create virtual mapping for the buffer object into kernel
  * address space. Same restrictions as for vmap and friends apply.
  * @dmabuf:	[in]	buffer to vmap
+ * @map:	[out]	returns the vmap pointer
  *
  * This call may fail due to lack of virtual mapping address space.
  * These calls are optional in drivers. The intended use for them
  * is for mapping objects linear in kernel space for high use objects.
  * Please attempt to use kmap/kunmap before thinking about these interfaces.
  *
- * Returns NULL on error.
+ * Returns 0 on success, or a negative errno code otherwise.
  */
-void *dma_buf_vmap(struct dma_buf *dmabuf)
+int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
 {
-	void *ptr;
+	struct dma_buf_map ptr;
+	int ret = 0;
 
 	if (WARN_ON(!dmabuf))
-		return NULL;
+		return -EINVAL;
 
 	if (!dmabuf->ops->vmap)
-		return NULL;
+		return -EINVAL;
 
 	mutex_lock(&dmabuf->lock);
 	if (dmabuf->vmapping_counter) {
 		dmabuf->vmapping_counter++;
 		BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
-		ptr = dmabuf->vmap_ptr.vaddr;
+		*map = dmabuf->vmap_ptr;
 		goto out_unlock;
 	}
 
 	BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));
 
-	ptr = dmabuf->ops->vmap(dmabuf);
-	if (WARN_ON_ONCE(IS_ERR(ptr)))
-		ptr = NULL;
-	if (!ptr)
+	ret = dmabuf->ops->vmap(dmabuf, &ptr);
+	if (WARN_ON_ONCE(ret))
 		goto out_unlock;
 
-	dmabuf->vmap_ptr.vaddr = ptr;
+	dmabuf->vmap_ptr = ptr;
 	dmabuf->vmapping_counter = 1;
 
+	*map = dmabuf->vmap_ptr;
+
 out_unlock:
 	mutex_unlock(&dmabuf->lock);
-	return ptr;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dma_buf_vmap);
 
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 822edeadbab3..062315c25c12 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -634,22 +634,23 @@ drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *dev,
 {
 	struct drm_gem_cma_object *cma_obj;
 	struct drm_gem_object *obj;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
-	vaddr = dma_buf_vmap(attach->dmabuf);
-	if (!vaddr) {
+	ret = dma_buf_vmap(attach->dmabuf, &map);
+	if (ret) {
 		DRM_ERROR("Failed to vmap PRIME buffer\n");
-		return ERR_PTR(-ENOMEM);
+		return ERR_PTR(ret);
 	}
 
 	obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
 	if (IS_ERR(obj)) {
-		dma_buf_vunmap(attach->dmabuf, vaddr);
+		dma_buf_vunmap(attach->dmabuf, map.vaddr);
 		return obj;
 	}
 
 	cma_obj = to_drm_gem_cma_obj(obj);
-	cma_obj->vaddr = vaddr;
+	cma_obj->vaddr = map.vaddr;
 
 	return obj;
 }
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 0a952f27c184..ad10a57cfece 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -261,13 +261,16 @@ EXPORT_SYMBOL(drm_gem_shmem_unpin);
 static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	int ret;
+	struct dma_buf_map map;
+	int ret = 0;
 
 	if (shmem->vmap_use_count++ > 0)
 		return shmem->vaddr;
 
 	if (obj->import_attach) {
-		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
+		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
+		if (!ret)
+			shmem->vaddr = map.vaddr;
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
@@ -279,11 +282,12 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 			prot = pgprot_writecombine(prot);
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
 				    VM_MAP, prot);
+		if (!shmem->vaddr)
+			ret = -ENOMEM;
 	}
 
-	if (!shmem->vaddr) {
-		DRM_DEBUG_KMS("Failed to vmap pages\n");
-		ret = -ENOMEM;
+	if (ret) {
+		DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret);
 		goto err_put_pages;
 	}
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 8a6a3c99b7d8..1b7d86c7842d 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -668,16 +668,18 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
  *
  * Returns the kernel virtual address or NULL on failure.
  */
-void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
+int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 	void *vaddr;
 
 	vaddr = drm_gem_vmap(obj);
 	if (IS_ERR(vaddr))
-		vaddr = NULL;
+		return PTR_ERR(vaddr);
 
-	return vaddr;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 4aa3426a9ba4..80a9fc143bbb 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -85,9 +85,15 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
 
 static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
 {
+	struct dma_buf_map map;
+	int ret;
+
 	lockdep_assert_held(&etnaviv_obj->lock);
 
-	return dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf);
+	ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map);
+	if (ret)
+		return NULL;
+	return map.vaddr;
 }
 
 static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 27fddc22a7c6..77b363d3000b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -82,11 +82,18 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
 	i915_gem_object_unpin_pages(obj);
 }
 
-static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
+static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
+	void *vaddr;
 
-	return i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index 2a52b92586b9..f79ebc5329b7 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -82,6 +82,7 @@ static int igt_dmabuf_import(void *arg)
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
 	void *obj_map, *dma_map;
+	struct dma_buf_map map;
 	u32 pattern[] = { 0, 0xaa, 0xcc, 0x55, 0xff };
 	int err, i;
 
@@ -110,7 +111,8 @@ static int igt_dmabuf_import(void *arg)
 		goto out_obj;
 	}
 
-	dma_map = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	dma_map = err ? NULL : map.vaddr;
 	if (!dma_map) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
@@ -163,6 +165,7 @@ static int igt_dmabuf_import_ownership(void *arg)
 	struct drm_i915_private *i915 = arg;
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
+	struct dma_buf_map map;
 	void *ptr;
 	int err;
 
@@ -170,7 +173,8 @@ static int igt_dmabuf_import_ownership(void *arg)
 	if (IS_ERR(dmabuf))
 		return PTR_ERR(dmabuf);
 
-	ptr = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	ptr = err ? NULL : map.vaddr;
 	if (!ptr) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
@@ -212,6 +216,7 @@ static int igt_dmabuf_export_vmap(void *arg)
 	struct drm_i915_private *i915 = arg;
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
+	struct dma_buf_map map;
 	void *ptr;
 	int err;
 
@@ -228,7 +233,8 @@ static int igt_dmabuf_export_vmap(void *arg)
 	}
 	i915_gem_object_put(obj);
 
-	ptr = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	ptr = err ? NULL : map.vaddr;
 	if (!ptr) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 47e2935b8c68..81663036c701 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -132,14 +132,18 @@ static void tegra_bo_unpin(struct device *dev, struct sg_table *sgt)
 static void *tegra_bo_mmap(struct host1x_bo *bo)
 {
 	struct tegra_bo *obj = host1x_to_tegra_bo(bo);
+	struct dma_buf_map map;
+	int ret;
 
-	if (obj->vaddr)
+	if (obj->vaddr) {
 		return obj->vaddr;
-	else if (obj->gem.import_attach)
-		return dma_buf_vmap(obj->gem.import_attach->dmabuf);
-	else
+	} else if (obj->gem.import_attach) {
+		ret = dma_buf_vmap(obj->gem.import_attach->dmabuf, &map);
+		return ret ? NULL : map.vaddr;
+	} else {
 		return vmap(obj->pages, obj->num_pages, VM_MAP,
 			    pgprot_writecombine(PAGE_KERNEL));
+	}
 }
 
 static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
@@ -641,12 +645,14 @@ static int tegra_gem_prime_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
 	return __tegra_gem_mmap(gem, vma);
 }
 
-static void *tegra_gem_prime_vmap(struct dma_buf *buf)
+static int tegra_gem_prime_vmap(struct dma_buf *buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *gem = buf->priv;
 	struct tegra_bo *bo = to_tegra_bo(gem);
 
-	return bo->vaddr;
+	dma_buf_map_set_vaddr(map, bo->vaddr);
+
+	return 0;
 }
 
 static void tegra_gem_prime_vunmap(struct dma_buf *buf, void *vaddr)
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index ec3446cc45b8..11428287bdf3 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -81,9 +81,13 @@ static void *vb2_dc_cookie(void *buf_priv)
 static void *vb2_dc_vaddr(void *buf_priv)
 {
 	struct vb2_dc_buf *buf = buf_priv;
+	struct dma_buf_map map;
+	int ret;
 
-	if (!buf->vaddr && buf->db_attach)
-		buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
+	if (!buf->vaddr && buf->db_attach) {
+		ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
+		buf->vaddr = ret ? NULL : map.vaddr;
+	}
 
 	return buf->vaddr;
 }
@@ -365,11 +369,13 @@ vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
 	return 0;
 }
 
-static void *vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_dc_buf *buf = dbuf->priv;
 
-	return buf->vaddr;
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index 0a40e00f0d7e..c51170e9c1b9 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -300,14 +300,18 @@ static void vb2_dma_sg_put_userptr(void *buf_priv)
 static void *vb2_dma_sg_vaddr(void *buf_priv)
 {
 	struct vb2_dma_sg_buf *buf = buf_priv;
+	struct dma_buf_map map;
+	int ret;
 
 	BUG_ON(!buf);
 
 	if (!buf->vaddr) {
-		if (buf->db_attach)
-			buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
-		else
+		if (buf->db_attach) {
+			ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
+			buf->vaddr = ret ? NULL : map.vaddr;
+		} else {
 			buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1);
+		}
 	}
 
 	/* add offset in case userptr is not page-aligned */
@@ -489,11 +493,13 @@ vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
 	return 0;
 }
 
-static void *vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_dma_sg_buf *buf = dbuf->priv;
 
-	return vb2_dma_sg_vaddr(buf);
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf,
diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index c66fda4a65e4..7b68e2379c65 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -318,11 +318,13 @@ static void vb2_vmalloc_dmabuf_ops_release(struct dma_buf *dbuf)
 	vb2_vmalloc_put(dbuf->priv);
 }
 
-static void *vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_vmalloc_buf *buf = dbuf->priv;
 
-	return buf->vaddr;
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf,
@@ -374,10 +376,15 @@ static struct dma_buf *vb2_vmalloc_get_dmabuf(void *buf_priv, unsigned long flag
 static int vb2_vmalloc_map_dmabuf(void *mem_priv)
 {
 	struct vb2_vmalloc_buf *buf = mem_priv;
+	struct dma_buf_map map;
+	int ret;
 
-	buf->vaddr = dma_buf_vmap(buf->dbuf);
+	ret = dma_buf_vmap(buf->dbuf, &map);
+	if (ret)
+		return -EFAULT;
+	buf->vaddr = map.vaddr;
 
-	return buf->vaddr ? 0 : -EFAULT;
+	return 0;
 }
 
 static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index bf141e74a1c2..3ee22639ff77 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -54,6 +54,7 @@ struct device;
 struct dma_buf_export_info;
 struct dma_buf;
 struct dma_buf_attachment;
+struct dma_buf_map;
 
 enum dma_data_direction;
 
@@ -82,7 +83,7 @@ struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
 void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
 			   struct sg_table *sgt,
 			   enum dma_data_direction dir);
-void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf);
+int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map);
 void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
 
 int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index d4b1bb3cc4b0..6b4f6e0e8b5d 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -28,6 +28,19 @@ struct dma_buf_map {
 	bool is_iomem;
 };
 
+/**
+ * dma_buf_map_set_vaddr - Sets a dma-buf mapping structure to an address in system memory
+ * @map:	The dma-buf mapping structure
+ * @vaddr:	A system-memory address
+ *
+ * Sets the address and clears the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
+{
+	map->vaddr = vaddr;
+	map->is_iomem = false;
+}
+
 /* API transition helper */
 static inline bool dma_buf_map_is_vaddr(const struct dma_buf_map *map, const void *vaddr)
 {
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index fcc2ddfb6d18..7237997cfa38 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -266,7 +266,7 @@ struct dma_buf_ops {
 	 */
 	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
 
-	void *(*vmap)(struct dma_buf *);
+	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
 	void (*vunmap)(struct dma_buf *, void *vaddr);
 };
 
@@ -503,6 +503,6 @@ int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
 
 int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
 		 unsigned long);
-void *dma_buf_vmap(struct dma_buf *);
-void dma_buf_vunmap(struct dma_buf *, void *vaddr);
+int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
+void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr);
 #endif /* __DMA_BUF_H__ */
-- 
2.28.0

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 2/3] dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
@ 2020-09-14 11:25   ` Thomas Zimmermann
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-14 11:25 UTC (permalink / raw)
  To: sumit.semwal, christian.koenig, daniel, airlied, sam,
	mark.cave-ayland, kraxel, davem, maarten.lankhorst, mripard,
	l.stach, linux+etnaviv, christian.gmeiner, jani.nikula,
	joonas.lahtinen, rodrigo.vivi, thierry.reding, jonathanh, pawel,
	m.szyprowski, kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom
  Cc: intel-gfx, etnaviv, dri-devel, linaro-mm-sig, sparclinux,
	Thomas Zimmermann, linux-tegra, linux-media

This patch updates dma_buf_vmap() and dma-buf's vmap callback to use
struct dma_buf_map.

The interfaces used to return a buffer address. This address now gets
stored in an instance of the structure that is given as an additional
argument. The functions return an errno code on errors.

Users of the functions are updated accordingly. This is only an interface
change. It is currently expected that dma-buf memory can be accessed with
system memory load/store operations.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/dma-buf/dma-buf.c                     | 26 ++++++++++---------
 drivers/gpu/drm/drm_gem_cma_helper.c          | 13 +++++-----
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 14 ++++++----
 drivers/gpu/drm/drm_prime.c                   |  8 +++---
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  8 +++++-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    | 11 ++++++--
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  | 12 ++++++---
 drivers/gpu/drm/tegra/gem.c                   | 18 ++++++++-----
 .../common/videobuf2/videobuf2-dma-contig.c   | 14 +++++++---
 .../media/common/videobuf2/videobuf2-dma-sg.c | 16 ++++++++----
 .../common/videobuf2/videobuf2-vmalloc.c      | 15 ++++++++---
 include/drm/drm_prime.h                       |  3 ++-
 include/linux/dma-buf-map.h                   | 13 ++++++++++
 include/linux/dma-buf.h                       |  6 ++---
 14 files changed, 122 insertions(+), 55 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 5e849ca241a0..c99e3577aa2f 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1186,46 +1186,48 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
  * dma_buf_vmap - Create virtual mapping for the buffer object into kernel
  * address space. Same restrictions as for vmap and friends apply.
  * @dmabuf:	[in]	buffer to vmap
+ * @map:	[out]	returns the vmap pointer
  *
  * This call may fail due to lack of virtual mapping address space.
  * These calls are optional in drivers. The intended use for them
  * is for mapping objects linear in kernel space for high use objects.
  * Please attempt to use kmap/kunmap before thinking about these interfaces.
  *
- * Returns NULL on error.
+ * Returns 0 on success, or a negative errno code otherwise.
  */
-void *dma_buf_vmap(struct dma_buf *dmabuf)
+int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
 {
-	void *ptr;
+	struct dma_buf_map ptr;
+	int ret = 0;
 
 	if (WARN_ON(!dmabuf))
-		return NULL;
+		return -EINVAL;
 
 	if (!dmabuf->ops->vmap)
-		return NULL;
+		return -EINVAL;
 
 	mutex_lock(&dmabuf->lock);
 	if (dmabuf->vmapping_counter) {
 		dmabuf->vmapping_counter++;
 		BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
-		ptr = dmabuf->vmap_ptr.vaddr;
+		*map = dmabuf->vmap_ptr;
 		goto out_unlock;
 	}
 
 	BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));
 
-	ptr = dmabuf->ops->vmap(dmabuf);
-	if (WARN_ON_ONCE(IS_ERR(ptr)))
-		ptr = NULL;
-	if (!ptr)
+	ret = dmabuf->ops->vmap(dmabuf, &ptr);
+	if (WARN_ON_ONCE(ret))
 		goto out_unlock;
 
-	dmabuf->vmap_ptr.vaddr = ptr;
+	dmabuf->vmap_ptr = ptr;
 	dmabuf->vmapping_counter = 1;
 
+	*map = dmabuf->vmap_ptr;
+
 out_unlock:
 	mutex_unlock(&dmabuf->lock);
-	return ptr;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dma_buf_vmap);
 
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 822edeadbab3..062315c25c12 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -634,22 +634,23 @@ drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *dev,
 {
 	struct drm_gem_cma_object *cma_obj;
 	struct drm_gem_object *obj;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
-	vaddr = dma_buf_vmap(attach->dmabuf);
-	if (!vaddr) {
+	ret = dma_buf_vmap(attach->dmabuf, &map);
+	if (ret) {
 		DRM_ERROR("Failed to vmap PRIME buffer\n");
-		return ERR_PTR(-ENOMEM);
+		return ERR_PTR(ret);
 	}
 
 	obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
 	if (IS_ERR(obj)) {
-		dma_buf_vunmap(attach->dmabuf, vaddr);
+		dma_buf_vunmap(attach->dmabuf, map.vaddr);
 		return obj;
 	}
 
 	cma_obj = to_drm_gem_cma_obj(obj);
-	cma_obj->vaddr = vaddr;
+	cma_obj->vaddr = map.vaddr;
 
 	return obj;
 }
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 0a952f27c184..ad10a57cfece 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -261,13 +261,16 @@ EXPORT_SYMBOL(drm_gem_shmem_unpin);
 static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	int ret;
+	struct dma_buf_map map;
+	int ret = 0;
 
 	if (shmem->vmap_use_count++ > 0)
 		return shmem->vaddr;
 
 	if (obj->import_attach) {
-		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
+		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
+		if (!ret)
+			shmem->vaddr = map.vaddr;
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
@@ -279,11 +282,12 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 			prot = pgprot_writecombine(prot);
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
 				    VM_MAP, prot);
+		if (!shmem->vaddr)
+			ret = -ENOMEM;
 	}
 
-	if (!shmem->vaddr) {
-		DRM_DEBUG_KMS("Failed to vmap pages\n");
-		ret = -ENOMEM;
+	if (ret) {
+		DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret);
 		goto err_put_pages;
 	}
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 8a6a3c99b7d8..1b7d86c7842d 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -668,16 +668,18 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
  *
  * Returns the kernel virtual address or NULL on failure.
  */
-void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
+int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 	void *vaddr;
 
 	vaddr = drm_gem_vmap(obj);
 	if (IS_ERR(vaddr))
-		vaddr = NULL;
+		return PTR_ERR(vaddr);
 
-	return vaddr;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 4aa3426a9ba4..80a9fc143bbb 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -85,9 +85,15 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
 
 static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
 {
+	struct dma_buf_map map;
+	int ret;
+
 	lockdep_assert_held(&etnaviv_obj->lock);
 
-	return dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf);
+	ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map);
+	if (ret)
+		return NULL;
+	return map.vaddr;
 }
 
 static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 27fddc22a7c6..77b363d3000b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -82,11 +82,18 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
 	i915_gem_object_unpin_pages(obj);
 }
 
-static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
+static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
+	void *vaddr;
 
-	return i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index 2a52b92586b9..f79ebc5329b7 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -82,6 +82,7 @@ static int igt_dmabuf_import(void *arg)
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
 	void *obj_map, *dma_map;
+	struct dma_buf_map map;
 	u32 pattern[] = { 0, 0xaa, 0xcc, 0x55, 0xff };
 	int err, i;
 
@@ -110,7 +111,8 @@ static int igt_dmabuf_import(void *arg)
 		goto out_obj;
 	}
 
-	dma_map = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	dma_map = err ? NULL : map.vaddr;
 	if (!dma_map) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
@@ -163,6 +165,7 @@ static int igt_dmabuf_import_ownership(void *arg)
 	struct drm_i915_private *i915 = arg;
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
+	struct dma_buf_map map;
 	void *ptr;
 	int err;
 
@@ -170,7 +173,8 @@ static int igt_dmabuf_import_ownership(void *arg)
 	if (IS_ERR(dmabuf))
 		return PTR_ERR(dmabuf);
 
-	ptr = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	ptr = err ? NULL : map.vaddr;
 	if (!ptr) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
@@ -212,6 +216,7 @@ static int igt_dmabuf_export_vmap(void *arg)
 	struct drm_i915_private *i915 = arg;
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
+	struct dma_buf_map map;
 	void *ptr;
 	int err;
 
@@ -228,7 +233,8 @@ static int igt_dmabuf_export_vmap(void *arg)
 	}
 	i915_gem_object_put(obj);
 
-	ptr = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	ptr = err ? NULL : map.vaddr;
 	if (!ptr) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 47e2935b8c68..81663036c701 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -132,14 +132,18 @@ static void tegra_bo_unpin(struct device *dev, struct sg_table *sgt)
 static void *tegra_bo_mmap(struct host1x_bo *bo)
 {
 	struct tegra_bo *obj = host1x_to_tegra_bo(bo);
+	struct dma_buf_map map;
+	int ret;
 
-	if (obj->vaddr)
+	if (obj->vaddr) {
 		return obj->vaddr;
-	else if (obj->gem.import_attach)
-		return dma_buf_vmap(obj->gem.import_attach->dmabuf);
-	else
+	} else if (obj->gem.import_attach) {
+		ret = dma_buf_vmap(obj->gem.import_attach->dmabuf, &map);
+		return ret ? NULL : map.vaddr;
+	} else {
 		return vmap(obj->pages, obj->num_pages, VM_MAP,
 			    pgprot_writecombine(PAGE_KERNEL));
+	}
 }
 
 static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
@@ -641,12 +645,14 @@ static int tegra_gem_prime_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
 	return __tegra_gem_mmap(gem, vma);
 }
 
-static void *tegra_gem_prime_vmap(struct dma_buf *buf)
+static int tegra_gem_prime_vmap(struct dma_buf *buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *gem = buf->priv;
 	struct tegra_bo *bo = to_tegra_bo(gem);
 
-	return bo->vaddr;
+	dma_buf_map_set_vaddr(map, bo->vaddr);
+
+	return 0;
 }
 
 static void tegra_gem_prime_vunmap(struct dma_buf *buf, void *vaddr)
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index ec3446cc45b8..11428287bdf3 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -81,9 +81,13 @@ static void *vb2_dc_cookie(void *buf_priv)
 static void *vb2_dc_vaddr(void *buf_priv)
 {
 	struct vb2_dc_buf *buf = buf_priv;
+	struct dma_buf_map map;
+	int ret;
 
-	if (!buf->vaddr && buf->db_attach)
-		buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
+	if (!buf->vaddr && buf->db_attach) {
+		ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
+		buf->vaddr = ret ? NULL : map.vaddr;
+	}
 
 	return buf->vaddr;
 }
@@ -365,11 +369,13 @@ vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
 	return 0;
 }
 
-static void *vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_dc_buf *buf = dbuf->priv;
 
-	return buf->vaddr;
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index 0a40e00f0d7e..c51170e9c1b9 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -300,14 +300,18 @@ static void vb2_dma_sg_put_userptr(void *buf_priv)
 static void *vb2_dma_sg_vaddr(void *buf_priv)
 {
 	struct vb2_dma_sg_buf *buf = buf_priv;
+	struct dma_buf_map map;
+	int ret;
 
 	BUG_ON(!buf);
 
 	if (!buf->vaddr) {
-		if (buf->db_attach)
-			buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
-		else
+		if (buf->db_attach) {
+			ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
+			buf->vaddr = ret ? NULL : map.vaddr;
+		} else {
 			buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1);
+		}
 	}
 
 	/* add offset in case userptr is not page-aligned */
@@ -489,11 +493,13 @@ vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
 	return 0;
 }
 
-static void *vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_dma_sg_buf *buf = dbuf->priv;
 
-	return vb2_dma_sg_vaddr(buf);
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf,
diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index c66fda4a65e4..7b68e2379c65 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -318,11 +318,13 @@ static void vb2_vmalloc_dmabuf_ops_release(struct dma_buf *dbuf)
 	vb2_vmalloc_put(dbuf->priv);
 }
 
-static void *vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_vmalloc_buf *buf = dbuf->priv;
 
-	return buf->vaddr;
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf,
@@ -374,10 +376,15 @@ static struct dma_buf *vb2_vmalloc_get_dmabuf(void *buf_priv, unsigned long flag
 static int vb2_vmalloc_map_dmabuf(void *mem_priv)
 {
 	struct vb2_vmalloc_buf *buf = mem_priv;
+	struct dma_buf_map map;
+	int ret;
 
-	buf->vaddr = dma_buf_vmap(buf->dbuf);
+	ret = dma_buf_vmap(buf->dbuf, &map);
+	if (ret)
+		return -EFAULT;
+	buf->vaddr = map.vaddr;
 
-	return buf->vaddr ? 0 : -EFAULT;
+	return 0;
 }
 
 static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index bf141e74a1c2..3ee22639ff77 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -54,6 +54,7 @@ struct device;
 struct dma_buf_export_info;
 struct dma_buf;
 struct dma_buf_attachment;
+struct dma_buf_map;
 
 enum dma_data_direction;
 
@@ -82,7 +83,7 @@ struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
 void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
 			   struct sg_table *sgt,
 			   enum dma_data_direction dir);
-void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf);
+int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map);
 void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
 
 int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index d4b1bb3cc4b0..6b4f6e0e8b5d 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -28,6 +28,19 @@ struct dma_buf_map {
 	bool is_iomem;
 };
 
+/**
+ * dma_buf_map_set_vaddr - Sets a dma-buf mapping structure to an address in system memory
+ * @map:	The dma-buf mapping structure
+ * @vaddr:	A system-memory address
+ *
+ * Sets the address and clears the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
+{
+	map->vaddr = vaddr;
+	map->is_iomem = false;
+}
+
 /* API transition helper */
 static inline bool dma_buf_map_is_vaddr(const struct dma_buf_map *map, const void *vaddr)
 {
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index fcc2ddfb6d18..7237997cfa38 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -266,7 +266,7 @@ struct dma_buf_ops {
 	 */
 	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
 
-	void *(*vmap)(struct dma_buf *);
+	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
 	void (*vunmap)(struct dma_buf *, void *vaddr);
 };
 
@@ -503,6 +503,6 @@ int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
 
 int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
 		 unsigned long);
-void *dma_buf_vmap(struct dma_buf *);
-void dma_buf_vunmap(struct dma_buf *, void *vaddr);
+int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
+void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr);
 #endif /* __DMA_BUF_H__ */
-- 
2.28.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [Intel-gfx] [PATCH 2/3] dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
@ 2020-09-14 11:25   ` Thomas Zimmermann
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-14 11:25 UTC (permalink / raw)
  To: sumit.semwal, christian.koenig, daniel, airlied, sam,
	mark.cave-ayland, kraxel, davem, maarten.lankhorst, mripard,
	l.stach, linux+etnaviv, christian.gmeiner, jani.nikula,
	joonas.lahtinen, rodrigo.vivi, thierry.reding, jonathanh, pawel,
	m.szyprowski, kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom
  Cc: intel-gfx, etnaviv, dri-devel, linaro-mm-sig, sparclinux,
	Thomas Zimmermann, linux-tegra, linux-media

This patch updates dma_buf_vmap() and dma-buf's vmap callback to use
struct dma_buf_map.

The interfaces used to return a buffer address. This address now gets
stored in an instance of the structure that is given as an additional
argument. The functions return an errno code on errors.

Users of the functions are updated accordingly. This is only an interface
change. It is currently expected that dma-buf memory can be accessed with
system memory load/store operations.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/dma-buf/dma-buf.c                     | 26 ++++++++++---------
 drivers/gpu/drm/drm_gem_cma_helper.c          | 13 +++++-----
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 14 ++++++----
 drivers/gpu/drm/drm_prime.c                   |  8 +++---
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  8 +++++-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    | 11 ++++++--
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  | 12 ++++++---
 drivers/gpu/drm/tegra/gem.c                   | 18 ++++++++-----
 .../common/videobuf2/videobuf2-dma-contig.c   | 14 +++++++---
 .../media/common/videobuf2/videobuf2-dma-sg.c | 16 ++++++++----
 .../common/videobuf2/videobuf2-vmalloc.c      | 15 ++++++++---
 include/drm/drm_prime.h                       |  3 ++-
 include/linux/dma-buf-map.h                   | 13 ++++++++++
 include/linux/dma-buf.h                       |  6 ++---
 14 files changed, 122 insertions(+), 55 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 5e849ca241a0..c99e3577aa2f 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1186,46 +1186,48 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
  * dma_buf_vmap - Create virtual mapping for the buffer object into kernel
  * address space. Same restrictions as for vmap and friends apply.
  * @dmabuf:	[in]	buffer to vmap
+ * @map:	[out]	returns the vmap pointer
  *
  * This call may fail due to lack of virtual mapping address space.
  * These calls are optional in drivers. The intended use for them
  * is for mapping objects linear in kernel space for high use objects.
  * Please attempt to use kmap/kunmap before thinking about these interfaces.
  *
- * Returns NULL on error.
+ * Returns 0 on success, or a negative errno code otherwise.
  */
-void *dma_buf_vmap(struct dma_buf *dmabuf)
+int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
 {
-	void *ptr;
+	struct dma_buf_map ptr;
+	int ret = 0;
 
 	if (WARN_ON(!dmabuf))
-		return NULL;
+		return -EINVAL;
 
 	if (!dmabuf->ops->vmap)
-		return NULL;
+		return -EINVAL;
 
 	mutex_lock(&dmabuf->lock);
 	if (dmabuf->vmapping_counter) {
 		dmabuf->vmapping_counter++;
 		BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
-		ptr = dmabuf->vmap_ptr.vaddr;
+		*map = dmabuf->vmap_ptr;
 		goto out_unlock;
 	}
 
 	BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));
 
-	ptr = dmabuf->ops->vmap(dmabuf);
-	if (WARN_ON_ONCE(IS_ERR(ptr)))
-		ptr = NULL;
-	if (!ptr)
+	ret = dmabuf->ops->vmap(dmabuf, &ptr);
+	if (WARN_ON_ONCE(ret))
 		goto out_unlock;
 
-	dmabuf->vmap_ptr.vaddr = ptr;
+	dmabuf->vmap_ptr = ptr;
 	dmabuf->vmapping_counter = 1;
 
+	*map = dmabuf->vmap_ptr;
+
 out_unlock:
 	mutex_unlock(&dmabuf->lock);
-	return ptr;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dma_buf_vmap);
 
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 822edeadbab3..062315c25c12 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -634,22 +634,23 @@ drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *dev,
 {
 	struct drm_gem_cma_object *cma_obj;
 	struct drm_gem_object *obj;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
-	vaddr = dma_buf_vmap(attach->dmabuf);
-	if (!vaddr) {
+	ret = dma_buf_vmap(attach->dmabuf, &map);
+	if (ret) {
 		DRM_ERROR("Failed to vmap PRIME buffer\n");
-		return ERR_PTR(-ENOMEM);
+		return ERR_PTR(ret);
 	}
 
 	obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
 	if (IS_ERR(obj)) {
-		dma_buf_vunmap(attach->dmabuf, vaddr);
+		dma_buf_vunmap(attach->dmabuf, map.vaddr);
 		return obj;
 	}
 
 	cma_obj = to_drm_gem_cma_obj(obj);
-	cma_obj->vaddr = vaddr;
+	cma_obj->vaddr = map.vaddr;
 
 	return obj;
 }
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 0a952f27c184..ad10a57cfece 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -261,13 +261,16 @@ EXPORT_SYMBOL(drm_gem_shmem_unpin);
 static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	int ret;
+	struct dma_buf_map map;
+	int ret = 0;
 
 	if (shmem->vmap_use_count++ > 0)
 		return shmem->vaddr;
 
 	if (obj->import_attach) {
-		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
+		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
+		if (!ret)
+			shmem->vaddr = map.vaddr;
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
@@ -279,11 +282,12 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 			prot = pgprot_writecombine(prot);
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
 				    VM_MAP, prot);
+		if (!shmem->vaddr)
+			ret = -ENOMEM;
 	}
 
-	if (!shmem->vaddr) {
-		DRM_DEBUG_KMS("Failed to vmap pages\n");
-		ret = -ENOMEM;
+	if (ret) {
+		DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret);
 		goto err_put_pages;
 	}
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 8a6a3c99b7d8..1b7d86c7842d 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -668,16 +668,18 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
  *
  * Returns the kernel virtual address or NULL on failure.
  */
-void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
+int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 	void *vaddr;
 
 	vaddr = drm_gem_vmap(obj);
 	if (IS_ERR(vaddr))
-		vaddr = NULL;
+		return PTR_ERR(vaddr);
 
-	return vaddr;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 4aa3426a9ba4..80a9fc143bbb 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -85,9 +85,15 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
 
 static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
 {
+	struct dma_buf_map map;
+	int ret;
+
 	lockdep_assert_held(&etnaviv_obj->lock);
 
-	return dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf);
+	ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map);
+	if (ret)
+		return NULL;
+	return map.vaddr;
 }
 
 static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 27fddc22a7c6..77b363d3000b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -82,11 +82,18 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
 	i915_gem_object_unpin_pages(obj);
 }
 
-static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
+static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
+	void *vaddr;
 
-	return i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index 2a52b92586b9..f79ebc5329b7 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -82,6 +82,7 @@ static int igt_dmabuf_import(void *arg)
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
 	void *obj_map, *dma_map;
+	struct dma_buf_map map;
 	u32 pattern[] = { 0, 0xaa, 0xcc, 0x55, 0xff };
 	int err, i;
 
@@ -110,7 +111,8 @@ static int igt_dmabuf_import(void *arg)
 		goto out_obj;
 	}
 
-	dma_map = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	dma_map = err ? NULL : map.vaddr;
 	if (!dma_map) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
@@ -163,6 +165,7 @@ static int igt_dmabuf_import_ownership(void *arg)
 	struct drm_i915_private *i915 = arg;
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
+	struct dma_buf_map map;
 	void *ptr;
 	int err;
 
@@ -170,7 +173,8 @@ static int igt_dmabuf_import_ownership(void *arg)
 	if (IS_ERR(dmabuf))
 		return PTR_ERR(dmabuf);
 
-	ptr = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	ptr = err ? NULL : map.vaddr;
 	if (!ptr) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
@@ -212,6 +216,7 @@ static int igt_dmabuf_export_vmap(void *arg)
 	struct drm_i915_private *i915 = arg;
 	struct drm_i915_gem_object *obj;
 	struct dma_buf *dmabuf;
+	struct dma_buf_map map;
 	void *ptr;
 	int err;
 
@@ -228,7 +233,8 @@ static int igt_dmabuf_export_vmap(void *arg)
 	}
 	i915_gem_object_put(obj);
 
-	ptr = dma_buf_vmap(dmabuf);
+	err = dma_buf_vmap(dmabuf, &map);
+	ptr = err ? NULL : map.vaddr;
 	if (!ptr) {
 		pr_err("dma_buf_vmap failed\n");
 		err = -ENOMEM;
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 47e2935b8c68..81663036c701 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -132,14 +132,18 @@ static void tegra_bo_unpin(struct device *dev, struct sg_table *sgt)
 static void *tegra_bo_mmap(struct host1x_bo *bo)
 {
 	struct tegra_bo *obj = host1x_to_tegra_bo(bo);
+	struct dma_buf_map map;
+	int ret;
 
-	if (obj->vaddr)
+	if (obj->vaddr) {
 		return obj->vaddr;
-	else if (obj->gem.import_attach)
-		return dma_buf_vmap(obj->gem.import_attach->dmabuf);
-	else
+	} else if (obj->gem.import_attach) {
+		ret = dma_buf_vmap(obj->gem.import_attach->dmabuf, &map);
+		return ret ? NULL : map.vaddr;
+	} else {
 		return vmap(obj->pages, obj->num_pages, VM_MAP,
 			    pgprot_writecombine(PAGE_KERNEL));
+	}
 }
 
 static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
@@ -641,12 +645,14 @@ static int tegra_gem_prime_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
 	return __tegra_gem_mmap(gem, vma);
 }
 
-static void *tegra_gem_prime_vmap(struct dma_buf *buf)
+static int tegra_gem_prime_vmap(struct dma_buf *buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *gem = buf->priv;
 	struct tegra_bo *bo = to_tegra_bo(gem);
 
-	return bo->vaddr;
+	dma_buf_map_set_vaddr(map, bo->vaddr);
+
+	return 0;
 }
 
 static void tegra_gem_prime_vunmap(struct dma_buf *buf, void *vaddr)
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index ec3446cc45b8..11428287bdf3 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -81,9 +81,13 @@ static void *vb2_dc_cookie(void *buf_priv)
 static void *vb2_dc_vaddr(void *buf_priv)
 {
 	struct vb2_dc_buf *buf = buf_priv;
+	struct dma_buf_map map;
+	int ret;
 
-	if (!buf->vaddr && buf->db_attach)
-		buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
+	if (!buf->vaddr && buf->db_attach) {
+		ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
+		buf->vaddr = ret ? NULL : map.vaddr;
+	}
 
 	return buf->vaddr;
 }
@@ -365,11 +369,13 @@ vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
 	return 0;
 }
 
-static void *vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_dc_buf *buf = dbuf->priv;
 
-	return buf->vaddr;
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index 0a40e00f0d7e..c51170e9c1b9 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -300,14 +300,18 @@ static void vb2_dma_sg_put_userptr(void *buf_priv)
 static void *vb2_dma_sg_vaddr(void *buf_priv)
 {
 	struct vb2_dma_sg_buf *buf = buf_priv;
+	struct dma_buf_map map;
+	int ret;
 
 	BUG_ON(!buf);
 
 	if (!buf->vaddr) {
-		if (buf->db_attach)
-			buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
-		else
+		if (buf->db_attach) {
+			ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
+			buf->vaddr = ret ? NULL : map.vaddr;
+		} else {
 			buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1);
+		}
 	}
 
 	/* add offset in case userptr is not page-aligned */
@@ -489,11 +493,13 @@ vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
 	return 0;
 }
 
-static void *vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_dma_sg_buf *buf = dbuf->priv;
 
-	return vb2_dma_sg_vaddr(buf);
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf,
diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index c66fda4a65e4..7b68e2379c65 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -318,11 +318,13 @@ static void vb2_vmalloc_dmabuf_ops_release(struct dma_buf *dbuf)
 	vb2_vmalloc_put(dbuf->priv);
 }
 
-static void *vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf)
+static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
 {
 	struct vb2_vmalloc_buf *buf = dbuf->priv;
 
-	return buf->vaddr;
+	dma_buf_map_set_vaddr(map, buf->vaddr);
+
+	return 0;
 }
 
 static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf,
@@ -374,10 +376,15 @@ static struct dma_buf *vb2_vmalloc_get_dmabuf(void *buf_priv, unsigned long flag
 static int vb2_vmalloc_map_dmabuf(void *mem_priv)
 {
 	struct vb2_vmalloc_buf *buf = mem_priv;
+	struct dma_buf_map map;
+	int ret;
 
-	buf->vaddr = dma_buf_vmap(buf->dbuf);
+	ret = dma_buf_vmap(buf->dbuf, &map);
+	if (ret)
+		return -EFAULT;
+	buf->vaddr = map.vaddr;
 
-	return buf->vaddr ? 0 : -EFAULT;
+	return 0;
 }
 
 static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index bf141e74a1c2..3ee22639ff77 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -54,6 +54,7 @@ struct device;
 struct dma_buf_export_info;
 struct dma_buf;
 struct dma_buf_attachment;
+struct dma_buf_map;
 
 enum dma_data_direction;
 
@@ -82,7 +83,7 @@ struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
 void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
 			   struct sg_table *sgt,
 			   enum dma_data_direction dir);
-void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf);
+int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map);
 void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
 
 int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index d4b1bb3cc4b0..6b4f6e0e8b5d 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -28,6 +28,19 @@ struct dma_buf_map {
 	bool is_iomem;
 };
 
+/**
+ * dma_buf_map_set_vaddr - Sets a dma-buf mapping structure to an address in system memory
+ * @map:	The dma-buf mapping structure
+ * @vaddr:	A system-memory address
+ *
+ * Sets the address and clears the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
+{
+	map->vaddr = vaddr;
+	map->is_iomem = false;
+}
+
 /* API transition helper */
 static inline bool dma_buf_map_is_vaddr(const struct dma_buf_map *map, const void *vaddr)
 {
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index fcc2ddfb6d18..7237997cfa38 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -266,7 +266,7 @@ struct dma_buf_ops {
 	 */
 	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
 
-	void *(*vmap)(struct dma_buf *);
+	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
 	void (*vunmap)(struct dma_buf *, void *vaddr);
 };
 
@@ -503,6 +503,6 @@ int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
 
 int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
 		 unsigned long);
-void *dma_buf_vmap(struct dma_buf *);
-void dma_buf_vunmap(struct dma_buf *, void *vaddr);
+int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
+void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr);
 #endif /* __DMA_BUF_H__ */
-- 
2.28.0
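
As the hunks above show, dma_buf_vmap() now returns an error code and
stores the mapping in a caller-provided struct dma_buf_map instead of
returning a pointer. A minimal sketch of the resulting importer-side
pattern at this stage of the series (generic names, not taken from any
driver; vunmap still takes the plain address until the next patch):

	struct dma_buf_map map;
	int ret;

	ret = dma_buf_vmap(dmabuf, &map);
	if (ret)
		return ret;
	/* the buffer can now be accessed through map.vaddr */
	dma_buf_vunmap(dmabuf, map.vaddr);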


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 3/3] dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
@ 2020-09-14 11:25   ` Thomas Zimmermann
  -1 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-14 11:25 UTC (permalink / raw)
  To: sumit.semwal, christian.koenig, daniel, airlied, sam,
	mark.cave-ayland, kraxel, davem, maarten.lankhorst, mripard,
	l.stach, linux+etnaviv, christian.gmeiner, jani.nikula,
	joonas.lahtinen, rodrigo.vivi, thierry.reding, jonathanh, pawel,
	m.szyprowski, kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom
  Cc: linux-media, dri-devel, linaro-mm-sig, etnaviv, intel-gfx,
	linux-tegra, sparclinux, Thomas Zimmermann

This patch updates dma_buf_vunmap() and dma-buf's vunmap callback to
use struct dma_buf_map. These interfaces previously received a plain
buffer address; the address is now passed within an instance of the
structure.

All users of the functions are updated accordingly. This is only an
interface change; it is still expected that dma-buf memory can be
accessed with system-memory load/store operations.
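
For illustration, a minimal sketch of the resulting vmap/vunmap pairing
in an importer after this change (generic names, not taken from any
driver in this series):

	struct dma_buf_map map;

	if (dma_buf_vmap(dmabuf, &map))
		return;
	/* ... access the buffer through map.vaddr ... */
	dma_buf_vunmap(dmabuf, &map);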

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/dma-buf/dma-buf.c                     |  8 ++---
 drivers/gpu/drm/drm_gem_cma_helper.c          |  5 +--
 drivers/gpu/drm/drm_gem_shmem_helper.c        |  3 +-
 drivers/gpu/drm/drm_prime.c                   |  6 ++--
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  5 +--
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  2 +-
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  6 ++--
 drivers/gpu/drm/tegra/gem.c                   |  5 +--
 .../common/videobuf2/videobuf2-dma-contig.c   |  3 +-
 .../media/common/videobuf2/videobuf2-dma-sg.c |  3 +-
 .../common/videobuf2/videobuf2-vmalloc.c      |  6 ++--
 include/drm/drm_prime.h                       |  2 +-
 include/linux/dma-buf-map.h                   | 32 +++++++++++++++++--
 include/linux/dma-buf.h                       |  4 +--
 14 files changed, 62 insertions(+), 28 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index c99e3577aa2f..90022003b065 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1234,21 +1234,21 @@ EXPORT_SYMBOL_GPL(dma_buf_vmap);
 /**
  * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap.
  * @dmabuf:	[in]	buffer to vunmap
- * @vaddr:	[in]	vmap to vunmap
+ * @map:	[in]	vmap pointer to vunmap
  */
-void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
+void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
 {
 	if (WARN_ON(!dmabuf))
 		return;
 
 	BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
 	BUG_ON(dmabuf->vmapping_counter == 0);
-	BUG_ON(!dma_buf_map_is_vaddr(&dmabuf->vmap_ptr, vaddr));
+	BUG_ON(!dma_buf_map_is_equal(&dmabuf->vmap_ptr, map));
 
 	mutex_lock(&dmabuf->lock);
 	if (--dmabuf->vmapping_counter == 0) {
 		if (dmabuf->ops->vunmap)
-			dmabuf->ops->vunmap(dmabuf, vaddr);
+			dmabuf->ops->vunmap(dmabuf, map);
 		dma_buf_map_clear(&dmabuf->vmap_ptr);
 	}
 	mutex_unlock(&dmabuf->lock);
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 062315c25c12..55f500c85fe6 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -176,12 +176,11 @@ drm_gem_cma_create_with_handle(struct drm_file *file_priv,
 void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
 {
-	struct drm_gem_cma_object *cma_obj;
-
-	cma_obj = to_drm_gem_cma_obj(gem_obj);
+	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(gem_obj);
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(cma_obj->vaddr);
 
 	if (gem_obj->import_attach) {
 		if (cma_obj->vaddr)
-			dma_buf_vunmap(gem_obj->import_attach->dmabuf, cma_obj->vaddr);
+			dma_buf_vunmap(gem_obj->import_attach->dmabuf, &map);
 		drm_prime_gem_destroy(gem_obj, cma_obj->sgt);
 	} else if (cma_obj->vaddr) {
 		dma_free_wc(gem_obj->dev->dev, cma_obj->base.size,
@@ -645,7 +646,7 @@ drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *dev,
 
 	obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
 	if (IS_ERR(obj)) {
-		dma_buf_vunmap(attach->dmabuf, map.vaddr);
+		dma_buf_vunmap(attach->dmabuf, &map);
 		return obj;
 	}
 
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index ad10a57cfece..3c2e8cde69a8 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -337,6 +337,7 @@ EXPORT_SYMBOL(drm_gem_shmem_vmap);
 static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
 
 	if (WARN_ON_ONCE(!shmem->vmap_use_count))
 		return;
@@ -345,7 +346,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 		return;
 
 	if (obj->import_attach)
-		dma_buf_vunmap(obj->import_attach->dmabuf, shmem->vaddr);
+		dma_buf_vunmap(obj->import_attach->dmabuf, &map);
 	else
 		vunmap(shmem->vaddr);
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 1b7d86c7842d..8a02b739924f 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -686,16 +686,16 @@ EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 /**
  * drm_gem_dmabuf_vunmap - dma_buf vunmap implementation for GEM
  * @dma_buf: buffer to be unmapped
- * @vaddr: the virtual address of the buffer
+ * @map: the dma-buf mapping structure to unmap
  *
  * Releases a kernel virtual mapping. This can be used as the
  * &dma_buf_ops.vunmap callback. Calls into &drm_gem_object_funcs.vunmap for device specific handling.
  */
-void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
+void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	drm_gem_vunmap(obj, vaddr);
+	drm_gem_vunmap(obj, map->vaddr);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 80a9fc143bbb..135fbff6fecf 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -70,9 +70,10 @@ void etnaviv_gem_prime_unpin(struct drm_gem_object *obj)
 
 static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
 {
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(etnaviv_obj->vaddr);
+
 	if (etnaviv_obj->vaddr)
-		dma_buf_vunmap(etnaviv_obj->base.import_attach->dmabuf,
-			       etnaviv_obj->vaddr);
+		dma_buf_vunmap(etnaviv_obj->base.import_attach->dmabuf, &map);
 
 	/* Don't drop the pages for imported dmabuf, as they are not
 	 * ours, just free the array we allocated:
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 77b363d3000b..fec0e1e3dc3e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -96,7 +96,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map
 	return 0;
 }
 
-static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
+static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index f79ebc5329b7..0b4d19729e1f 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -152,7 +152,7 @@ static int igt_dmabuf_import(void *arg)
 
 	err = 0;
 out_dma_map:
-	dma_buf_vunmap(dmabuf, dma_map);
+	dma_buf_vunmap(dmabuf, &map);
 out_obj:
 	i915_gem_object_put(obj);
 out_dmabuf:
@@ -182,7 +182,7 @@ static int igt_dmabuf_import_ownership(void *arg)
 	}
 
 	memset(ptr, 0xc5, PAGE_SIZE);
-	dma_buf_vunmap(dmabuf, ptr);
+	dma_buf_vunmap(dmabuf, &map);
 
 	obj = to_intel_bo(i915_gem_prime_import(&i915->drm, dmabuf));
 	if (IS_ERR(obj)) {
@@ -250,7 +250,7 @@ static int igt_dmabuf_export_vmap(void *arg)
 	memset(ptr, 0xc5, dmabuf->size);
 
 	err = 0;
-	dma_buf_vunmap(dmabuf, ptr);
+	dma_buf_vunmap(dmabuf, &map);
 out:
 	dma_buf_put(dmabuf);
 	return err;
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 81663036c701..ff2e33144a25 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -149,11 +149,12 @@ static void *tegra_bo_mmap(struct host1x_bo *bo)
 static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
 {
 	struct tegra_bo *obj = host1x_to_tegra_bo(bo);
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(addr);
 
 	if (obj->vaddr)
 		return;
 	else if (obj->gem.import_attach)
-		dma_buf_vunmap(obj->gem.import_attach->dmabuf, addr);
+		dma_buf_vunmap(obj->gem.import_attach->dmabuf, &map);
 	else
 		vunmap(addr);
 }
@@ -655,7 +656,7 @@ static int tegra_gem_prime_vmap(struct dma_buf *buf, struct dma_buf_map *map)
 	return 0;
 }
 
-static void tegra_gem_prime_vunmap(struct dma_buf *buf, void *vaddr)
+static void tegra_gem_prime_vunmap(struct dma_buf *buf, struct dma_buf_map *map)
 {
 }
 
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index 11428287bdf3..a1eb8279b113 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -648,6 +648,7 @@ static void vb2_dc_unmap_dmabuf(void *mem_priv)
 {
 	struct vb2_dc_buf *buf = mem_priv;
 	struct sg_table *sgt = buf->dma_sgt;
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buf->vaddr);
 
 	if (WARN_ON(!buf->db_attach)) {
 		pr_err("trying to unpin a not attached buffer\n");
@@ -660,7 +661,7 @@ static void vb2_dc_unmap_dmabuf(void *mem_priv)
 	}
 
 	if (buf->vaddr) {
-		dma_buf_vunmap(buf->db_attach->dmabuf, buf->vaddr);
+		dma_buf_vunmap(buf->db_attach->dmabuf, &map);
 		buf->vaddr = NULL;
 	}
 	dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index c51170e9c1b9..d5157e903e27 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -580,6 +580,7 @@ static void vb2_dma_sg_unmap_dmabuf(void *mem_priv)
 {
 	struct vb2_dma_sg_buf *buf = mem_priv;
 	struct sg_table *sgt = buf->dma_sgt;
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buf->vaddr);
 
 	if (WARN_ON(!buf->db_attach)) {
 		pr_err("trying to unpin a not attached buffer\n");
@@ -592,7 +593,7 @@ static void vb2_dma_sg_unmap_dmabuf(void *mem_priv)
 	}
 
 	if (buf->vaddr) {
-		dma_buf_vunmap(buf->db_attach->dmabuf, buf->vaddr);
+		dma_buf_vunmap(buf->db_attach->dmabuf, &map);
 		buf->vaddr = NULL;
 	}
 	dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index 7b68e2379c65..11ba0eb1315b 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -390,17 +390,19 @@ static int vb2_vmalloc_map_dmabuf(void *mem_priv)
 static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
 {
 	struct vb2_vmalloc_buf *buf = mem_priv;
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buf->vaddr);
 
-	dma_buf_vunmap(buf->dbuf, buf->vaddr);
+	dma_buf_vunmap(buf->dbuf, &map);
 	buf->vaddr = NULL;
 }
 
 static void vb2_vmalloc_detach_dmabuf(void *mem_priv)
 {
 	struct vb2_vmalloc_buf *buf = mem_priv;
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buf->vaddr);
 
 	if (buf->vaddr)
-		dma_buf_vunmap(buf->dbuf, buf->vaddr);
+		dma_buf_vunmap(buf->dbuf, &map);
 
 	kfree(buf);
 }
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index 3ee22639ff77..093f760cc131 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -84,7 +84,7 @@ void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
 			   struct sg_table *sgt,
 			   enum dma_data_direction dir);
 int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map);
-void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
+void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map);
 
 int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma);
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index 6b4f6e0e8b5d..303e1363b221 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -28,6 +28,16 @@ struct dma_buf_map {
 	bool is_iomem;
 };
 
+/**
+ * DMA_BUF_MAP_INIT_VADDR - Initializes struct dma_buf_map to an address in system memory
+ * @vaddr:	A system-memory address
+ */
+#define DMA_BUF_MAP_INIT_VADDR(vaddr_) \
+	{ \
+		.vaddr = (vaddr_), \
+		.is_iomem = false, \
+	}
+
 /**
  * dma_buf_map_set_vaddr - Sets a dma-buf mapping structure to an address in system memory
  * @map:	The dma-buf mapping structure
@@ -41,10 +51,26 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
 	map->is_iomem = false;
 }
 
-/* API transition helper */
-static inline bool dma_buf_map_is_vaddr(const struct dma_buf_map *map, const void *vaddr)
+/**
+ * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
+ * @lhs:	The dma-buf mapping structure
+ * @rhs:	A dma-buf mapping structure to compare with
+ *
+ * Two dma-buf mapping structures are equal if they both refer to the same type of memory
+ * and to the same address within that memory.
+ *
+ * Returns:
+ * True if both structures are equal, false otherwise.
+ */
+static inline bool dma_buf_map_is_equal(const struct dma_buf_map *lhs,
+					const struct dma_buf_map *rhs)
 {
-	return !map->is_iomem && (map->vaddr == vaddr);
+	if (lhs->is_iomem != rhs->is_iomem)
+		return false;
+	else if (lhs->is_iomem)
+		return lhs->vaddr_iomem == rhs->vaddr_iomem;
+	else
+		return lhs->vaddr == rhs->vaddr;
 }
 
 /**
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 7237997cfa38..cf77cc15f4ba 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -267,7 +267,7 @@ struct dma_buf_ops {
 	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
 
 	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
-	void (*vunmap)(struct dma_buf *, void *vaddr);
+	void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
 };
 
 /**
@@ -504,5 +504,5 @@ int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
 int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
 		 unsigned long);
 int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
-void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr);
+void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
 #endif /* __DMA_BUF_H__ */
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread
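
For reference, the dma-buf-map helpers added above compose as in this
sketch (the local variables and the kmalloc'ed buffer are hypothetical,
chosen only for illustration):

	void *vaddr = kmalloc(PAGE_SIZE, GFP_KERNEL);
	struct dma_buf_map a = DMA_BUF_MAP_INIT_VADDR(vaddr);
	struct dma_buf_map b;

	/* set_vaddr() stores the address and clears is_iomem */
	dma_buf_map_set_vaddr(&b, vaddr);

	/* equal: same type of memory, same address */
	WARN_ON(!dma_buf_map_is_equal(&a, &b));

	kfree(vaddr);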

+/**
+ * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
+ * @lhs:	The dma-buf mapping structure
+ * @rhs:	A dma-buf mapping structure to compare with
+ *
+ * Two dma-buf mapping structures are equal if they both refer to the same type of memory
+ * and to the same address within that memory.
+ *
+ * Returns:
+ * True if both structures are equal, or false otherwise.
+ */
+static inline bool dma_buf_map_is_equal(const struct dma_buf_map *lhs,
+					const struct dma_buf_map *rhs)
 {
-	return !map->is_iomem && (map->vaddr == vaddr);
+	if (lhs->is_iomem != rhs->is_iomem)
+		return false;
+	else if (lhs->is_iomem)
+		return lhs->vaddr_iomem == rhs->vaddr_iomem;
+	else
+		return lhs->vaddr == rhs->vaddr;
 }
 
 /**
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 7237997cfa38..cf77cc15f4ba 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -267,7 +267,7 @@ struct dma_buf_ops {
 	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
 
 	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
-	void (*vunmap)(struct dma_buf *, void *vaddr);
+	void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
 };
 
 /**
@@ -504,5 +504,5 @@ int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
 int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
 		 unsigned long);
 int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
-void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr);
+void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
 #endif /* __DMA_BUF_H__ */
-- 
2.28.0
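
For illustration, a hypothetical user of the new helpers could look like the
following. The function and both addresses are made up; only
DMA_BUF_MAP_INIT_VADDR(), dma_buf_map_set_vaddr() and dma_buf_map_is_equal()
come from this series:

#include <linux/dma-buf-map.h>

/* Illustration only: decide whether a buffer moved to a new mapping. */
static bool remap_needed(void *old_vaddr, void *new_vaddr)
{
        struct dma_buf_map old = DMA_BUF_MAP_INIT_VADDR(old_vaddr);
        struct dma_buf_map new;

        dma_buf_map_set_vaddr(&new, new_vaddr);

        /* Equal only if both refer to system memory at the same address. */
        return !dma_buf_map_is_equal(&old, &new);
}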


* Re: [Intel-gfx] [PATCH 3/3] dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
  2020-09-14 11:25   ` Thomas Zimmermann
                     ` (2 preceding siblings ...)
  (?)
@ 2020-09-14 17:22   ` kernel test robot
  -1 siblings, 0 replies; 57+ messages in thread
From: kernel test robot @ 2020-09-14 17:22 UTC (permalink / raw)
  To: kbuild-all


Hi Thomas,

I love your patch! Perhaps something to improve:

[auto build test WARNING on next-20200914]
[also build test WARNING on v5.9-rc5]
[cannot apply to linuxtv-media/master drm-intel/for-linux-next tegra/for-next linus/master v5.9-rc5 v5.9-rc4 v5.9-rc3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thomas-Zimmermann/dma-buf-Flag-vmap-ed-memory-as-system-or-I-O-memory/20200914-192712
base:    f965d3ec86fa89285db0fbb983da76ba9c398efa
config: arm-randconfig-r002-20200914 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project b2c32c90bab09a6e2c1f370429db26017a182143)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm cross compiling tool for clang build
        # apt-get install binutils-arm-linux-gnueabi
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/drm_gem_cma_helper.c:179:50: warning: variable 'cma_obj' is uninitialized when used here [-Wuninitialized]
           struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(cma_obj->vaddr);
                                                           ^~~~~~~
   include/linux/dma-buf-map.h:37:13: note: expanded from macro 'DMA_BUF_MAP_INIT_VADDR'
                   .vaddr = (vaddr_), \
                             ^~~~~~
   drivers/gpu/drm/drm_gem_cma_helper.c:178:36: note: initialize the variable 'cma_obj' to silence this warning
           struct drm_gem_cma_object *cma_obj;
                                             ^
                                              = NULL
   1 warning generated.

# https://github.com/0day-ci/linux/commit/7fd403952126005980734501c5d0de5e13b3673b
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Thomas-Zimmermann/dma-buf-Flag-vmap-ed-memory-as-system-or-I-O-memory/20200914-192712
git checkout 7fd403952126005980734501c5d0de5e13b3673b
vim +/cma_obj +179 drivers/gpu/drm/drm_gem_cma_helper.c

   165	
   166	/**
   167	 * drm_gem_cma_free_object - free resources associated with a CMA GEM object
   168	 * @gem_obj: GEM object to free
   169	 *
   170	 * This function frees the backing memory of the CMA GEM object, cleans up the
   171	 * GEM object state and frees the memory used to store the object itself.
   172	 * If the buffer is imported and the virtual address is set, it is released.
   173	 * Drivers using the CMA helpers should set this as their
   174	 * &drm_driver.gem_free_object_unlocked callback.
   175	 */
   176	void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
   177	{
   178		struct drm_gem_cma_object *cma_obj;
 > 179		struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(cma_obj->vaddr);
   180	
   181		cma_obj = to_drm_gem_cma_obj(gem_obj);
   182	
   183		if (gem_obj->import_attach) {
   184			if (cma_obj->vaddr)
   185				dma_buf_vunmap(gem_obj->import_attach->dmabuf, &map);
   186			drm_prime_gem_destroy(gem_obj, cma_obj->sgt);
   187		} else if (cma_obj->vaddr) {
   188			dma_free_wc(gem_obj->dev->dev, cma_obj->base.size,
   189				    cma_obj->vaddr, cma_obj->paddr);
   190		}
   191	
   192		drm_gem_object_release(gem_obj);
   193	
   194		kfree(cma_obj);
   195	}
   196	EXPORT_SYMBOL_GPL(drm_gem_cma_free_object);
   197	
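
The warning is valid: the DMA_BUF_MAP_INIT_VADDR() initializer dereferences
cma_obj before the variable has been assigned. One possible fix (a sketch,
keeping the on-stack map) is to assign cma_obj first:

void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
{
        /* Assign cma_obj before the initializer dereferences it. */
        struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(gem_obj);
        struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(cma_obj->vaddr);

        /* ... rest of the function unchanged ... */
}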

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 27134 bytes --]


* [Intel-gfx] ✗ Fi.CI.BUILD: failure for dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-14 11:25 ` Thomas Zimmermann
                   ` (5 preceding siblings ...)
  (?)
@ 2020-09-14 17:41 ` Patchwork
  -1 siblings, 0 replies; 57+ messages in thread
From: Patchwork @ 2020-09-14 17:41 UTC (permalink / raw)
  To: Thomas Zimmermann; +Cc: intel-gfx

== Series Details ==

Series: dma-buf: Flag vmap'ed memory as system or I/O memory
URL   : https://patchwork.freedesktop.org/series/81647/
State : failure

== Summary ==

CALL    scripts/checksyscalls.sh
  CALL    scripts/atomic/check-atomics.sh
  DESCEND  objtool
  CHK     include/generated/compile.h
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.o
In file included from drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c:291:0:
drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c:89:10: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
  .vmap = mock_dmabuf_vmap,
          ^~~~~~~~~~~~~~~~
drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c:89:10: note: (near initialization for ‘mock_dmabuf_ops.vmap’)
drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c:90:12: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
  .vunmap = mock_dmabuf_vunmap,
            ^~~~~~~~~~~~~~~~~~
drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c:90:12: note: (near initialization for ‘mock_dmabuf_ops.vunmap’)
cc1: all warnings being treated as errors
scripts/Makefile.build:283: recipe for target 'drivers/gpu/drm/i915/gem/i915_gem_dmabuf.o' failed
make[4]: *** [drivers/gpu/drm/i915/gem/i915_gem_dmabuf.o] Error 1
scripts/Makefile.build:500: recipe for target 'drivers/gpu/drm/i915' failed
make[3]: *** [drivers/gpu/drm/i915] Error 2
scripts/Makefile.build:500: recipe for target 'drivers/gpu/drm' failed
make[2]: *** [drivers/gpu/drm] Error 2
scripts/Makefile.build:500: recipe for target 'drivers/gpu' failed
make[1]: *** [drivers/gpu] Error 2
Makefile:1784: recipe for target 'drivers' failed
make: *** [drivers] Error 2
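
The failure is expected: the series does not convert the i915 mock_dmabuf
selftest, so its callbacks keep the old prototypes. A conversion sketch along
the lines of the drivers updated in patches 2/3; mock_vmap_pages() and
mock_vunmap_pages() are hypothetical stand-ins for the selftest's internals:

static int mock_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
{
        void *vaddr = mock_vmap_pages(dma_buf); /* hypothetical helper */

        if (!vaddr)
                return -ENOMEM;
        dma_buf_map_set_vaddr(map, vaddr);
        return 0;
}

static void mock_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
{
        mock_vunmap_pages(dma_buf, map->vaddr); /* hypothetical helper */
}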



* Re: [Intel-gfx] [PATCH 2/3] dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
  2020-09-14 11:25   ` Thomas Zimmermann
                     ` (2 preceding siblings ...)
  (?)
@ 2020-09-14 18:33   ` kernel test robot
  -1 siblings, 0 replies; 57+ messages in thread
From: kernel test robot @ 2020-09-14 18:33 UTC (permalink / raw)
  To: kbuild-all


Hi Thomas,

I love your patch! Yet something to improve:

[auto build test ERROR on next-20200914]
[also build test ERROR on v5.9-rc5]
[cannot apply to linuxtv-media/master drm-intel/for-linux-next tegra/for-next linus/master v5.9-rc5 v5.9-rc4 v5.9-rc3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thomas-Zimmermann/dma-buf-Flag-vmap-ed-memory-as-system-or-I-O-memory/20200914-192712
base:    f965d3ec86fa89285db0fbb983da76ba9c398efa
config: x86_64-randconfig-a004-20200913 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project b2c32c90bab09a6e2c1f370429db26017a182143)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> drivers/dma-buf/heaps/heap-helpers.c:269:10: error: incompatible function pointer types initializing 'int (*)(struct dma_buf *, struct dma_buf_map *)' with an expression of type 'void *(struct dma_buf *)' [-Werror,-Wincompatible-function-pointer-types]
           .vmap = dma_heap_dma_buf_vmap,
                   ^~~~~~~~~~~~~~~~~~~~~
   1 error generated.

# https://github.com/0day-ci/linux/commit/b9513704e28f25636f00827154183df60a80d95c
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Thomas-Zimmermann/dma-buf-Flag-vmap-ed-memory-as-system-or-I-O-memory/20200914-192712
git checkout b9513704e28f25636f00827154183df60a80d95c
vim +269 drivers/dma-buf/heaps/heap-helpers.c

5248eb12fea890a John Stultz 2019-12-03  259  
5248eb12fea890a John Stultz 2019-12-03  260  const struct dma_buf_ops heap_helper_ops = {
5248eb12fea890a John Stultz 2019-12-03  261  	.map_dma_buf = dma_heap_map_dma_buf,
5248eb12fea890a John Stultz 2019-12-03  262  	.unmap_dma_buf = dma_heap_unmap_dma_buf,
5248eb12fea890a John Stultz 2019-12-03  263  	.mmap = dma_heap_mmap,
5248eb12fea890a John Stultz 2019-12-03  264  	.release = dma_heap_dma_buf_release,
5248eb12fea890a John Stultz 2019-12-03  265  	.attach = dma_heap_attach,
5248eb12fea890a John Stultz 2019-12-03  266  	.detach = dma_heap_detach,
5248eb12fea890a John Stultz 2019-12-03  267  	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
5248eb12fea890a John Stultz 2019-12-03  268  	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
5248eb12fea890a John Stultz 2019-12-03 @269  	.vmap = dma_heap_dma_buf_vmap,

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 36678 bytes --]


* Re: [Intel-gfx] [PATCH 3/3] dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
  2020-09-14 11:25   ` Thomas Zimmermann
                     ` (3 preceding siblings ...)
  (?)
@ 2020-09-14 19:28   ` kernel test robot
  -1 siblings, 0 replies; 57+ messages in thread
From: kernel test robot @ 2020-09-14 19:28 UTC (permalink / raw)
  To: kbuild-all


Hi Thomas,

I love your patch! Yet something to improve:

[auto build test ERROR on next-20200914]
[also build test ERROR on v5.9-rc5]
[cannot apply to linuxtv-media/master drm-intel/for-linux-next tegra/for-next linus/master v5.9-rc5 v5.9-rc4 v5.9-rc3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thomas-Zimmermann/dma-buf-Flag-vmap-ed-memory-as-system-or-I-O-memory/20200914-192712
base:    f965d3ec86fa89285db0fbb983da76ba9c398efa
config: x86_64-randconfig-a004-20200913 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project b2c32c90bab09a6e2c1f370429db26017a182143)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   drivers/dma-buf/heaps/heap-helpers.c:269:10: error: incompatible function pointer types initializing 'int (*)(struct dma_buf *, struct dma_buf_map *)' with an expression of type 'void *(struct dma_buf *)' [-Werror,-Wincompatible-function-pointer-types]
           .vmap = dma_heap_dma_buf_vmap,
                   ^~~~~~~~~~~~~~~~~~~~~
>> drivers/dma-buf/heaps/heap-helpers.c:270:12: error: incompatible function pointer types initializing 'void (*)(struct dma_buf *, struct dma_buf_map *)' with an expression of type 'void (struct dma_buf *, void *)' [-Werror,-Wincompatible-function-pointer-types]
           .vunmap = dma_heap_dma_buf_vunmap,
                     ^~~~~~~~~~~~~~~~~~~~~~~
   2 errors generated.

# https://github.com/0day-ci/linux/commit/7fd403952126005980734501c5d0de5e13b3673b
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Thomas-Zimmermann/dma-buf-Flag-vmap-ed-memory-as-system-or-I-O-memory/20200914-192712
git checkout 7fd403952126005980734501c5d0de5e13b3673b
vim +270 drivers/dma-buf/heaps/heap-helpers.c

5248eb12fea890a John Stultz 2019-12-03  259  
5248eb12fea890a John Stultz 2019-12-03  260  const struct dma_buf_ops heap_helper_ops = {
5248eb12fea890a John Stultz 2019-12-03  261  	.map_dma_buf = dma_heap_map_dma_buf,
5248eb12fea890a John Stultz 2019-12-03  262  	.unmap_dma_buf = dma_heap_unmap_dma_buf,
5248eb12fea890a John Stultz 2019-12-03  263  	.mmap = dma_heap_mmap,
5248eb12fea890a John Stultz 2019-12-03  264  	.release = dma_heap_dma_buf_release,
5248eb12fea890a John Stultz 2019-12-03  265  	.attach = dma_heap_attach,
5248eb12fea890a John Stultz 2019-12-03  266  	.detach = dma_heap_detach,
5248eb12fea890a John Stultz 2019-12-03  267  	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
5248eb12fea890a John Stultz 2019-12-03  268  	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
5248eb12fea890a John Stultz 2019-12-03 @269  	.vmap = dma_heap_dma_buf_vmap,
5248eb12fea890a John Stultz 2019-12-03 @270  	.vunmap = dma_heap_dma_buf_vunmap,

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 36678 bytes --]


* Re: [Intel-gfx] [PATCH 2/3] dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
  2020-09-14 11:25   ` Thomas Zimmermann
                     ` (3 preceding siblings ...)
  (?)
@ 2020-09-14 23:54   ` kernel test robot
  -1 siblings, 0 replies; 57+ messages in thread
From: kernel test robot @ 2020-09-14 23:54 UTC (permalink / raw)
  To: kbuild-all


Hi Thomas,

I love your patch! Yet something to improve:

[auto build test ERROR on next-20200914]
[also build test ERROR on v5.9-rc5]
[cannot apply to linuxtv-media/master drm-intel/for-linux-next tegra/for-next linus/master v5.9-rc5 v5.9-rc4 v5.9-rc3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thomas-Zimmermann/dma-buf-Flag-vmap-ed-memory-as-system-or-I-O-memory/20200914-192712
base:    f965d3ec86fa89285db0fbb983da76ba9c398efa
config: mips-randconfig-r013-20200914 (attached as .config)
compiler: mipsel-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=mips 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> drivers/dma-buf/heaps/heap-helpers.c:269:10: error: initialization of 'int (*)(struct dma_buf *, struct dma_buf_map *)' from incompatible pointer type 'void * (*)(struct dma_buf *)' [-Werror=incompatible-pointer-types]
     269 |  .vmap = dma_heap_dma_buf_vmap,
         |          ^~~~~~~~~~~~~~~~~~~~~
   drivers/dma-buf/heaps/heap-helpers.c:269:10: note: (near initialization for 'heap_helper_ops.vmap')
   cc1: some warnings being treated as errors

# https://github.com/0day-ci/linux/commit/b9513704e28f25636f00827154183df60a80d95c
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Thomas-Zimmermann/dma-buf-Flag-vmap-ed-memory-as-system-or-I-O-memory/20200914-192712
git checkout b9513704e28f25636f00827154183df60a80d95c
vim +269 drivers/dma-buf/heaps/heap-helpers.c

5248eb12fea890 John Stultz 2019-12-03  259  
5248eb12fea890 John Stultz 2019-12-03  260  const struct dma_buf_ops heap_helper_ops = {
5248eb12fea890 John Stultz 2019-12-03  261  	.map_dma_buf = dma_heap_map_dma_buf,
5248eb12fea890 John Stultz 2019-12-03  262  	.unmap_dma_buf = dma_heap_unmap_dma_buf,
5248eb12fea890 John Stultz 2019-12-03  263  	.mmap = dma_heap_mmap,
5248eb12fea890 John Stultz 2019-12-03  264  	.release = dma_heap_dma_buf_release,
5248eb12fea890 John Stultz 2019-12-03  265  	.attach = dma_heap_attach,
5248eb12fea890 John Stultz 2019-12-03  266  	.detach = dma_heap_detach,
5248eb12fea890 John Stultz 2019-12-03  267  	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
5248eb12fea890 John Stultz 2019-12-03  268  	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
5248eb12fea890 John Stultz 2019-12-03 @269  	.vmap = dma_heap_dma_buf_vmap,

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 27864 bytes --]


* Re: [Intel-gfx] [PATCH 2/3] dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
  2020-09-14 11:25   ` Thomas Zimmermann
                     ` (4 preceding siblings ...)
  (?)
@ 2020-09-15  1:56   ` kernel test robot
  -1 siblings, 0 replies; 57+ messages in thread
From: kernel test robot @ 2020-09-15  1:56 UTC (permalink / raw)
  To: kbuild-all


Hi Thomas,

I love your patch! Yet something to improve:

[auto build test ERROR on next-20200914]
[also build test ERROR on v5.9-rc5]
[cannot apply to linuxtv-media/master drm-intel/for-linux-next tegra/for-next linus/master v5.9-rc5 v5.9-rc4 v5.9-rc3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Thomas-Zimmermann/dma-buf-Flag-vmap-ed-memory-as-system-or-I-O-memory/20200914-192712
base:    f965d3ec86fa89285db0fbb983da76ba9c398efa
config: i386-randconfig-r034-20200913 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # save the attached .config to linux build tree
        make W=1 ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c:291:
>> drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c:89:10: error: initialization of 'int (*)(struct dma_buf *, struct dma_buf_map *)' from incompatible pointer type 'void * (*)(struct dma_buf *)' [-Werror=incompatible-pointer-types]
      89 |  .vmap = mock_dmabuf_vmap,
         |          ^~~~~~~~~~~~~~~~
   drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c:89:10: note: (near initialization for 'mock_dmabuf_ops.vmap')
   cc1: some warnings being treated as errors

# https://github.com/0day-ci/linux/commit/b9513704e28f25636f00827154183df60a80d95c
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Thomas-Zimmermann/dma-buf-Flag-vmap-ed-memory-as-system-or-I-O-memory/20200914-192712
git checkout b9513704e28f25636f00827154183df60a80d95c
vim +89 drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c

6cca22ede8a448 drivers/gpu/drm/i915/selftests/mock_dmabuf.c Chris Wilson 2017-02-13  83  
6cca22ede8a448 drivers/gpu/drm/i915/selftests/mock_dmabuf.c Chris Wilson 2017-02-13  84  static const struct dma_buf_ops mock_dmabuf_ops =  {
6cca22ede8a448 drivers/gpu/drm/i915/selftests/mock_dmabuf.c Chris Wilson 2017-02-13  85  	.map_dma_buf = mock_map_dma_buf,
6cca22ede8a448 drivers/gpu/drm/i915/selftests/mock_dmabuf.c Chris Wilson 2017-02-13  86  	.unmap_dma_buf = mock_unmap_dma_buf,
6cca22ede8a448 drivers/gpu/drm/i915/selftests/mock_dmabuf.c Chris Wilson 2017-02-13  87  	.release = mock_dmabuf_release,
6cca22ede8a448 drivers/gpu/drm/i915/selftests/mock_dmabuf.c Chris Wilson 2017-02-13  88  	.mmap = mock_dmabuf_mmap,
6cca22ede8a448 drivers/gpu/drm/i915/selftests/mock_dmabuf.c Chris Wilson 2017-02-13 @89  	.vmap = mock_dmabuf_vmap,
6cca22ede8a448 drivers/gpu/drm/i915/selftests/mock_dmabuf.c Chris Wilson 2017-02-13  90  	.vunmap = mock_dmabuf_vunmap,
6cca22ede8a448 drivers/gpu/drm/i915/selftests/mock_dmabuf.c Chris Wilson 2017-02-13  91  };
6cca22ede8a448 drivers/gpu/drm/i915/selftests/mock_dmabuf.c Chris Wilson 2017-02-13  92  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 38656 bytes --]


* Re: [PATCH 2/3] dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
  2020-09-14 11:25   ` Thomas Zimmermann
  (?)
  (?)
@ 2020-09-16  9:35     ` Daniel Vetter
  -1 siblings, 0 replies; 57+ messages in thread
From: Daniel Vetter @ 2020-09-16  9:35 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: sumit.semwal, christian.koenig, daniel, airlied, sam,
	mark.cave-ayland, kraxel, davem, maarten.lankhorst, mripard,
	l.stach, linux+etnaviv, christian.gmeiner, jani.nikula,
	joonas.lahtinen, rodrigo.vivi, thierry.reding, jonathanh, pawel,
	m.szyprowski, kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom, linux-media, dri-devel, linaro-mm-sig, etnaviv,
	intel-gfx, linux-tegra, sparclinux

On Mon, Sep 14, 2020 at 01:25:20PM +0200, Thomas Zimmermann wrote:
> This patch updates dma_buf_vmap() and dma-buf's vmap callback to use
> struct dma_buf_map.
> 
> The interfaces used to return a buffer address. This address now gets
> stored in an instance of the structure that is given as an additional
> argument. The functions return an errno code on errors.
> 
> Users of the functions are updated accordingly. This is only an interface
> change. It is currently expected that dma-buf memory can be accessed with
> system memory load/store operations.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/dma-buf/dma-buf.c                     | 26 ++++++++++---------
>  drivers/gpu/drm/drm_gem_cma_helper.c          | 13 +++++-----
>  drivers/gpu/drm/drm_gem_shmem_helper.c        | 14 ++++++----
>  drivers/gpu/drm/drm_prime.c                   |  8 +++---
>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  8 +++++-
>  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    | 11 ++++++--
>  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  | 12 ++++++---
>  drivers/gpu/drm/tegra/gem.c                   | 18 ++++++++-----
>  .../common/videobuf2/videobuf2-dma-contig.c   | 14 +++++++---
>  .../media/common/videobuf2/videobuf2-dma-sg.c | 16 ++++++++----
>  .../common/videobuf2/videobuf2-vmalloc.c      | 15 ++++++++---
>  include/drm/drm_prime.h                       |  3 ++-
>  include/linux/dma-buf-map.h                   | 13 ++++++++++
>  include/linux/dma-buf.h                       |  6 ++---
>  14 files changed, 122 insertions(+), 55 deletions(-)
> 
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 5e849ca241a0..c99e3577aa2f 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1186,46 +1186,48 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
>   * dma_buf_vmap - Create virtual mapping for the buffer object into kernel
>   * address space. Same restrictions as for vmap and friends apply.
>   * @dmabuf:	[in]	buffer to vmap
> + * @map:	[out]	returns the vmap pointer
>   *
>   * This call may fail due to lack of virtual mapping address space.
>   * These calls are optional in drivers. The intended use for them
>   * is for mapping objects linear in kernel space for high use objects.
>   * Please attempt to use kmap/kunmap before thinking about these interfaces.
>   *
> - * Returns NULL on error.
> + * Returns 0 on success, or a negative errno code otherwise.
>   */
> -void *dma_buf_vmap(struct dma_buf *dmabuf)
> +int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
>  {
> -	void *ptr;
> +	struct dma_buf_map ptr;

Hm I think for safety we should unconditionally clear map, even when it
fails. Otherwise callers of this who fail to handle errors might trip up
on stack garbage, instead of tripping over a NULL pointer. That's a step
back from the old version of

	vaddr = dma_buf_vmap()

where you were guaranteed to trip over a NULL pointer if you didn't check
for errors. So also no need for the local variable.
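
A sketch of both points combined, clearing the caller's map up front via
dma_buf_map_set_vaddr() from this patch and dropping the local variable:

int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
{
        int ret = 0;

        /* Clear first: a caller that ignores the return value then trips
         * over a NULL pointer instead of stack garbage. */
        dma_buf_map_set_vaddr(map, NULL);

        if (WARN_ON(!dmabuf))
                return -EINVAL;

        if (!dmabuf->ops->vmap)
                return -EINVAL;

        mutex_lock(&dmabuf->lock);
        if (dmabuf->vmapping_counter) {
                dmabuf->vmapping_counter++;
                BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
                *map = dmabuf->vmap_ptr;
                goto out_unlock;
        }

        BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));

        /* No local map needed: vmap straight into the caller's copy. */
        ret = dmabuf->ops->vmap(dmabuf, map);
        if (WARN_ON_ONCE(ret))
                goto out_unlock;

        dmabuf->vmap_ptr = *map;
        dmabuf->vmapping_counter = 1;

out_unlock:
        mutex_unlock(&dmabuf->lock);
        return ret;
}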

Otherwise I think this interface looks clean.
-Daniel

> +	int ret = 0;
>  
>  	if (WARN_ON(!dmabuf))
> -		return NULL;
> +		return -EINVAL;
>  
>  	if (!dmabuf->ops->vmap)
> -		return NULL;
> +		return -EINVAL;
>  
>  	mutex_lock(&dmabuf->lock);
>  	if (dmabuf->vmapping_counter) {
>  		dmabuf->vmapping_counter++;
>  		BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
> -		ptr = dmabuf->vmap_ptr.vaddr;
> +		*map = dmabuf->vmap_ptr;
>  		goto out_unlock;
>  	}
>  
>  	BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));
>  
> -	ptr = dmabuf->ops->vmap(dmabuf);
> -	if (WARN_ON_ONCE(IS_ERR(ptr)))
> -		ptr = NULL;
> -	if (!ptr)
> +	ret = dmabuf->ops->vmap(dmabuf, &ptr);
> +	if (WARN_ON_ONCE(ret))
>  		goto out_unlock;
>  
> -	dmabuf->vmap_ptr.vaddr = ptr;
> +	dmabuf->vmap_ptr = ptr;
>  	dmabuf->vmapping_counter = 1;
>  
> +	*map = dmabuf->vmap_ptr;
> +
>  out_unlock:
>  	mutex_unlock(&dmabuf->lock);
> -	return ptr;
> +	return ret;
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_vmap);
>  
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 822edeadbab3..062315c25c12 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -634,22 +634,23 @@ drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *dev,
>  {
>  	struct drm_gem_cma_object *cma_obj;
>  	struct drm_gem_object *obj;
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	vaddr = dma_buf_vmap(attach->dmabuf);
> -	if (!vaddr) {
> +	ret = dma_buf_vmap(attach->dmabuf, &map);
> +	if (ret) {
>  		DRM_ERROR("Failed to vmap PRIME buffer\n");
> -		return ERR_PTR(-ENOMEM);
> +		return ERR_PTR(ret);
>  	}
>  
>  	obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
>  	if (IS_ERR(obj)) {
> -		dma_buf_vunmap(attach->dmabuf, vaddr);
> +		dma_buf_vunmap(attach->dmabuf, map.vaddr);
>  		return obj;
>  	}
>  
>  	cma_obj = to_drm_gem_cma_obj(obj);
> -	cma_obj->vaddr = vaddr;
> +	cma_obj->vaddr = map.vaddr;
>  
>  	return obj;
>  }
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 0a952f27c184..ad10a57cfece 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -261,13 +261,16 @@ EXPORT_SYMBOL(drm_gem_shmem_unpin);
>  static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  {
>  	struct drm_gem_object *obj = &shmem->base;
> -	int ret;
> +	struct dma_buf_map map;
> +	int ret = 0;
>  
>  	if (shmem->vmap_use_count++ > 0)
>  		return shmem->vaddr;
>  
>  	if (obj->import_attach) {
> -		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
> +		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> +		if (!ret)
> +			shmem->vaddr = map.vaddr;
>  	} else {
>  		pgprot_t prot = PAGE_KERNEL;
>  
> @@ -279,11 +282,12 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  			prot = pgprot_writecombine(prot);
>  		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
>  				    VM_MAP, prot);
> +		if (!shmem->vaddr)
> +			ret = -ENOMEM;
>  	}
>  
> -	if (!shmem->vaddr) {
> -		DRM_DEBUG_KMS("Failed to vmap pages\n");
> -		ret = -ENOMEM;
> +	if (ret) {
> +		DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret);
>  		goto err_put_pages;
>  	}
>  
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 8a6a3c99b7d8..1b7d86c7842d 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -668,16 +668,18 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
>   *
>   * Returns the kernel virtual address or NULL on failure.
>   */
> -void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
> +int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *obj = dma_buf->priv;
>  	void *vaddr;
>  
>  	vaddr = drm_gem_vmap(obj);
>  	if (IS_ERR(vaddr))
> -		vaddr = NULL;
> +		return PTR_ERR(vaddr);
>  
> -	return vaddr;
> +	dma_buf_map_set_vaddr(map, vaddr);
> +
> +	return 0;
>  }
>  EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
>  
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 4aa3426a9ba4..80a9fc143bbb 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -85,9 +85,15 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
>  
>  static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
>  {
> +	struct dma_buf_map map;
> +	int ret;
> +
>  	lockdep_assert_held(&etnaviv_obj->lock);
>  
> -	return dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf);
> +	ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map);
> +	if (ret)
> +		return NULL;
> +	return map.vaddr;
>  }
>  
>  static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> index 27fddc22a7c6..77b363d3000b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> @@ -82,11 +82,18 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
>  	i915_gem_object_unpin_pages(obj);
>  }
>  
> -static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
> +static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
>  {
>  	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
> +	void *vaddr;
>  
> -	return i915_gem_object_pin_map(obj, I915_MAP_WB);
> +	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
> +	if (IS_ERR(vaddr))
> +		return PTR_ERR(vaddr);
> +
> +	dma_buf_map_set_vaddr(map, vaddr);
> +
> +	return 0;
>  }
>  
>  static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> index 2a52b92586b9..f79ebc5329b7 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> @@ -82,6 +82,7 @@ static int igt_dmabuf_import(void *arg)
>  	struct drm_i915_gem_object *obj;
>  	struct dma_buf *dmabuf;
>  	void *obj_map, *dma_map;
> +	struct dma_buf_map map;
>  	u32 pattern[] = { 0, 0xaa, 0xcc, 0x55, 0xff };
>  	int err, i;
>  
> @@ -110,7 +111,8 @@ static int igt_dmabuf_import(void *arg)
>  		goto out_obj;
>  	}
>  
> -	dma_map = dma_buf_vmap(dmabuf);
> +	err = dma_buf_vmap(dmabuf, &map);
> +	dma_map = err ? NULL : map.vaddr;
>  	if (!dma_map) {
>  		pr_err("dma_buf_vmap failed\n");
>  		err = -ENOMEM;
> @@ -163,6 +165,7 @@ static int igt_dmabuf_import_ownership(void *arg)
>  	struct drm_i915_private *i915 = arg;
>  	struct drm_i915_gem_object *obj;
>  	struct dma_buf *dmabuf;
> +	struct dma_buf_map map;
>  	void *ptr;
>  	int err;
>  
> @@ -170,7 +173,8 @@ static int igt_dmabuf_import_ownership(void *arg)
>  	if (IS_ERR(dmabuf))
>  		return PTR_ERR(dmabuf);
>  
> -	ptr = dma_buf_vmap(dmabuf);
> +	err = dma_buf_vmap(dmabuf, &map);
> +	ptr = err ? NULL : map.vaddr;
>  	if (!ptr) {
>  		pr_err("dma_buf_vmap failed\n");
>  		err = -ENOMEM;
> @@ -212,6 +216,7 @@ static int igt_dmabuf_export_vmap(void *arg)
>  	struct drm_i915_private *i915 = arg;
>  	struct drm_i915_gem_object *obj;
>  	struct dma_buf *dmabuf;
> +	struct dma_buf_map map;
>  	void *ptr;
>  	int err;
>  
> @@ -228,7 +233,8 @@ static int igt_dmabuf_export_vmap(void *arg)
>  	}
>  	i915_gem_object_put(obj);
>  
> -	ptr = dma_buf_vmap(dmabuf);
> +	err = dma_buf_vmap(dmabuf, &map);
> +	ptr = err ? NULL : map.vaddr;
>  	if (!ptr) {
>  		pr_err("dma_buf_vmap failed\n");
>  		err = -ENOMEM;
> diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
> index 47e2935b8c68..81663036c701 100644
> --- a/drivers/gpu/drm/tegra/gem.c
> +++ b/drivers/gpu/drm/tegra/gem.c
> @@ -132,14 +132,18 @@ static void tegra_bo_unpin(struct device *dev, struct sg_table *sgt)
>  static void *tegra_bo_mmap(struct host1x_bo *bo)
>  {
>  	struct tegra_bo *obj = host1x_to_tegra_bo(bo);
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	if (obj->vaddr)
> +	if (obj->vaddr) {
>  		return obj->vaddr;
> -	else if (obj->gem.import_attach)
> -		return dma_buf_vmap(obj->gem.import_attach->dmabuf);
> -	else
> +	} else if (obj->gem.import_attach) {
> +		ret = dma_buf_vmap(obj->gem.import_attach->dmabuf, &map);
> +		return ret ? NULL : map.vaddr;
> +	} else {
>  		return vmap(obj->pages, obj->num_pages, VM_MAP,
>  			    pgprot_writecombine(PAGE_KERNEL));
> +	}
>  }
>  
>  static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
> @@ -641,12 +645,14 @@ static int tegra_gem_prime_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
>  	return __tegra_gem_mmap(gem, vma);
>  }
>  
> -static void *tegra_gem_prime_vmap(struct dma_buf *buf)
> +static int tegra_gem_prime_vmap(struct dma_buf *buf, struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *gem = buf->priv;
>  	struct tegra_bo *bo = to_tegra_bo(gem);
>  
> -	return bo->vaddr;
> +	dma_buf_map_set_vaddr(map, bo->vaddr);
> +
> +	return 0;
>  }
>  
>  static void tegra_gem_prime_vunmap(struct dma_buf *buf, void *vaddr)
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> index ec3446cc45b8..11428287bdf3 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> @@ -81,9 +81,13 @@ static void *vb2_dc_cookie(void *buf_priv)
>  static void *vb2_dc_vaddr(void *buf_priv)
>  {
>  	struct vb2_dc_buf *buf = buf_priv;
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	if (!buf->vaddr && buf->db_attach)
> -		buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
> +	if (!buf->vaddr && buf->db_attach) {
> +		ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
> +		buf->vaddr = ret ? NULL : map.vaddr;
> +	}
>  
>  	return buf->vaddr;
>  }
> @@ -365,11 +369,13 @@ vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
>  	return 0;
>  }
>  
> -static void *vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf)
> +static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
>  {
>  	struct vb2_dc_buf *buf = dbuf->priv;
>  
> -	return buf->vaddr;
> +	dma_buf_map_set_vaddr(map, buf->vaddr);
> +
> +	return 0;
>  }
>  
>  static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> index 0a40e00f0d7e..c51170e9c1b9 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> @@ -300,14 +300,18 @@ static void vb2_dma_sg_put_userptr(void *buf_priv)
>  static void *vb2_dma_sg_vaddr(void *buf_priv)
>  {
>  	struct vb2_dma_sg_buf *buf = buf_priv;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	BUG_ON(!buf);
>  
>  	if (!buf->vaddr) {
> -		if (buf->db_attach)
> -			buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
> -		else
> +		if (buf->db_attach) {
> +			ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
> +			buf->vaddr = ret ? NULL : map.vaddr;
> +		} else {
>  			buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1);
> +		}
>  	}
>  
>  	/* add offset in case userptr is not page-aligned */
> @@ -489,11 +493,13 @@ vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
>  	return 0;
>  }
>  
> -static void *vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf)
> +static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
>  {
>  	struct vb2_dma_sg_buf *buf = dbuf->priv;
>  
> -	return vb2_dma_sg_vaddr(buf);
> +	dma_buf_map_set_vaddr(map, buf->vaddr);
> +
> +	return 0;
>  }
>  
>  static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf,
> diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> index c66fda4a65e4..7b68e2379c65 100644
> --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> @@ -318,11 +318,13 @@ static void vb2_vmalloc_dmabuf_ops_release(struct dma_buf *dbuf)
>  	vb2_vmalloc_put(dbuf->priv);
>  }
>  
> -static void *vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf)
> +static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
>  {
>  	struct vb2_vmalloc_buf *buf = dbuf->priv;
>  
> -	return buf->vaddr;
> +	dma_buf_map_set_vaddr(map, buf->vaddr);
> +
> +	return 0;
>  }
>  
>  static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf,
> @@ -374,10 +376,15 @@ static struct dma_buf *vb2_vmalloc_get_dmabuf(void *buf_priv, unsigned long flag
>  static int vb2_vmalloc_map_dmabuf(void *mem_priv)
>  {
>  	struct vb2_vmalloc_buf *buf = mem_priv;
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	buf->vaddr = dma_buf_vmap(buf->dbuf);
> +	ret = dma_buf_vmap(buf->dbuf, &map);
> +	if (ret)
> +		return -EFAULT;
> +	buf->vaddr = map.vaddr;
>  
> -	return buf->vaddr ? 0 : -EFAULT;
> +	return 0;
>  }
>  
>  static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
> diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
> index bf141e74a1c2..3ee22639ff77 100644
> --- a/include/drm/drm_prime.h
> +++ b/include/drm/drm_prime.h
> @@ -54,6 +54,7 @@ struct device;
>  struct dma_buf_export_info;
>  struct dma_buf;
>  struct dma_buf_attachment;
> +struct dma_buf_map;
>  
>  enum dma_data_direction;
>  
> @@ -82,7 +83,7 @@ struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
>  void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
>  			   struct sg_table *sgt,
>  			   enum dma_data_direction dir);
> -void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf);
> +int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map);
>  void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
>  
>  int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index d4b1bb3cc4b0..6b4f6e0e8b5d 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -28,6 +28,19 @@ struct dma_buf_map {
>  	bool is_iomem;
>  };
>  
> +/**
> + * dma_buf_map_set_vaddr - Sets a dma-buf mapping structure to an address in system memory
> + * @map:	The dma-buf mapping structure
> + * @vaddr:	A system-memory address
> + *
> + * Sets the address and clears the I/O-memory flag.
> + */
> +static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> +{
> +	map->vaddr = vaddr;
> +	map->is_iomem = false;
> +}
> +
>  /* API transition helper */
>  static inline bool dma_buf_map_is_vaddr(const struct dma_buf_map *map, const void *vaddr)
>  {
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index fcc2ddfb6d18..7237997cfa38 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -266,7 +266,7 @@ struct dma_buf_ops {
>  	 */
>  	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>  
> -	void *(*vmap)(struct dma_buf *);
> +	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>  	void (*vunmap)(struct dma_buf *, void *vaddr);
>  };
>  
> @@ -503,6 +503,6 @@ int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
>  
>  int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
>  		 unsigned long);
> -void *dma_buf_vmap(struct dma_buf *);
> -void dma_buf_vunmap(struct dma_buf *, void *vaddr);
> +int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
> +void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr);
>  #endif /* __DMA_BUF_H__ */
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH 2/3] dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
@ 2020-09-16  9:35     ` Daniel Vetter
  0 siblings, 0 replies; 57+ messages in thread
From: Daniel Vetter @ 2020-09-16  9:35 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: christian.koenig, airlied, mark.cave-ayland, dri-devel, chris,
	thierry.reding, kraxel, sparclinux, sam, m.szyprowski, jonathanh,
	matthew.auld, linux+etnaviv, linux-media, pawel, intel-gfx,
	etnaviv, linaro-mm-sig, thomas.hellstrom, rodrigo.vivi,
	linux-tegra, mchehab, tfiga, kyungmin.park, davem

On Mon, Sep 14, 2020 at 01:25:20PM +0200, Thomas Zimmermann wrote:
> This patch updates dma_buf_vmap() and dma-buf's vmap callback to use
> struct dma_buf_map.
> 
> The interfaces used to return a buffer address. This address now gets
> stored in an instance of the structure that is given as an additional
> argument. The functions return an errno code on errors.
> 
> Users of the functions are updated accordingly. This is only an interface
> change. It is currently expected that dma-buf memory can be accessed with
> system memory load/store operations.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/dma-buf/dma-buf.c                     | 26 ++++++++++---------
>  drivers/gpu/drm/drm_gem_cma_helper.c          | 13 +++++-----
>  drivers/gpu/drm/drm_gem_shmem_helper.c        | 14 ++++++----
>  drivers/gpu/drm/drm_prime.c                   |  8 +++---
>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  8 +++++-
>  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    | 11 ++++++--
>  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  | 12 ++++++---
>  drivers/gpu/drm/tegra/gem.c                   | 18 ++++++++-----
>  .../common/videobuf2/videobuf2-dma-contig.c   | 14 +++++++---
>  .../media/common/videobuf2/videobuf2-dma-sg.c | 16 ++++++++----
>  .../common/videobuf2/videobuf2-vmalloc.c      | 15 ++++++++---
>  include/drm/drm_prime.h                       |  3 ++-
>  include/linux/dma-buf-map.h                   | 13 ++++++++++
>  include/linux/dma-buf.h                       |  6 ++---
>  14 files changed, 122 insertions(+), 55 deletions(-)
> 
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 5e849ca241a0..c99e3577aa2f 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1186,46 +1186,48 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
>   * dma_buf_vmap - Create virtual mapping for the buffer object into kernel
>   * address space. Same restrictions as for vmap and friends apply.
>   * @dmabuf:	[in]	buffer to vmap
> + * @map:	[out]	returns the vmap pointer
>   *
>   * This call may fail due to lack of virtual mapping address space.
>   * These calls are optional in drivers. The intended use for them
>   * is for mapping objects linear in kernel space for high use objects.
>   * Please attempt to use kmap/kunmap before thinking about these interfaces.
>   *
> - * Returns NULL on error.
> + * Returns 0 on success, or a negative errno code otherwise.
>   */
> -void *dma_buf_vmap(struct dma_buf *dmabuf)
> +int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
>  {
> -	void *ptr;
> +	struct dma_buf_map ptr;

Hm I think for safety we should unconditionally clear map, even when it
fails. Otherwise callers of this who fail to handle errors might trip up
on stack garbage, instead of tripping over a NULL pointer. That's a step
back from the old version of

	vaddr = dma_buf_vmap()

where you were guaranteed to trip over a NULL pointer if you didn't check
for errors. So also no need for the local variable.

Otherwise I think this interface looks clean.
-Daniel

> +	int ret = 0;
>  
>  	if (WARN_ON(!dmabuf))
> -		return NULL;
> +		return -EINVAL;
>  
>  	if (!dmabuf->ops->vmap)
> -		return NULL;
> +		return -EINVAL;
>  
>  	mutex_lock(&dmabuf->lock);
>  	if (dmabuf->vmapping_counter) {
>  		dmabuf->vmapping_counter++;
>  		BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
> -		ptr = dmabuf->vmap_ptr.vaddr;
> +		*map = dmabuf->vmap_ptr;
>  		goto out_unlock;
>  	}
>  
>  	BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));
>  
> -	ptr = dmabuf->ops->vmap(dmabuf);
> -	if (WARN_ON_ONCE(IS_ERR(ptr)))
> -		ptr = NULL;
> -	if (!ptr)
> +	ret = dmabuf->ops->vmap(dmabuf, &ptr);
> +	if (WARN_ON_ONCE(ret))
>  		goto out_unlock;
>  
> -	dmabuf->vmap_ptr.vaddr = ptr;
> +	dmabuf->vmap_ptr = ptr;
>  	dmabuf->vmapping_counter = 1;
>  
> +	*map = dmabuf->vmap_ptr;
> +
>  out_unlock:
>  	mutex_unlock(&dmabuf->lock);
> -	return ptr;
> +	return ret;
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_vmap);
>  
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 822edeadbab3..062315c25c12 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -634,22 +634,23 @@ drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *dev,
>  {
>  	struct drm_gem_cma_object *cma_obj;
>  	struct drm_gem_object *obj;
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	vaddr = dma_buf_vmap(attach->dmabuf);
> -	if (!vaddr) {
> +	ret = dma_buf_vmap(attach->dmabuf, &map);
> +	if (ret) {
>  		DRM_ERROR("Failed to vmap PRIME buffer\n");
> -		return ERR_PTR(-ENOMEM);
> +		return ERR_PTR(ret);
>  	}
>  
>  	obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
>  	if (IS_ERR(obj)) {
> -		dma_buf_vunmap(attach->dmabuf, vaddr);
> +		dma_buf_vunmap(attach->dmabuf, map.vaddr);
>  		return obj;
>  	}
>  
>  	cma_obj = to_drm_gem_cma_obj(obj);
> -	cma_obj->vaddr = vaddr;
> +	cma_obj->vaddr = map.vaddr;
>  
>  	return obj;
>  }
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 0a952f27c184..ad10a57cfece 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -261,13 +261,16 @@ EXPORT_SYMBOL(drm_gem_shmem_unpin);
>  static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  {
>  	struct drm_gem_object *obj = &shmem->base;
> -	int ret;
> +	struct dma_buf_map map;
> +	int ret = 0;
>  
>  	if (shmem->vmap_use_count++ > 0)
>  		return shmem->vaddr;
>  
>  	if (obj->import_attach) {
> -		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
> +		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> +		if (!ret)
> +			shmem->vaddr = map.vaddr;
>  	} else {
>  		pgprot_t prot = PAGE_KERNEL;
>  
> @@ -279,11 +282,12 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  			prot = pgprot_writecombine(prot);
>  		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
>  				    VM_MAP, prot);
> +		if (!shmem->vaddr)
> +			ret = -ENOMEM;
>  	}
>  
> -	if (!shmem->vaddr) {
> -		DRM_DEBUG_KMS("Failed to vmap pages\n");
> -		ret = -ENOMEM;
> +	if (ret) {
> +		DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret);
>  		goto err_put_pages;
>  	}
>  
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 8a6a3c99b7d8..1b7d86c7842d 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -668,16 +668,18 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
>   *
>   * Returns the kernel virtual address or NULL on failure.
>   */
> -void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
> +int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *obj = dma_buf->priv;
>  	void *vaddr;
>  
>  	vaddr = drm_gem_vmap(obj);
>  	if (IS_ERR(vaddr))
> -		vaddr = NULL;
> +		return PTR_ERR(vaddr);
>  
> -	return vaddr;
> +	dma_buf_map_set_vaddr(map, vaddr);
> +
> +	return 0;
>  }
>  EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
>  
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 4aa3426a9ba4..80a9fc143bbb 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -85,9 +85,15 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
>  
>  static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
>  {
> +	struct dma_buf_map map;
> +	int ret;
> +
>  	lockdep_assert_held(&etnaviv_obj->lock);
>  
> -	return dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf);
> +	ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map);
> +	if (ret)
> +		return NULL;
> +	return map.vaddr;
>  }
>  
>  static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> index 27fddc22a7c6..77b363d3000b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> @@ -82,11 +82,18 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
>  	i915_gem_object_unpin_pages(obj);
>  }
>  
> -static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
> +static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
>  {
>  	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
> +	void *vaddr;
>  
> -	return i915_gem_object_pin_map(obj, I915_MAP_WB);
> +	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
> +	if (IS_ERR(vaddr))
> +		return PTR_ERR(vaddr);
> +
> +	dma_buf_map_set_vaddr(map, vaddr);
> +
> +	return 0;
>  }
>  
>  static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> index 2a52b92586b9..f79ebc5329b7 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> @@ -82,6 +82,7 @@ static int igt_dmabuf_import(void *arg)
>  	struct drm_i915_gem_object *obj;
>  	struct dma_buf *dmabuf;
>  	void *obj_map, *dma_map;
> +	struct dma_buf_map map;
>  	u32 pattern[] = { 0, 0xaa, 0xcc, 0x55, 0xff };
>  	int err, i;
>  
> @@ -110,7 +111,8 @@ static int igt_dmabuf_import(void *arg)
>  		goto out_obj;
>  	}
>  
> -	dma_map = dma_buf_vmap(dmabuf);
> +	err = dma_buf_vmap(dmabuf, &map);
> +	dma_map = err ? NULL : map.vaddr;
>  	if (!dma_map) {
>  		pr_err("dma_buf_vmap failed\n");
>  		err = -ENOMEM;
> @@ -163,6 +165,7 @@ static int igt_dmabuf_import_ownership(void *arg)
>  	struct drm_i915_private *i915 = arg;
>  	struct drm_i915_gem_object *obj;
>  	struct dma_buf *dmabuf;
> +	struct dma_buf_map map;
>  	void *ptr;
>  	int err;
>  
> @@ -170,7 +173,8 @@ static int igt_dmabuf_import_ownership(void *arg)
>  	if (IS_ERR(dmabuf))
>  		return PTR_ERR(dmabuf);
>  
> -	ptr = dma_buf_vmap(dmabuf);
> +	err = dma_buf_vmap(dmabuf, &map);
> +	ptr = err ? NULL : map.vaddr;
>  	if (!ptr) {
>  		pr_err("dma_buf_vmap failed\n");
>  		err = -ENOMEM;
> @@ -212,6 +216,7 @@ static int igt_dmabuf_export_vmap(void *arg)
>  	struct drm_i915_private *i915 = arg;
>  	struct drm_i915_gem_object *obj;
>  	struct dma_buf *dmabuf;
> +	struct dma_buf_map map;
>  	void *ptr;
>  	int err;
>  
> @@ -228,7 +233,8 @@ static int igt_dmabuf_export_vmap(void *arg)
>  	}
>  	i915_gem_object_put(obj);
>  
> -	ptr = dma_buf_vmap(dmabuf);
> +	err = dma_buf_vmap(dmabuf, &map);
> +	ptr = err ? NULL : map.vaddr;
>  	if (!ptr) {
>  		pr_err("dma_buf_vmap failed\n");
>  		err = -ENOMEM;
> diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
> index 47e2935b8c68..81663036c701 100644
> --- a/drivers/gpu/drm/tegra/gem.c
> +++ b/drivers/gpu/drm/tegra/gem.c
> @@ -132,14 +132,18 @@ static void tegra_bo_unpin(struct device *dev, struct sg_table *sgt)
>  static void *tegra_bo_mmap(struct host1x_bo *bo)
>  {
>  	struct tegra_bo *obj = host1x_to_tegra_bo(bo);
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	if (obj->vaddr)
> +	if (obj->vaddr) {
>  		return obj->vaddr;
> -	else if (obj->gem.import_attach)
> -		return dma_buf_vmap(obj->gem.import_attach->dmabuf);
> -	else
> +	} else if (obj->gem.import_attach) {
> +		ret = dma_buf_vmap(obj->gem.import_attach->dmabuf, &map);
> +		return ret ? NULL : map.vaddr;
> +	} else {
>  		return vmap(obj->pages, obj->num_pages, VM_MAP,
>  			    pgprot_writecombine(PAGE_KERNEL));
> +	}
>  }
>  
>  static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
> @@ -641,12 +645,14 @@ static int tegra_gem_prime_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
>  	return __tegra_gem_mmap(gem, vma);
>  }
>  
> -static void *tegra_gem_prime_vmap(struct dma_buf *buf)
> +static int tegra_gem_prime_vmap(struct dma_buf *buf, struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *gem = buf->priv;
>  	struct tegra_bo *bo = to_tegra_bo(gem);
>  
> -	return bo->vaddr;
> +	dma_buf_map_set_vaddr(map, bo->vaddr);
> +
> +	return 0;
>  }
>  
>  static void tegra_gem_prime_vunmap(struct dma_buf *buf, void *vaddr)
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> index ec3446cc45b8..11428287bdf3 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> @@ -81,9 +81,13 @@ static void *vb2_dc_cookie(void *buf_priv)
>  static void *vb2_dc_vaddr(void *buf_priv)
>  {
>  	struct vb2_dc_buf *buf = buf_priv;
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	if (!buf->vaddr && buf->db_attach)
> -		buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
> +	if (!buf->vaddr && buf->db_attach) {
> +		ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
> +		buf->vaddr = ret ? NULL : map.vaddr;
> +	}
>  
>  	return buf->vaddr;
>  }
> @@ -365,11 +369,13 @@ vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
>  	return 0;
>  }
>  
> -static void *vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf)
> +static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
>  {
>  	struct vb2_dc_buf *buf = dbuf->priv;
>  
> -	return buf->vaddr;
> +	dma_buf_map_set_vaddr(map, buf->vaddr);
> +
> +	return 0;
>  }
>  
>  static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> index 0a40e00f0d7e..c51170e9c1b9 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> @@ -300,14 +300,18 @@ static void vb2_dma_sg_put_userptr(void *buf_priv)
>  static void *vb2_dma_sg_vaddr(void *buf_priv)
>  {
>  	struct vb2_dma_sg_buf *buf = buf_priv;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	BUG_ON(!buf);
>  
>  	if (!buf->vaddr) {
> -		if (buf->db_attach)
> -			buf->vaddr = dma_buf_vmap(buf->db_attach->dmabuf);
> -		else
> +		if (buf->db_attach) {
> +			ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
> +			buf->vaddr = ret ? NULL : map.vaddr;
> +		} else {
>  			buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1);
> +		}
>  	}
>  
>  	/* add offset in case userptr is not page-aligned */
> @@ -489,11 +493,13 @@ vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
>  	return 0;
>  }
>  
> -static void *vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf)
> +static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
>  {
>  	struct vb2_dma_sg_buf *buf = dbuf->priv;
>  
> -	return vb2_dma_sg_vaddr(buf);
> +	dma_buf_map_set_vaddr(map, buf->vaddr);
> +
> +	return 0;
>  }
>  
>  static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf,
> diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> index c66fda4a65e4..7b68e2379c65 100644
> --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> @@ -318,11 +318,13 @@ static void vb2_vmalloc_dmabuf_ops_release(struct dma_buf *dbuf)
>  	vb2_vmalloc_put(dbuf->priv);
>  }
>  
> -static void *vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf)
> +static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map)
>  {
>  	struct vb2_vmalloc_buf *buf = dbuf->priv;
>  
> -	return buf->vaddr;
> +	dma_buf_map_set_vaddr(map, buf->vaddr);
> +
> +	return 0;
>  }
>  
>  static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf,
> @@ -374,10 +376,15 @@ static struct dma_buf *vb2_vmalloc_get_dmabuf(void *buf_priv, unsigned long flag
>  static int vb2_vmalloc_map_dmabuf(void *mem_priv)
>  {
>  	struct vb2_vmalloc_buf *buf = mem_priv;
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	buf->vaddr = dma_buf_vmap(buf->dbuf);
> +	ret = dma_buf_vmap(buf->dbuf, &map);
> +	if (ret)
> +		return -EFAULT;
> +	buf->vaddr = map.vaddr;
>  
> -	return buf->vaddr ? 0 : -EFAULT;
> +	return 0;
>  }
>  
>  static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
> diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
> index bf141e74a1c2..3ee22639ff77 100644
> --- a/include/drm/drm_prime.h
> +++ b/include/drm/drm_prime.h
> @@ -54,6 +54,7 @@ struct device;
>  struct dma_buf_export_info;
>  struct dma_buf;
>  struct dma_buf_attachment;
> +struct dma_buf_map;
>  
>  enum dma_data_direction;
>  
> @@ -82,7 +83,7 @@ struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
>  void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
>  			   struct sg_table *sgt,
>  			   enum dma_data_direction dir);
> -void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf);
> +int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map);
>  void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
>  
>  int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index d4b1bb3cc4b0..6b4f6e0e8b5d 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -28,6 +28,19 @@ struct dma_buf_map {
>  	bool is_iomem;
>  };
>  
> +/**
> + * dma_buf_map_set_vaddr - Sets a dma-buf mapping structure to an address in system memory
> + * @map:	The dma-buf mapping structure
> + * @vaddr:	A system-memory address
> + *
> + * Sets the address and clears the I/O-memory flag.
> + */
> +static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> +{
> +	map->vaddr = vaddr;
> +	map->is_iomem = false;
> +}
> +
>  /* API transition helper */
>  static inline bool dma_buf_map_is_vaddr(const struct dma_buf_map *map, const void *vaddr)
>  {
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index fcc2ddfb6d18..7237997cfa38 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -266,7 +266,7 @@ struct dma_buf_ops {
>  	 */
>  	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>  
> -	void *(*vmap)(struct dma_buf *);
> +	int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>  	void (*vunmap)(struct dma_buf *, void *vaddr);
>  };
>  
> @@ -503,6 +503,6 @@ int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
>  
>  int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
>  		 unsigned long);
> -void *dma_buf_vmap(struct dma_buf *);
> -void dma_buf_vunmap(struct dma_buf *, void *vaddr);
> +int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
> +void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr);
>  #endif /* __DMA_BUF_H__ */
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 57+ messages in thread
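For context, the calling convention after this patch works out to roughly
the sketch below (dma_buf_vunmap() still takes a plain pointer until patch
3 of the series; do_something() is a placeholder and error handling is
shortened):

	struct dma_buf_map map;	/* filled in by dma_buf_vmap() */
	int ret;

	ret = dma_buf_vmap(dmabuf, &map);
	if (ret)
		return ret;	/* negative errno; no NULL check needed */

	/*
	 * All exporters converted in this series set a system-memory
	 * address via dma_buf_map_set_vaddr(), so map.is_iomem is
	 * false for now.
	 */
	do_something(map.vaddr);

	dma_buf_vunmap(dmabuf, map.vaddr);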

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-14 11:25 ` Thomas Zimmermann
@ 2020-09-16  9:37   ` Daniel Vetter
  -1 siblings, 0 replies; 57+ messages in thread
From: Daniel Vetter @ 2020-09-16  9:37 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: sumit.semwal, christian.koenig, daniel, airlied, sam,
	mark.cave-ayland, kraxel, davem, maarten.lankhorst, mripard,
	l.stach, linux+etnaviv, christian.gmeiner, jani.nikula,
	joonas.lahtinen, rodrigo.vivi, thierry.reding, jonathanh, pawel,
	m.szyprowski, kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom, linux-media, dri-devel, linaro-mm-sig, etnaviv,
	intel-gfx, linux-tegra, sparclinux

On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
> Dma-buf provides vmap() and vunmap() for retrieving and releasing mappings
> of dma-buf memory in kernel address space. The functions operate with plain
> addresses and the assumption is that the memory can be accessed with load
> and store operations. This is not the case on some architectures (e.g.,
> sparc64) where I/O memory can only be accessed with dedicated instructions.
> 
> This patchset introduces struct dma_buf_map, which contains the address of
> a buffer and a flag that tells whether system- or I/O-memory instructions
> are required.
> 
> Some background: updating the DRM framebuffer console on sparc64 makes the
> kernel panic. This is because the framebuffer memory cannot be accessed with
> system-memory instructions. We currently employ a workaround in DRM to
> address this specific problem. [1]
> 
> To resolve the problem, we'd like to address it at the most common point,
> which is the dma-buf framework. The dma-buf mapping ideally knows if I/O
> instructions are required and exports this information to its users. The
> new structure struct dma_buf_map stores the buffer address and a flag that
> signals I/O memory. Affected users of the buffer (e.g., drivers, frameworks)
> can then access the memory accordingly.
> 
> This patchset only introduces struct dma_buf_map, and updates struct dma_buf
> and its interfaces. Further patches can update dma-buf users. For example,
> there's a prototype patchset for DRM that fixes the framebuffer problem. [2]
> 
> Further work: TTM, one of DRM's memory managers, already exports an
> is_iomem flag of its own. It could later be switched over to exporting struct
> dma_buf_map, thus simplifying some code. Several DRM drivers expect their
> fbdev console to operate on I/O memory. These could possibly be switched over
> to the generic fbdev emulation, as soon as the generic code uses struct
> dma_buf_map.
> 
> [1] https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
> [2] https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/

lgtm, imo ready to convert the follow-up patches over to this. But I think
it would be good to get at least some ack from the ttm side for the overall
plan.

Also, I think we should put all the various helpers (writel/readl, memset,
memcpy, whatever else) into the dma-buf-map.h header, so that most code
using this can just treat it as an abstract pointer type and never look
underneath it.
-Daniel
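
A rough sketch of the kind of accessor suggested above, for context. The
helper name is hypothetical, and the __iomem cast follows from the single
vaddr pointer in the struct dma_buf_map version posted here; the eventual
dma-buf-map.h interface may well differ:

	#include <linux/io.h>		/* memcpy_toio() */
	#include <linux/string.h>	/* memcpy() */

	/* Hypothetical accessor, not the actual dma-buf-map.h API: copy
	 * into a mapping without the caller having to know whether it
	 * points at system or I/O memory.
	 */
	static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst,
						 const void *src, size_t len)
	{
		if (dst->is_iomem)
			memcpy_toio((void __iomem *)dst->vaddr, src, len);
		else
			memcpy(dst->vaddr, src, len);
	}

Callers would write dma_buf_map_memcpy_to(&map, src, len) and never
dereference map.vaddr directly, which is what would let I/O-memory-only
framebuffers (e.g. on sparc64) work without driver-side special cases.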

> 
> Thomas Zimmermann (3):
>   dma-buf: Add struct dma-buf-map for storing struct dma_buf.vaddr_ptr
>   dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
>   dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
> 
>  Documentation/driver-api/dma-buf.rst          |   3 +
>  drivers/dma-buf/dma-buf.c                     |  40 +++---
>  drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
>  drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
>  drivers/gpu/drm/drm_prime.c                   |  14 +-
>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
>  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
>  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
>  drivers/gpu/drm/tegra/gem.c                   |  23 ++--
>  .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
>  .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
>  .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
>  include/drm/drm_prime.h                       |   5 +-
>  include/linux/dma-buf-map.h                   | 126 ++++++++++++++++++
>  include/linux/dma-buf.h                       |  11 +-
>  15 files changed, 274 insertions(+), 82 deletions(-)
>  create mode 100644 include/linux/dma-buf-map.h
> 
> --
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-16  9:37   ` Daniel Vetter
@ 2020-09-16 10:48     ` Thomas Zimmermann
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-16 10:48 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: sumit.semwal, christian.koenig, airlied, sam, mark.cave-ayland,
	kraxel, davem, maarten.lankhorst, mripard, l.stach,
	linux+etnaviv, christian.gmeiner, jani.nikula, joonas.lahtinen,
	rodrigo.vivi, thierry.reding, jonathanh, pawel, m.szyprowski,
	kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom, linux-media, dri-devel, linaro-mm-sig, etnaviv,
	intel-gfx, linux-tegra, sparclinux


[-- Attachment #1.1: Type: text/plain, Size: 4603 bytes --]

Hi

On 16.09.20 at 11:37, Daniel Vetter wrote:
> On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
>> Dma-buf provides vmap() and vunmap() for retrieving and releasing mappings
>> of dma-buf memory in kernel address space. The functions operate with plain
>> addresses and the assumption is that the memory can be accessed with load
>> and store operations. This is not the case on some architectures (e.g.,
>> sparc64) where I/O memory can only be accessed with dedicated instructions.
>>
>> This patchset introduces struct dma_buf_map, which contains the address of
>> a buffer and a flag that tells whether system- or I/O-memory instructions
>> are required.
>>
>> Some background: updating the DRM framebuffer console on sparc64 makes the
>> kernel panic. This is because the framebuffer memory cannot be accessed with
>> system-memory instructions. We currently employ a workaround in DRM to
>> address this specific problem. [1]
>>
>> To resolve the problem, we'd like to address it at the most common point,
>> which is the dma-buf framework. The dma-buf mapping ideally knows if I/O
>> instructions are required and exports this information to its users. The
>> new structure struct dma_buf_map stores the buffer address and a flag that
>> signals I/O memory. Affected users of the buffer (e.g., drivers, frameworks)
>> can then access the memory accordingly.
>>
>> This patchset only introduces struct dma_buf_map, and updates struct dma_buf
>> and its interfaces. Further patches can update dma-buf users. For example,
>> there's a prototype patchset for DRM that fixes the framebuffer problem. [2]
>>
>> Further work: TTM, one of DRM's memory managers, already exports an
>> is_iomem flag of its own. It could later be switched over to exporting struct
>> dma_buf_map, thus simplifying some code. Several DRM drivers expect their
>> fbdev console to operate on I/O memory. These could possibly be switched over
>> to the generic fbdev emulation, as soon as the generic code uses struct
>> dma_buf_map.
>>
>> [1] https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
>> [2] https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/
> 
> lgtm, imo ready to convert the follow-up patches over to this. But I think
> it would be good to get at least some ack from the TTM side for the overall
> plan.

Yup, it would be nice if TTM could hand out these types automatically.
Then all TTM-based drivers would automatically support it.
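
One possible shape for such a TTM interface, sketched here as an assumption
(the function name and signature are illustrative, not an existing TTM API
at this point in the thread):

  int driver_vmap_example(struct ttm_buffer_object *bo,
  			  struct dma_buf_map *map)
  {
  	int ret;

  	/* Hypothetical: TTM maps the buffer object and fills in the
  	 * tagged pointer, including whether it is I/O memory. */
  	ret = ttm_bo_vmap(bo, map);
  	if (ret)
  		return ret;

  	/* map->is_iomem now selects the right access helpers. */
  	return 0;
  }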

> 
> Also, I think we should put all the various helpers (writel/readl, memset,
> memcpy, whatever else) into the dma-buf-map.h helper, so that most code
> using this can just treat it as an abstract pointer type and never look
> underneath it.

We have some framebuffer helpers that rely on pointer arithmetic, so
we'd need that too. No big deal wrt code, but I was worried about the
overhead. If a loop goes over framebuffer memory, there's an if/else
branch for each access to the memory buffer.
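
A sketch of such a pointer-arithmetic helper, assuming the tagged-pointer
layout above; the is_iomem test is exactly the per-access branch in
question:

  /* Advance the mapping by incr bytes, whichever address is in use. */
  static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
  {
  	if (map->is_iomem)
  		map->vaddr_iomem += incr;
  	else
  		map->vaddr += incr;
  }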

Best regards
Thomas

> -Daniel
> 
>>
>> Thomas Zimmermann (3):
>>   dma-buf: Add struct dma-buf-map for storing struct dma_buf.vaddr_ptr
>>   dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
>>   dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
>>
>>  Documentation/driver-api/dma-buf.rst          |   3 +
>>  drivers/dma-buf/dma-buf.c                     |  40 +++---
>>  drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
>>  drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
>>  drivers/gpu/drm/drm_prime.c                   |  14 +-
>>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
>>  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
>>  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
>>  drivers/gpu/drm/tegra/gem.c                   |  23 ++--
>>  .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
>>  .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
>>  .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
>>  include/drm/drm_prime.h                       |   5 +-
>>  include/linux/dma-buf-map.h                   | 126 ++++++++++++++++++
>>  include/linux/dma-buf.h                       |  11 +-
>>  15 files changed, 274 insertions(+), 82 deletions(-)
>>  create mode 100644 include/linux/dma-buf-map.h
>>
>> --
>> 2.28.0
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 516 bytes --]

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-16 10:48     ` Thomas Zimmermann
@ 2020-09-16 12:24       ` Daniel Vetter
  0 siblings, 0 replies; 57+ messages in thread
From: Daniel Vetter @ 2020-09-16 12:24 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Daniel Vetter, sumit.semwal, christian.koenig, airlied, sam,
	mark.cave-ayland, kraxel, davem, maarten.lankhorst, mripard,
	l.stach, linux+etnaviv, christian.gmeiner, jani.nikula,
	joonas.lahtinen, rodrigo.vivi, thierry.reding, jonathanh, pawel,
	m.szyprowski, kyungmin.park, tfiga, mchehab, chris, matthew.auld,
	thomas.hellstrom, linux-media, dri-devel, linaro-mm-sig, etnaviv,
	intel-gfx, linux-tegra, sparclinux

On Wed, Sep 16, 2020 at 12:48:20PM +0200, Thomas Zimmermann wrote:
> Hi
> 
> On 16.09.20 at 11:37, Daniel Vetter wrote:
> > On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
> >> Dma-buf provides vmap() and vunmap() for retrieving and releasing mappings
> >> of dma-buf memory in kernel address space. The functions operate with plain
> >> addresses and the assumption is that the memory can be accessed with load
> >> and store operations. This is not the case on some architectures (e.g.,
> >> sparc64) where I/O memory can only be accessed with dedicated instructions.
> >>
> >> This patchset introduces struct dma_buf_map, which contains the address of
> >> a buffer and a flag that tells whether system- or I/O-memory instructions
> >> are required.
> >>
> >> Some background: updating the DRM framebuffer console on sparc64 makes the
> >> kernel panic. This is because the framebuffer memory cannot be accessed with
> >> system-memory instructions. We currently employ a workaround in DRM to
> >> address this specific problem. [1]
> >>
> >> To resolve the problem, we'd like to address it at the most common point,
> >> which is the dma-buf framework. The dma-buf mapping ideally knows if I/O
> >> instructions are required and exports this information to its users. The
> >> new structure struct dma_buf_map stores the buffer address and a flag that
> >> signals I/O memory. Affected users of the buffer (e.g., drivers, frameworks)
> >> can then access the memory accordingly.
> >>
> >> This patchset only introduces struct dma_buf_map, and updates struct dma_buf
> >> and its interfaces. Further patches can update dma-buf users. For example,
> >> there's a prototype patchset for DRM that fixes the framebuffer problem. [2]
> >>
> >> Further work: TTM, one of DRM's memory managers, already exports an
> >> is_iomem flag of its own. It could later be switched over to exporting struct
> >> dma_buf_map, thus simplifying some code. Several DRM drivers expect their
> >> fbdev console to operate on I/O memory. These could possibly be switched over
> >> to the generic fbdev emulation, as soon as the generic code uses struct
> >> dma_buf_map.
> >>
> >> [1] https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
> >> [2] https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/
> > 
> > lgtm, imo ready to convert the follow-up patches over to this. But I think
> > it would be good to get at least some ack from the TTM side for the overall
> > plan.
> 
> Yup, it would be nice if TTM could hand out these types automatically.
> Then all TTM-based drivers would automatically support it.
> 
> > 
> > Also, I think we should put all the various helpers (writel/readl, memset,
> > memcpy, whatever else) into the dma-buf-map.h helper, so that most code
> > using this can just treat it as an abstract pointer type and never look
> > underneath it.
> 
> We have some framebuffer helpers that rely on pointer arithmetic, so
> we'd need that too. No big deal wrt code, but I was worried about the
> overhead. If a loop goes over framebuffer memory, there's an if/else
> branch for each access to the memory buffer.

If we make all the helpers static inline, then the compiler should be able
to see that dma_buf_map.is_iomem is always the same, and produce really
optimized code for it by pulling that check out from all the loops.

So this should only result in somewhat verbose code that has to call the
dma_buf_map pointer arithmetic helpers, but not in bad generated code.
Still worth double-checking I think, since e.g. on x86 the generated code
should be the same for both cases (but maybe the compiler doesn't see
through the inline asm to realize that, so we might end up with 2 copies).
-Daniel
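
To illustrate the point: with static inline helpers the is_iomem flag is
loop-invariant and visible at every call site, so the compiler is free to
split a copy loop into two branch-free versions. A sketch, where the
readb-style helper is hypothetical and named here only for the example:

  static inline u8 dma_buf_map_readb(const struct dma_buf_map *map, size_t off)
  {
  	if (map->is_iomem)
  		return readb(map->vaddr_iomem + off);
  	return *((const u8 *)map->vaddr + off);
  }

  static void copy_from_fb(u8 *dst, const struct dma_buf_map *map, size_t len)
  {
  	size_t i;

  	/* map->is_iomem does not change inside the loop, so the check
  	 * can be hoisted or the loop unswitched; whether that happens
  	 * across the readb() inline asm is the part worth checking. */
  	for (i = 0; i < len; i++)
  		dst[i] = dma_buf_map_readb(map, i);
  }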


> 
> Best regards
> Thomas
> 
> > -Daniel
> > 
> >>
> >> Thomas Zimmermann (3):
> >>   dma-buf: Add struct dma-buf-map for storing struct dma_buf.vaddr_ptr
> >>   dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
> >>   dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
> >>
> >>  Documentation/driver-api/dma-buf.rst          |   3 +
> >>  drivers/dma-buf/dma-buf.c                     |  40 +++---
> >>  drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
> >>  drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
> >>  drivers/gpu/drm/drm_prime.c                   |  14 +-
> >>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
> >>  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
> >>  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
> >>  drivers/gpu/drm/tegra/gem.c                   |  23 ++--
> >>  .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
> >>  .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
> >>  .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
> >>  include/drm/drm_prime.h                       |   5 +-
> >>  include/linux/dma-buf-map.h                   | 126 ++++++++++++++++++
> >>  include/linux/dma-buf.h                       |  11 +-
> >>  15 files changed, 274 insertions(+), 82 deletions(-)
> >>  create mode 100644 include/linux/dma-buf-map.h
> >>
> >> --
> >> 2.28.0
> >>
> > 
> 
> -- 
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
> 




-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-16 12:24       ` Daniel Vetter
@ 2020-09-16 12:59         ` Christian König
  0 siblings, 0 replies; 57+ messages in thread
From: Christian König @ 2020-09-16 12:59 UTC (permalink / raw)
  To: Daniel Vetter, Thomas Zimmermann
  Cc: sumit.semwal, airlied, sam, mark.cave-ayland, kraxel, davem,
	maarten.lankhorst, mripard, l.stach, linux+etnaviv,
	christian.gmeiner, jani.nikula, joonas.lahtinen, rodrigo.vivi,
	thierry.reding, jonathanh, pawel, m.szyprowski, kyungmin.park,
	tfiga, mchehab, chris, matthew.auld, thomas.hellstrom,
	linux-media, dri-devel, linaro-mm-sig, etnaviv, intel-gfx,
	linux-tegra, sparclinux

On 16.09.20 at 14:24, Daniel Vetter wrote:
> On Wed, Sep 16, 2020 at 12:48:20PM +0200, Thomas Zimmermann wrote:
>> Hi
>>
>> On 16.09.20 at 11:37, Daniel Vetter wrote:
>>> On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
>>>> Dma-buf provides vmap() and vunmap() for retrieving and releasing mappings
>>>> of dma-buf memory in kernel address space. The functions operate with plain
>>>> addresses and the assumption is that the memory can be accessed with load
>>>> and store operations. This is not the case on some architectures (e.g.,
>>>> sparc64) where I/O memory can only be accessed with dedicated instructions.
>>>>
>>>> This patchset introduces struct dma_buf_map, which contains the address of
>>>> a buffer and a flag that tells whether system- or I/O-memory instructions
>>>> are required.
>>>>
>>>> Some background: updating the DRM framebuffer console on sparc64 makes the
>>>> kernel panic. This is because the framebuffer memory cannot be accessed with
>>>> system-memory instructions. We currently employ a workaround in DRM to
>>>> address this specific problem. [1]
>>>>
>>>> To resolve the problem, we'd like to address it at the most common point,
>>>> which is the dma-buf framework. The dma-buf mapping ideally knows if I/O
>>>> instructions are required and exports this information to its users. The
>>>> new structure struct dma_buf_map stores the buffer address and a flag that
>>>> signals I/O memory. Affected users of the buffer (e.g., drivers, frameworks)
>>>> can then access the memory accordingly.
>>>>
>>>> This patchset only introduces struct dma_buf_map, and updates struct dma_buf
>>>> and its interfaces. Further patches can update dma-buf users. For example,
>>>> there's a prototype patchset for DRM that fixes the framebuffer problem. [2]
>>>>
>>>> Further work: TTM, one of DRM's memory managers, already exports an
>>>> is_iomem flag of its own. It could later be switched over to exporting struct
>>>> dma_buf_map, thus simplifying some code. Several DRM drivers expect their
>>>> fbdev console to operate on I/O memory. These could possibly be switched over
>>>> to the generic fbdev emulation, as soon as the generic code uses struct
>>>> dma_buf_map.
>>>>
>>>> [1] https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
>>>> [2] https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/
>>> lgtm, imo ready to convert the follow-up patches over to this. But I think
>>> would be good to get at least some ack from the ttm side for the overall
>>> plan.
>> Yup, it would be nice if TTM could hand out these types automatically.
>> Then all TTM-based drivers would automatically support it.
>>
>>> Also, I think we should put all the various helpers (writel/readl, memset,
>>> memcpy, whatever else) into the dma-buf-map.h helper, so that most code
>>> using this can just treat it as an abstract pointer type and never look
>>> underneath it.
>> We have some framebuffer helpers that rely on pointer arithmetic, so
>> we'd need that too. No big deal wrt code, but I was worried about the
>> overhead. If a loop goes over framebuffer memory, there's an if/else
>> branch for each access to the memory buffer.
> If we make all the helpers static inline, then the compiler should be able
> to see that dma_buf_map.is_iomem is always the same, and produce really
> optimized code for it by pulling that check out from all the loops.
>
> So should only result in somewhat verbose code of having to call
> dma_buf_map pointer arithmetic helpers, but not in bad generated code.
> Still worth double-checking I think, since e.g. on x86 the generated code
> should be the same for both cases (but maybe the compiler doesn't see
> through the inline asm to realize that, so we might end up with 2 copies).

Can we have that even independent of DMA-buf? We have essentially the 
same problem in TTM and the code around that is a complete mess if you 
ask me.
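
As a rough illustration of what TTM handing out the shared type could
look like: ttm_kmap_obj_virtual() is existing TTM API, the wrapper
itself is made up here.

#include <drm/ttm/ttm_bo_api.h>
#include <linux/dma-buf-map.h>

/* Hypothetical glue: convert TTM's kmap result into struct dma_buf_map. */
static void ttm_kmap_to_dma_buf_map(struct ttm_bo_kmap_obj *kmap,
				    struct dma_buf_map *map)
{
	bool is_iomem;
	void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);

	if (is_iomem) {
		map->vaddr_iomem = (void __iomem *)vaddr;
		map->is_iomem = true;
	} else {
		map->vaddr = vaddr;
		map->is_iomem = false;
	}
}

With something like this, TTM-based drivers would report is_iomem
through the same type as every other dma-buf user.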

Christian.

> -Daniel
>
>
>> Best regards
>> Thomas
>>
>>> -Daniel
>>>
>>>> Thomas Zimmermann (3):
>>>>    dma-buf: Add struct dma-buf-map for storing struct dma_buf.vaddr_ptr
>>>>    dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
>>>>    dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
>>>>
>>>>   Documentation/driver-api/dma-buf.rst          |   3 +
>>>>   drivers/dma-buf/dma-buf.c                     |  40 +++---
>>>>   drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
>>>>   drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
>>>>   drivers/gpu/drm/drm_prime.c                   |  14 +-
>>>>   drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
>>>>   drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
>>>>   .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
>>>>   drivers/gpu/drm/tegra/gem.c                   |  23 ++--
>>>>   .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
>>>>   .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
>>>>   .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
>>>>   include/drm/drm_prime.h                       |   5 +-
>>>>   include/linux/dma-buf-map.h                   | 126 ++++++++++++++++++
>>>>   include/linux/dma-buf.h                       |  11 +-
>>>>   15 files changed, 274 insertions(+), 82 deletions(-)
>>>>   create mode 100644 include/linux/dma-buf-map.h
>>>>
>>>> --
>>>> 2.28.0
>>>>
>> -- 
>> Thomas Zimmermann
>> Graphics Driver Developer
>> SUSE Software Solutions Germany GmbH
>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>> (HRB 36809, AG Nürnberg)
>> Geschäftsführer: Felix Imendörffer
>>
>
>
>


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-16 12:59         ` Christian König
@ 2020-09-16 13:12           ` Thomas Zimmermann
  -1 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-16 13:12 UTC (permalink / raw)
  To: Christian König, Daniel Vetter
  Cc: airlied, mark.cave-ayland, dri-devel, chris, thierry.reding,
	kraxel, sparclinux, sam, m.szyprowski, jonathanh, matthew.auld,
	linux+etnaviv, linux-media, pawel, intel-gfx, etnaviv,
	linaro-mm-sig, thomas.hellstrom, rodrigo.vivi, linux-tegra,
	mchehab, tfiga, kyungmin.park, davem


[-- Attachment #1.1: Type: text/plain, Size: 7539 bytes --]

Hi

On 16.09.20 at 14:59, Christian König wrote:
> On 16.09.20 at 14:24, Daniel Vetter wrote:
>> On Wed, Sep 16, 2020 at 12:48:20PM +0200, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> On 16.09.20 at 11:37, Daniel Vetter wrote:
>>>> On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
>>>>> Dma-buf provides vmap() and vunmap() for retrieving and releasing
>>>>> mappings
>>>>> of dma-buf memory in kernel address space. The functions operate
>>>>> with plain
>>>>> addresses and the assumption is that the memory can be accessed
>>>>> with load
>>>>> and store operations. This is not the case on some architectures
>>>>> (e.g.,
>>>>> sparc64) where I/O memory can only be accessed with dedicated
>>>>> instructions.
>>>>>
>>>>> This patchset introduces struct dma_buf_map, which contains the
>>>>> address of
>>>>> a buffer and a flag that tells whether system- or I/O-memory
>>>>> instructions
>>>>> are required.
>>>>>
>>>>> Some background: updating the DRM framebuffer console on sparc64
>>>>> makes the
>>>>> kernel panic. This is because the framebuffer memory cannot be
>>>>> accessed with
>>>>> system-memory instructions. We currently employ a workaround in DRM to
>>>>> address this specific problem. [1]
>>>>>
>>>>> To resolve the problem, we'd like to address it at the most common
>>>>> point,
>>>>> which is the dma-buf framework. The dma-buf mapping ideally knows
>>>>> if I/O
>>>>> instructions are required and exports this information to its
>>>>> users. The
>>>>> new structure struct dma_buf_map stores the buffer address and a
>>>>> flag that
>>>>> signals I/O memory. Affected users of the buffer (e.g., drivers,
>>>>> frameworks)
>>>>> can then access the memory accordingly.
>>>>>
>>>>> This patchset only introduces struct dma_buf_map, and updates
>>>>> struct dma_buf
>>>>> and its interfaces. Further patches can update dma-buf users. For
>>>>> example,
>>>>> there's a prototype patchset for DRM that fixes the framebuffer
>>>>> problem. [2]
>>>>>
>>>>> Further work: TTM, one of DRM's memory managers, already exports an
>>>>> is_iomem flag of its own. It could later be switched over to
>>>>> exporting struct
>>>>> dma_buf_map, thus simplifying some code. Several DRM drivers expect
>>>>> their
>>>>> fbdev console to operate on I/O memory. These could possibly be
>>>>> switched over
>>>>> to the generic fbdev emulation, as soon as the generic code uses
>>>>> struct
>>>>> dma_buf_map.
>>>>>
>>>>> [1]
>>>>> https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
>>>>>
>>>>> [2]
>>>>> https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/
>>>>>
>>>> lgtm, imo ready to convert the follow-up patches over to this. But I
>>>> think
>>>> would be good to get at least some ack from the ttm side for the
>>>> overall
>>>> plan.
>>> Yup, it would be nice if TTM could hand out these types automatically.
>>> Then all TTM-based drivers would automatically support it.
>>>
>>>> Also, I think we should put all the various helpers (writel/readl,
>>>> memset,
>>>> memcpy, whatever else) into the dma-buf-map.h helper, so that most code
>>>> using this can just treat it as an abstract pointer type and never look
>>>> underneath it.
>>> We have some framebuffer helpers that rely on pointer arithmetic, so
>>> we'd need that too. No big deal wrt code, but I was worried about the
>>> overhead. If a loop goes over framebuffer memory, there's an if/else
>>> branch for each access to the memory buffer.
>> If we make all the helpers static inline, then the compiler should be
>> able
>> to see that dma_buf_map.is_iomem is always the same, and produce really
>> optimized code for it by pulling that check out from all the loops.
>>
>> So should only result in somewhat verbose code of having to call
>> dma_buf_map pointer arithmetic helpers, but not in bad generated code.
>> Still worth double-checking I think, since e.g. on x86 the generated code
>> should be the same for both cases (but maybe the compiler doesn't see
>> through the inline asm to realize that, so we might end up with 2
>> copies).
> 
> Can we have that even independent of DMA-buf? We have essentially the
> same problem in TTM and the code around that is a complete mess if you
> ask me.

I already put this into dma-buf because it's at the intersection of all
the affected modules. For non-dma-buf pointers (say in framebuffer
damage handling), the idea is to initialize struct dma_buf_map by hand
and use this.
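
For example, hand initialization might look like the sketch below,
assuming an initializer macro along the lines of DMA_BUF_MAP_INIT_VADDR
from the proposed header; shadow_vaddr and screen_base are placeholder
variables standing in for driver state.

#include <linux/dma-buf-map.h>

static void example_init_maps(void *shadow_vaddr, void __iomem *screen_base)
{
	/* System memory, e.g. a vmalloc'ed shadow buffer for damage handling: */
	struct dma_buf_map shadow = DMA_BUF_MAP_INIT_VADDR(shadow_vaddr);

	/* I/O memory, e.g. an ioremap()'ed VRAM aperture: */
	struct dma_buf_map fb = {
		.vaddr_iomem = screen_base,
		.is_iomem = true,
	};
}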

Where would you want to put it?

Best regards
Thomas

> 
> Christian.
> 
>> -Daniel
>>
>>
>>> Best regards
>>> Thomas
>>>
>>>> -Daniel
>>>>
>>>>> Thomas Zimmermann (3):
>>>>>    dma-buf: Add struct dma-buf-map for storing struct
>>>>> dma_buf.vaddr_ptr
>>>>>    dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
>>>>>    dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
>>>>>
>>>>>   Documentation/driver-api/dma-buf.rst          |   3 +
>>>>>   drivers/dma-buf/dma-buf.c                     |  40 +++---
>>>>>   drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
>>>>>   drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
>>>>>   drivers/gpu/drm/drm_prime.c                   |  14 +-
>>>>>   drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
>>>>>   drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
>>>>>   .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
>>>>>   drivers/gpu/drm/tegra/gem.c                   |  23 ++--
>>>>>   .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
>>>>>   .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
>>>>>   .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
>>>>>   include/drm/drm_prime.h                       |   5 +-
>>>>>   include/linux/dma-buf-map.h                   | 126
>>>>> ++++++++++++++++++
>>>>>   include/linux/dma-buf.h                       |  11 +-
>>>>>   15 files changed, 274 insertions(+), 82 deletions(-)
>>>>>   create mode 100644 include/linux/dma-buf-map.h
>>>>>
>>>>> -- 
>>>>> 2.28.0
>>>>>
>>> -- 
>>> Thomas Zimmermann
>>> Graphics Driver Developer
>>> SUSE Software Solutions Germany GmbH
>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>> (HRB 36809, AG Nürnberg)
>>> Geschäftsführer: Felix Imendörffer
>>>
>>
>>
>>
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 516 bytes --]

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
@ 2020-09-16 13:12           ` Thomas Zimmermann
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-16 13:12 UTC (permalink / raw)
  To: Christian König, Daniel Vetter
  Cc: airlied, mark.cave-ayland, dri-devel, chris, thierry.reding,
	kraxel, sparclinux, sam, m.szyprowski, jonathanh, matthew.auld,
	linux+etnaviv, linux-media, pawel, intel-gfx, etnaviv,
	linaro-mm-sig, thomas.hellstrom, rodrigo.vivi, linux-tegra,
	mchehab, tfiga, kyungmin.park, davem


[-- Attachment #1.1: Type: text/plain, Size: 7539 bytes --]

Hi

Am 16.09.20 um 14:59 schrieb Christian König:
> Am 16.09.20 um 14:24 schrieb Daniel Vetter:
>> On Wed, Sep 16, 2020 at 12:48:20PM +0200, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> Am 16.09.20 um 11:37 schrieb Daniel Vetter:
>>>> On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
>>>>> Dma-buf provides vmap() and vunmap() for retrieving and releasing
>>>>> mappings
>>>>> of dma-buf memory in kernel address space. The functions operate
>>>>> with plain
>>>>> addresses and the assumption is that the memory can be accessed
>>>>> with load
>>>>> and store operations. This is not the case on some architectures
>>>>> (e.g.,
>>>>> sparc64) where I/O memory can only be accessed with dedicated
>>>>> instructions.
>>>>>
>>>>> This patchset introduces struct dma_buf_map, which contains the
>>>>> address of
>>>>> a buffer and a flag that tells whether system- or I/O-memory
>>>>> instructions
>>>>> are required.
>>>>>
>>>>> Some background: updating the DRM framebuffer console on sparc64
>>>>> makes the
>>>>> kernel panic. This is because the framebuffer memory cannot be
>>>>> accessed with
>>>>> system-memory instructions. We currently employ a workaround in DRM to
>>>>> address this specific problem. [1]
>>>>>
>>>>> To resolve the problem, we'd like to address it at the most common
>>>>> point,
>>>>> which is the dma-buf framework. The dma-buf mapping ideally knows
>>>>> if I/O
>>>>> instructions are required and exports this information to it's
>>>>> users. The
>>>>> new structure struct dma_buf_map stores the buffer address and a
>>>>> flag that
>>>>> signals I/O memory. Affected users of the buffer (e.g., drivers,
>>>>> frameworks)
>>>>> can then access the memory accordingly.
>>>>>
>>>>> This patchset only introduces struct dma_buf_map, and updates
>>>>> struct dma_buf
>>>>> and it's interfaces. Further patches can update dma-buf users. For
>>>>> example,
>>>>> there's a prototype patchset for DRM that fixes the framebuffer
>>>>> problem. [2]
>>>>>
>>>>> Further work: TTM, one of DRM's memory managers, already exports an
>>>>> is_iomem flag of its own. It could later be switched over to
>>>>> exporting struct
>>>>> dma_buf_map, thus simplifying some code. Several DRM drivers expect
>>>>> their
>>>>> fbdev console to operate on I/O memory. These could possibly be
>>>>> switched over
>>>>> to the generic fbdev emulation, as soon as the generic code uses
>>>>> struct
>>>>> dma_buf_map.
>>>>>
>>>>> [1]
>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20200725191012.GA434957%40ravnborg.org%2F&amp;data=02%7C01%7Cchristian.koenig%40amd.com%7C04e3cc3e03ae40f1fa0f08d85a3b6a68%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637358558524732385&amp;sdata=wTmFuB95GhKUU%2F2Q91V0%2BtzAu4%2BEe3VBUcriBy3jx2g%3D&amp;reserved=0
>>>>>
>>>>> [2]
>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20200806085239.4606-1-tzimmermann%40suse.de%2F&amp;data=02%7C01%7Cchristian.koenig%40amd.com%7C04e3cc3e03ae40f1fa0f08d85a3b6a68%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637358558524732385&amp;sdata=L4rBHmegO63b%2FiTQdTyH158KNxAZwSuJCQOaFszo5L0%3D&amp;reserved=0
>>>>>
>>>> lgtm, imo ready to convert the follow-up patches over to this. But I
>>>> think
>>>> would be good to get at least some ack from the ttm side for the
>>>> overall
>>>> plan.
>>> Yup, it would be nice if TTM could had out these types automatically.
>>> Then all TTM-based drivers would automatically support it.
>>>
>>>> Also, I think we should put all the various helpers (writel/readl,
>>>> memset,
>>>> memcpy, whatever else) into the dma-buf-map.h helper, so that most code
>>>> using this can just treat it as an abstract pointer type and never look
>>>> underneath it.
>>> We have some framebuffer helpers that rely on pointer arithmetic, so
>>> we'd need that too. No big deal wrt code, but I was worried about the
>>> overhead. If a loop goes over framebuffer memory, there's an if/else
>>> branch for each access to the memory buffer.
>> If we make all the helpers static inline, then the compiler should be
>> able
>> to see that dma_buf_map.is_iomem is always the same, and produced really
>> optimized code for it by pulling that check out from all the loops.
>>
>> So should only result in somewhat verbose code of having to call
>> dma_buf_map pointer arthimetic helpers, but not in bad generated code.
>> Still worth double-checking I think, since e.g. on x86 the generated code
>> should be the same for both cases (but maybe the compiler doesn't see
>> through the inline asm to realize that, so we might end up with 2
>> copies).
> 
> Can we have that even independent of DMA-buf? We have essentially the
> same problem in TTM and the code around that is a complete mess if you
> ask me.

I already put this into dma-buf because it's at the intersection of all
the affected modules. For non-dma-buf pointers (say in framebuffer
damage handling), the idea is to initialize struct dma_buf_map by hand
and use this.

Where would you want to put it?

Best regards
Thomas

> 
> Christian.
> 
>> -Daniel
>>
>>
>>> Best regards
>>> Thomas
>>>
>>>> -Daniel
>>>>
>>>>> Thomas Zimmermann (3):
>>>>>    dma-buf: Add struct dma-buf-map for storing struct
>>>>> dma_buf.vaddr_ptr
>>>>>    dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
>>>>>    dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
>>>>>
>>>>>   Documentation/driver-api/dma-buf.rst          |   3 +
>>>>>   drivers/dma-buf/dma-buf.c                     |  40 +++---
>>>>>   drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
>>>>>   drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
>>>>>   drivers/gpu/drm/drm_prime.c                   |  14 +-
>>>>>   drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
>>>>>   drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
>>>>>   .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
>>>>>   drivers/gpu/drm/tegra/gem.c                   |  23 ++--
>>>>>   .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
>>>>>   .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
>>>>>   .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
>>>>>   include/drm/drm_prime.h                       |   5 +-
>>>>>   include/linux/dma-buf-map.h                   | 126
>>>>> ++++++++++++++++++
>>>>>   include/linux/dma-buf.h                       |  11 +-
>>>>>   15 files changed, 274 insertions(+), 82 deletions(-)
>>>>>   create mode 100644 include/linux/dma-buf-map.h
>>>>>
>>>>> -- 
>>>>> 2.28.0
>>>>>
>>> -- 
>>> Thomas Zimmermann
>>> Graphics Driver Developer
>>> SUSE Software Solutions Germany GmbH
>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>> (HRB 36809, AG Nürnberg)
>>> Geschäftsführer: Felix Imendörffer
>>>
>>
>>
>>
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 516 bytes --]

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
@ 2020-09-16 13:12           ` Thomas Zimmermann
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-16 13:12 UTC (permalink / raw)
  To: Christian König, Daniel Vetter
  Cc: airlied, mark.cave-ayland, dri-devel, chris, thierry.reding,
	kraxel, sparclinux, sam, m.szyprowski, jonathanh, matthew.auld,
	linux+etnaviv, linux-media, pawel, intel-gfx, etnaviv,
	linaro-mm-sig, thomas.hellstrom, rodrigo.vivi, linux-tegra,
	mchehab, tfiga, kyungmin.park, davem


[-- Attachment #1.1.1: Type: text/plain, Size: 7539 bytes --]

Hi

Am 16.09.20 um 14:59 schrieb Christian König:
> Am 16.09.20 um 14:24 schrieb Daniel Vetter:
>> On Wed, Sep 16, 2020 at 12:48:20PM +0200, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> Am 16.09.20 um 11:37 schrieb Daniel Vetter:
>>>> On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
>>>>> Dma-buf provides vmap() and vunmap() for retrieving and releasing
>>>>> mappings
>>>>> of dma-buf memory in kernel address space. The functions operate
>>>>> with plain
>>>>> addresses and the assumption is that the memory can be accessed
>>>>> with load
>>>>> and store operations. This is not the case on some architectures
>>>>> (e.g.,
>>>>> sparc64) where I/O memory can only be accessed with dedicated
>>>>> instructions.
>>>>>
>>>>> This patchset introduces struct dma_buf_map, which contains the
>>>>> address of
>>>>> a buffer and a flag that tells whether system- or I/O-memory
>>>>> instructions
>>>>> are required.
>>>>>
>>>>> Some background: updating the DRM framebuffer console on sparc64
>>>>> makes the
>>>>> kernel panic. This is because the framebuffer memory cannot be
>>>>> accessed with
>>>>> system-memory instructions. We currently employ a workaround in DRM to
>>>>> address this specific problem. [1]
>>>>>
>>>>> To resolve the problem, we'd like to address it at the most common
>>>>> point,
>>>>> which is the dma-buf framework. The dma-buf mapping ideally knows
>>>>> if I/O
>>>>> instructions are required and exports this information to it's
>>>>> users. The
>>>>> new structure struct dma_buf_map stores the buffer address and a
>>>>> flag that
>>>>> signals I/O memory. Affected users of the buffer (e.g., drivers,
>>>>> frameworks)
>>>>> can then access the memory accordingly.
>>>>>
>>>>> This patchset only introduces struct dma_buf_map, and updates
>>>>> struct dma_buf
>>>>> and it's interfaces. Further patches can update dma-buf users. For
>>>>> example,
>>>>> there's a prototype patchset for DRM that fixes the framebuffer
>>>>> problem. [2]
>>>>>
>>>>> Further work: TTM, one of DRM's memory managers, already exports an
>>>>> is_iomem flag of its own. It could later be switched over to
>>>>> exporting struct
>>>>> dma_buf_map, thus simplifying some code. Several DRM drivers expect
>>>>> their
>>>>> fbdev console to operate on I/O memory. These could possibly be
>>>>> switched over
>>>>> to the generic fbdev emulation, as soon as the generic code uses
>>>>> struct
>>>>> dma_buf_map.
>>>>>
>>>>> [1]
>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20200725191012.GA434957%40ravnborg.org%2F&amp;data=02%7C01%7Cchristian.koenig%40amd.com%7C04e3cc3e03ae40f1fa0f08d85a3b6a68%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637358558524732385&amp;sdata=wTmFuB95GhKUU%2F2Q91V0%2BtzAu4%2BEe3VBUcriBy3jx2g%3D&amp;reserved=0
>>>>>
>>>>> [2]
>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20200806085239.4606-1-tzimmermann%40suse.de%2F&amp;data=02%7C01%7Cchristian.koenig%40amd.com%7C04e3cc3e03ae40f1fa0f08d85a3b6a68%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637358558524732385&amp;sdata=L4rBHmegO63b%2FiTQdTyH158KNxAZwSuJCQOaFszo5L0%3D&amp;reserved=0
>>>>>
>>>> lgtm, imo ready to convert the follow-up patches over to this. But I
>>>> think
>>>> would be good to get at least some ack from the ttm side for the
>>>> overall
>>>> plan.
>>> Yup, it would be nice if TTM could had out these types automatically.
>>> Then all TTM-based drivers would automatically support it.
>>>
>>>> Also, I think we should put all the various helpers (writel/readl,
>>>> memset,
>>>> memcpy, whatever else) into the dma-buf-map.h helper, so that most code
>>>> using this can just treat it as an abstract pointer type and never look
>>>> underneath it.
>>> We have some framebuffer helpers that rely on pointer arithmetic, so
>>> we'd need that too. No big deal wrt code, but I was worried about the
>>> overhead. If a loop goes over framebuffer memory, there's an if/else
>>> branch for each access to the memory buffer.
>> If we make all the helpers static inline, then the compiler should be
>> able
>> to see that dma_buf_map.is_iomem is always the same, and produced really
>> optimized code for it by pulling that check out from all the loops.
>>
>> So should only result in somewhat verbose code of having to call
>> dma_buf_map pointer arthimetic helpers, but not in bad generated code.
>> Still worth double-checking I think, since e.g. on x86 the generated code
>> should be the same for both cases (but maybe the compiler doesn't see
>> through the inline asm to realize that, so we might end up with 2
>> copies).
> 
> Can we have that even independent of DMA-buf? We have essentially the
> same problem in TTM and the code around that is a complete mess if you
> ask me.

I already put this into dma-buf because it's at the intersection of all
the affected modules. For non-dma-buf pointers (say in framebuffer
damage handling), the idea is to initialize struct dma_buf_map by hand
and use this.

Where would you want to put it?

Best regards
Thomas

> 
> Christian.
> 
>> -Daniel
>>
>>
>>> Best regards
>>> Thomas
>>>
>>>> -Daniel
>>>>
>>>>> Thomas Zimmermann (3):
>>>>>    dma-buf: Add struct dma-buf-map for storing struct
>>>>> dma_buf.vaddr_ptr
>>>>>    dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
>>>>>    dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
>>>>>
>>>>>   Documentation/driver-api/dma-buf.rst          |   3 +
>>>>>   drivers/dma-buf/dma-buf.c                     |  40 +++---
>>>>>   drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
>>>>>   drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
>>>>>   drivers/gpu/drm/drm_prime.c                   |  14 +-
>>>>>   drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
>>>>>   drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
>>>>>   .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
>>>>>   drivers/gpu/drm/tegra/gem.c                   |  23 ++--
>>>>>   .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
>>>>>   .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
>>>>>   .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
>>>>>   include/drm/drm_prime.h                       |   5 +-
>>>>>   include/linux/dma-buf-map.h                   | 126
>>>>> ++++++++++++++++++
>>>>>   include/linux/dma-buf.h                       |  11 +-
>>>>>   15 files changed, 274 insertions(+), 82 deletions(-)
>>>>>   create mode 100644 include/linux/dma-buf-map.h
>>>>>
>>>>> -- 
>>>>> 2.28.0
>>>>>
>>> -- 
>>> Thomas Zimmermann
>>> Graphics Driver Developer
>>> SUSE Software Solutions Germany GmbH
>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>> (HRB 36809, AG Nürnberg)
>>> Geschäftsführer: Felix Imendörffer
>>>
>>
>>
>>
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 516 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [Intel-gfx] [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
@ 2020-09-16 13:12           ` Thomas Zimmermann
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-16 13:12 UTC (permalink / raw)
  To: Christian König, Daniel Vetter
  Cc: airlied, mark.cave-ayland, dri-devel, chris, kraxel, sparclinux,
	sam, m.szyprowski, jonathanh, matthew.auld, linux+etnaviv,
	linux-media, pawel, intel-gfx, etnaviv, linaro-mm-sig,
	thomas.hellstrom, linux-tegra, mchehab, tfiga, kyungmin.park,
	davem


[-- Attachment #1.1.1: Type: text/plain, Size: 7539 bytes --]

Hi

Am 16.09.20 um 14:59 schrieb Christian König:
> Am 16.09.20 um 14:24 schrieb Daniel Vetter:
>> On Wed, Sep 16, 2020 at 12:48:20PM +0200, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> Am 16.09.20 um 11:37 schrieb Daniel Vetter:
>>>> On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
>>>>> Dma-buf provides vmap() and vunmap() for retrieving and releasing
>>>>> mappings
>>>>> of dma-buf memory in kernel address space. The functions operate
>>>>> with plain
>>>>> addresses and the assumption is that the memory can be accessed
>>>>> with load
>>>>> and store operations. This is not the case on some architectures
>>>>> (e.g.,
>>>>> sparc64) where I/O memory can only be accessed with dedicated
>>>>> instructions.
>>>>>
>>>>> This patchset introduces struct dma_buf_map, which contains the
>>>>> address of
>>>>> a buffer and a flag that tells whether system- or I/O-memory
>>>>> instructions
>>>>> are required.
>>>>>
>>>>> Some background: updating the DRM framebuffer console on sparc64
>>>>> makes the
>>>>> kernel panic. This is because the framebuffer memory cannot be
>>>>> accessed with
>>>>> system-memory instructions. We currently employ a workaround in DRM to
>>>>> address this specific problem. [1]
>>>>>
>>>>> To resolve the problem, we'd like to address it at the most common
>>>>> point,
>>>>> which is the dma-buf framework. The dma-buf mapping ideally knows
>>>>> if I/O
>>>>> instructions are required and exports this information to it's
>>>>> users. The
>>>>> new structure struct dma_buf_map stores the buffer address and a
>>>>> flag that
>>>>> signals I/O memory. Affected users of the buffer (e.g., drivers,
>>>>> frameworks)
>>>>> can then access the memory accordingly.
>>>>>
>>>>> This patchset only introduces struct dma_buf_map, and updates
>>>>> struct dma_buf
>>>>> and it's interfaces. Further patches can update dma-buf users. For
>>>>> example,
>>>>> there's a prototype patchset for DRM that fixes the framebuffer
>>>>> problem. [2]
>>>>>
>>>>> Further work: TTM, one of DRM's memory managers, already exports an
>>>>> is_iomem flag of its own. It could later be switched over to
>>>>> exporting struct
>>>>> dma_buf_map, thus simplifying some code. Several DRM drivers expect
>>>>> their
>>>>> fbdev console to operate on I/O memory. These could possibly be
>>>>> switched over
>>>>> to the generic fbdev emulation, as soon as the generic code uses
>>>>> struct
>>>>> dma_buf_map.
>>>>>
>>>>> [1]
>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20200725191012.GA434957%40ravnborg.org%2F&amp;data=02%7C01%7Cchristian.koenig%40amd.com%7C04e3cc3e03ae40f1fa0f08d85a3b6a68%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637358558524732385&amp;sdata=wTmFuB95GhKUU%2F2Q91V0%2BtzAu4%2BEe3VBUcriBy3jx2g%3D&amp;reserved=0
>>>>>
>>>>> [2]
>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20200806085239.4606-1-tzimmermann%40suse.de%2F&amp;data=02%7C01%7Cchristian.koenig%40amd.com%7C04e3cc3e03ae40f1fa0f08d85a3b6a68%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637358558524732385&amp;sdata=L4rBHmegO63b%2FiTQdTyH158KNxAZwSuJCQOaFszo5L0%3D&amp;reserved=0
>>>>>
>>>> lgtm, imo ready to convert the follow-up patches over to this. But I
>>>> think
>>>> would be good to get at least some ack from the ttm side for the
>>>> overall
>>>> plan.
>>> Yup, it would be nice if TTM could had out these types automatically.
>>> Then all TTM-based drivers would automatically support it.
>>>
>>>> Also, I think we should put all the various helpers (writel/readl,
>>>> memset,
>>>> memcpy, whatever else) into the dma-buf-map.h helper, so that most code
>>>> using this can just treat it as an abstract pointer type and never look
>>>> underneath it.
>>> We have some framebuffer helpers that rely on pointer arithmetic, so
>>> we'd need that too. No big deal wrt code, but I was worried about the
>>> overhead. If a loop goes over framebuffer memory, there's an if/else
>>> branch for each access to the memory buffer.
>> If we make all the helpers static inline, then the compiler should be
>> able
>> to see that dma_buf_map.is_iomem is always the same, and produced really
>> optimized code for it by pulling that check out from all the loops.
>>
>> So should only result in somewhat verbose code of having to call
>> dma_buf_map pointer arthimetic helpers, but not in bad generated code.
>> Still worth double-checking I think, since e.g. on x86 the generated code
>> should be the same for both cases (but maybe the compiler doesn't see
>> through the inline asm to realize that, so we might end up with 2
>> copies).
> 
> Can we have that even independent of DMA-buf? We have essentially the
> same problem in TTM and the code around that is a complete mess if you
> ask me.

I already put this into dma-buf because it's at the intersection of all
the affected modules. For non-dma-buf pointers (say in framebuffer
damage handling), the idea is to initialize struct dma_buf_map by hand
and use this.

Where would you want to put it?

Best regards
Thomas

> 
> Christian.
> 
>> -Daniel
>>
>>
>>> Best regards
>>> Thomas
>>>
>>>> -Daniel
>>>>
>>>>> Thomas Zimmermann (3):
>>>>>    dma-buf: Add struct dma-buf-map for storing struct
>>>>> dma_buf.vaddr_ptr
>>>>>    dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
>>>>>    dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
>>>>>
>>>>>   Documentation/driver-api/dma-buf.rst          |   3 +
>>>>>   drivers/dma-buf/dma-buf.c                     |  40 +++---
>>>>>   drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
>>>>>   drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
>>>>>   drivers/gpu/drm/drm_prime.c                   |  14 +-
>>>>>   drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
>>>>>   drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
>>>>>   .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
>>>>>   drivers/gpu/drm/tegra/gem.c                   |  23 ++--
>>>>>   .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
>>>>>   .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
>>>>>   .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
>>>>>   include/drm/drm_prime.h                       |   5 +-
>>>>>   include/linux/dma-buf-map.h                   | 126 ++++++++++++++++++
>>>>>   include/linux/dma-buf.h                       |  11 +-
>>>>>   15 files changed, 274 insertions(+), 82 deletions(-)
>>>>>   create mode 100644 include/linux/dma-buf-map.h
>>>>>
>>>>> -- 
>>>>> 2.28.0
>>>>>
>>> -- 
>>> Thomas Zimmermann
>>> Graphics Driver Developer
>>> SUSE Software Solutions Germany GmbH
>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>> (HRB 36809, AG Nürnberg)
>>> Geschäftsführer: Felix Imendörffer
>>>
>>
>>
>>
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 516 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-16 12:59         ` Christian König
                           ` (3 preceding siblings ...)
  (?)
@ 2020-09-16 13:37         ` Thomas Hellström (Intel)
  2020-09-17  7:16           ` Thomas Zimmermann
  -1 siblings, 1 reply; 57+ messages in thread
From: Thomas Hellström (Intel) @ 2020-09-16 13:37 UTC (permalink / raw)
  To: dri-devel


On 9/16/20 2:59 PM, Christian König wrote:
> On 16.09.20 14:24, Daniel Vetter wrote:
>> On Wed, Sep 16, 2020 at 12:48:20PM +0200, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> On 16.09.20 11:37, Daniel Vetter wrote:
>>>> On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
>>>>> Dma-buf provides vmap() and vunmap() for retrieving and releasing 
>>>>> mappings
>>>>> of dma-buf memory in kernel address space. The functions operate 
>>>>> with plain
>>>>> addresses and the assumption is that the memory can be accessed 
>>>>> with load
>>>>> and store operations. This is not the case on some architectures 
>>>>> (e.g.,
>>>>> sparc64) where I/O memory can only be accessed with dedicated 
>>>>> instructions.
>>>>>
>>>>> This patchset introduces struct dma_buf_map, which contains the 
>>>>> address of
>>>>> a buffer and a flag that tells whether system- or I/O-memory 
>>>>> instructions
>>>>> are required.
>>>>>
>>>>> Some background: updating the DRM framebuffer console on sparc64 
>>>>> makes the
>>>>> kernel panic. This is because the framebuffer memory cannot be 
>>>>> accessed with
>>>>> system-memory instructions. We currently employ a workaround in 
>>>>> DRM to
>>>>> address this specific problem. [1]
>>>>>
>>>>> To resolve the problem, we'd like to address it at the most common 
>>>>> point,
>>>>> which is the dma-buf framework. The dma-buf mapping ideally knows 
>>>>> if I/O
>>>>> instructions are required and exports this information to its
>>>>> users. The
>>>>> new structure struct dma_buf_map stores the buffer address and a 
>>>>> flag that
>>>>> signals I/O memory. Affected users of the buffer (e.g., drivers, 
>>>>> frameworks)
>>>>> can then access the memory accordingly.
>>>>>
>>>>> This patchset only introduces struct dma_buf_map, and updates 
>>>>> struct dma_buf
>>>>> and its interfaces. Further patches can update dma-buf users. For
>>>>> example,
>>>>> there's a prototype patchset for DRM that fixes the framebuffer 
>>>>> problem. [2]
>>>>>
>>>>> Further work: TTM, one of DRM's memory managers, already exports an
>>>>> is_iomem flag of its own. It could later be switched over to 
>>>>> exporting struct
>>>>> dma_buf_map, thus simplifying some code. Several DRM drivers 
>>>>> expect their
>>>>> fbdev console to operate on I/O memory. These could possibly be 
>>>>> switched over
>>>>> to the generic fbdev emulation, as soon as the generic code uses 
>>>>> struct
>>>>> dma_buf_map.
>>>>>
>>>>> [1] 
>>>>> https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
>>>>> [2] 
>>>>> https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/
>>>> lgtm, imo ready to convert the follow-up patches over to this. But I
>>>> think it would be good to get at least some ack from the ttm side for
>>>> the overall plan.
>>> Yup, it would be nice if TTM could hand out these types automatically.
>>> Then all TTM-based drivers would automatically support it.
>>>
>>>> Also, I think we should put all the various helpers (writel/readl, 
>>>> memset,
>>>> memcpy, whatever else) into the dma-buf-map.h helper, so that most 
>>>> code
>>>> using this can just treat it as an abstract pointer type and never 
>>>> look
>>>> underneath it.
>>> We have some framebuffer helpers that rely on pointer arithmetic, so
>>> we'd need that too. No big deal wrt code, but I was worried about the
>>> overhead. If a loop goes over framebuffer memory, there's an if/else
>>> branch for each access to the memory buffer.
>> If we make all the helpers static inline, then the compiler should be 
>> able
>> to see that dma_buf_map.is_iomem is always the same, and produce really
>> optimized code for it by pulling that check out of all the loops.
>>
>> So it should only result in somewhat verbose code of having to call
>> dma_buf_map pointer arithmetic helpers, but not in bad generated code.
>> Still worth double-checking I think, since e.g. on x86 the generated 
>> code
>> should be the same for both cases (but maybe the compiler doesn't see
>> through the inline asm to realize that, so we might end up with 2 
>> copies).
>
> Can we have that even independent of DMA-buf? We have essentially the 
> same problem in TTM and the code around that is a complete mess if you 
> ask me.
>
> Christian.
>
I think this patchset looks good. Changing ttm_bo_kmap() over to
returning a struct dma_buf_map would probably work just fine. If we then
can have a set of helpers to operate on it, that's great.
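
As a rough sketch of what such helpers could look like (the names
dma_buf_map_memcpy_to() and dma_buf_map_incr() are made up here, and the
vaddr/vaddr_iomem members are an assumption about the struct layout, not
code from this patchset):

#include <linux/io.h>
#include <linux/string.h>

/* Copy into a mapping with whichever access method its flag demands.
 * Since this is static inline, the compiler can hoist the is_iomem
 * test out of a caller's loop, as Daniel pointed out above. */
static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst,
					 const void *src, size_t len)
{
	if (dst->is_iomem)
		memcpy_toio(dst->vaddr_iomem, src, len);
	else
		memcpy(dst->vaddr, src, len);
}

/* Pointer arithmetic for the framebuffer helpers mentioned above. */
static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
{
	if (map->is_iomem)
		map->vaddr_iomem += incr;
	else
		map->vaddr += incr;
}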

/Thomas


_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-16 13:37         ` Thomas Hellström (Intel)
@ 2020-09-17  7:16           ` Thomas Zimmermann
  2020-09-17  8:04             ` Christian König
  0 siblings, 1 reply; 57+ messages in thread
From: Thomas Zimmermann @ 2020-09-17  7:16 UTC (permalink / raw)
  To: Thomas Hellström (Intel), dri-devel, Christian König


[-- Attachment #1.1.1: Type: text/plain, Size: 5819 bytes --]

Hi Christian and Thomas

On 16.09.20 15:37, Thomas Hellström (Intel) wrote:
> 
> On 9/16/20 2:59 PM, Christian König wrote:
>> On 16.09.20 14:24, Daniel Vetter wrote:
>>> On Wed, Sep 16, 2020 at 12:48:20PM +0200, Thomas Zimmermann wrote:
>>>> Hi
>>>>
>>>> On 16.09.20 11:37, Daniel Vetter wrote:
>>>>> On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
>>>>>> Dma-buf provides vmap() and vunmap() for retrieving and releasing
>>>>>> mappings
>>>>>> of dma-buf memory in kernel address space. The functions operate
>>>>>> with plain
>>>>>> addresses and the assumption is that the memory can be accessed
>>>>>> with load
>>>>>> and store operations. This is not the case on some architectures
>>>>>> (e.g.,
>>>>>> sparc64) where I/O memory can only be accessed with dedicated
>>>>>> instructions.
>>>>>>
>>>>>> This patchset introduces struct dma_buf_map, which contains the
>>>>>> address of
>>>>>> a buffer and a flag that tells whether system- or I/O-memory
>>>>>> instructions
>>>>>> are required.
>>>>>>
>>>>>> Some background: updating the DRM framebuffer console on sparc64
>>>>>> makes the
>>>>>> kernel panic. This is because the framebuffer memory cannot be
>>>>>> accessed with
>>>>>> system-memory instructions. We currently employ a workaround in
>>>>>> DRM to
>>>>>> address this specific problem. [1]
>>>>>>
>>>>>> To resolve the problem, we'd like to address it at the most common
>>>>>> point,
>>>>>> which is the dma-buf framework. The dma-buf mapping ideally knows
>>>>>> if I/O
>>>>>> instructions are required and exports this information to its
>>>>>> users. The
>>>>>> new structure struct dma_buf_map stores the buffer address and a
>>>>>> flag that
>>>>>> signals I/O memory. Affected users of the buffer (e.g., drivers,
>>>>>> frameworks)
>>>>>> can then access the memory accordingly.
>>>>>>
>>>>>> This patchset only introduces struct dma_buf_map, and updates
>>>>>> struct dma_buf
>>>>>> and its interfaces. Further patches can update dma-buf users. For
>>>>>> example,
>>>>>> there's a prototype patchset for DRM that fixes the framebuffer
>>>>>> problem. [2]
>>>>>>
>>>>>> Further work: TTM, one of DRM's memory managers, already exports an
>>>>>> is_iomem flag of its own. It could later be switched over to
>>>>>> exporting struct
>>>>>> dma_buf_map, thus simplifying some code. Several DRM drivers
>>>>>> expect their
>>>>>> fbdev console to operate on I/O memory. These could possibly be
>>>>>> switched over
>>>>>> to the generic fbdev emulation, as soon as the generic code uses
>>>>>> struct
>>>>>> dma_buf_map.
>>>>>>
>>>>>> [1]
>>>>>> https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
>>>>>>
>>>>>> [2]
>>>>>> https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/
>>>>>>
>>>>> lgtm, imo ready to convert the follow-up patches over to this. But I
>>>>> think it would be good to get at least some ack from the ttm side for
>>>>> the overall plan.
>>>> Yup, it would be nice if TTM could hand out these types automatically.
>>>> Then all TTM-based drivers would automatically support it.
>>>>
>>>>> Also, I think we should put all the various helpers (writel/readl,
>>>>> memset,
>>>>> memcpy, whatever else) into the dma-buf-map.h helper, so that most
>>>>> code
>>>>> using this can just treat it as an abstract pointer type and never
>>>>> look
>>>>> underneath it.
>>>> We have some framebuffer helpers that rely on pointer arithmetic, so
>>>> we'd need that too. No big deal wrt code, but I was worried about the
>>>> overhead. If a loop goes over framebuffer memory, there's an if/else
>>>> branch for each access to the memory buffer.
>>> If we make all the helpers static inline, then the compiler should be
>>> able
>>> to see that dma_buf_map.is_iomem is always the same, and produce really
>>> optimized code for it by pulling that check out of all the loops.
>>>
>>> So it should only result in somewhat verbose code of having to call
>>> dma_buf_map pointer arithmetic helpers, but not in bad generated code.
>>> Still worth double-checking I think, since e.g. on x86 the generated
>>> code
>>> should be the same for both cases (but maybe the compiler doesn't see
>>> through the inline asm to realize that, so we might end up with 2
>>> copies).
>>
>> Can we have that even independent of DMA-buf? We have essentially the
>> same problem in TTM and the code around that is a complete mess if you
>> ask me.
>>
>> Christian.
>>
> I think this patchset looks good. Changing ttm_bo_kmap() over to
> returning a struct dma_buf_map would probably work just fine. If we then
> can have a set of helpers to operate on it, that's great.
> 
> /Thomas

Can I count this as an A-b by one of you?

Best regards
Thomas

> 
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 516 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-17  7:16           ` Thomas Zimmermann
@ 2020-09-17  8:04             ` Christian König
  0 siblings, 0 replies; 57+ messages in thread
From: Christian König @ 2020-09-17  8:04 UTC (permalink / raw)
  To: Thomas Zimmermann, Thomas Hellström (Intel), dri-devel

On 17.09.20 09:16, Thomas Zimmermann wrote:
> Hi Christian and Thomas
>
> On 16.09.20 15:37, Thomas Hellström (Intel) wrote:
>> On 9/16/20 2:59 PM, Christian König wrote:
>>> On 16.09.20 14:24, Daniel Vetter wrote:
>>>> On Wed, Sep 16, 2020 at 12:48:20PM +0200, Thomas Zimmermann wrote:
>>>>> Hi
>>>>>
>>>>> On 16.09.20 11:37, Daniel Vetter wrote:
>>>>>> On Mon, Sep 14, 2020 at 01:25:18PM +0200, Thomas Zimmermann wrote:
>>>>>>> Dma-buf provides vmap() and vunmap() for retrieving and releasing
>>>>>>> mappings
>>>>>>> of dma-buf memory in kernel address space. The functions operate
>>>>>>> with plain
>>>>>>> addresses and the assumption is that the memory can be accessed
>>>>>>> with load
>>>>>>> and store operations. This is not the case on some architectures
>>>>>>> (e.g.,
>>>>>>> sparc64) where I/O memory can only be accessed with dedicated
>>>>>>> instructions.
>>>>>>>
>>>>>>> This patchset introduces struct dma_buf_map, which contains the
>>>>>>> address of
>>>>>>> a buffer and a flag that tells whether system- or I/O-memory
>>>>>>> instructions
>>>>>>> are required.
>>>>>>>
>>>>>>> Some background: updating the DRM framebuffer console on sparc64
>>>>>>> makes the
>>>>>>> kernel panic. This is because the framebuffer memory cannot be
>>>>>>> accessed with
>>>>>>> system-memory instructions. We currently employ a workaround in
>>>>>>> DRM to
>>>>>>> address this specific problem. [1]
>>>>>>>
>>>>>>> To resolve the problem, we'd like to address it at the most common
>>>>>>> point,
>>>>>>> which is the dma-buf framework. The dma-buf mapping ideally knows
>>>>>>> if I/O
>>>>>>> instructions are required and exports this information to its
>>>>>>> users. The
>>>>>>> new structure struct dma_buf_map stores the buffer address and a
>>>>>>> flag that
>>>>>>> signals I/O memory. Affected users of the buffer (e.g., drivers,
>>>>>>> frameworks)
>>>>>>> can then access the memory accordingly.
>>>>>>>
>>>>>>> This patchset only introduces struct dma_buf_map, and updates
>>>>>>> struct dma_buf
>>>>>>> and its interfaces. Further patches can update dma-buf users. For
>>>>>>> example,
>>>>>>> there's a prototype patchset for DRM that fixes the framebuffer
>>>>>>> problem. [2]
>>>>>>>
>>>>>>> Further work: TTM, one of DRM's memory managers, already exports an
>>>>>>> is_iomem flag of its own. It could later be switched over to
>>>>>>> exporting struct
>>>>>>> dma_buf_map, thus simplifying some code. Several DRM drivers
>>>>>>> expect their
>>>>>>> fbdev console to operate on I/O memory. These could possibly be
>>>>>>> switched over
>>>>>>> to the generic fbdev emulation, as soon as the generic code uses
>>>>>>> struct
>>>>>>> dma_buf_map.
>>>>>>>
>>>>>>> [1]
>>>>>>> https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
>>>>>>>
>>>>>>> [2]
>>>>>>> https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/
>>>>>>>
>>>>>> lgtm, imo ready to convert the follow-up patches over to this. But I
>>>>>> think it would be good to get at least some ack from the ttm side for
>>>>>> the overall plan.
>>>>> Yup, it would be nice if TTM could hand out these types automatically.
>>>>> Then all TTM-based drivers would automatically support it.
>>>>>
>>>>>> Also, I think we should put all the various helpers (writel/readl,
>>>>>> memset,
>>>>>> memcpy, whatever else) into the dma-buf-map.h helper, so that most
>>>>>> code
>>>>>> using this can just treat it as an abstract pointer type and never
>>>>>> look
>>>>>> underneath it.
>>>>> We have some framebuffer helpers that rely on pointer arithmetic, so
>>>>> we'd need that too. No big deal wrt code, but I was worried about the
>>>>> overhead. If a loop goes over framebuffer memory, there's an if/else
>>>>> branch for each access to the memory buffer.
>>>> If we make all the helpers static inline, then the compiler should be
>>>> able
>>>> to see that dma_buf_map.is_iomem is always the same, and produce really
>>>> optimized code for it by pulling that check out of all the loops.
>>>>
>>>> So it should only result in somewhat verbose code of having to call
>>>> dma_buf_map pointer arithmetic helpers, but not in bad generated code.
>>>> Still worth double-checking I think, since e.g. on x86 the generated
>>>> code
>>>> should be the same for both cases (but maybe the compiler doesn't see
>>>> through the inline asm to realize that, so we might end up with 2
>>>> copies).
>>> Can we have that even independent of DMA-buf? We have essentially the
>>> same problem in TTM and the code around that is a complete mess if you
>>> ask me.
>>>
>>> Christian.
>>>
>> I think this patchset looks good. Changing ttm_bo_kmap() over to
>> returning a struct dma_buf_map would probably work just fine. If we then
>> can have a set of helpers to operate on it, that's great.
>>
>> /Thomas
> Can I count this as an A-b by one of you?

For the general approach, certainly yes.

Regards,
Christian.

>
> Best regards
> Thomas
>
>>
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-14 11:25 ` Thomas Zimmermann
  (?)
  (?)
@ 2020-09-18  6:06   ` Sumit Semwal
  -1 siblings, 0 replies; 57+ messages in thread
From: Sumit Semwal @ 2020-09-18  6:06 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Christian Koenig, Daniel Vetter, Dave Airlie, Sam Ravnborg,
	mark.cave-ayland, Gerd Hoffmann, David S . Miller,
	Maarten Lankhorst, Maxime Ripard, Lucas Stach, Russell King,
	Christian Gmeiner, Jani Nikula, Joonas Lahtinen, rodrigo.vivi,
	Thierry Reding, jonathanh, Pawel Osciak, Marek Szyprowski,
	Kyungmin Park, Tomasz Figa, Mauro Carvalho Chehab, Chris Wilson,
	matthew.auld, thomas.hellstrom,
	open list:DMA BUFFER SHARING FRAMEWORK, DRI mailing list,
	Linaro MM SIG, etnaviv, Intel Graphics Development, linux-tegra,
	sparclinux

Hello Thomas,

On Mon, 14 Sep 2020 at 16:55, Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> Dma-buf provides vmap() and vunmap() for retrieving and releasing mappings
> of dma-buf memory in kernel address space. The functions operate with plain
> addresses and the assumption is that the memory can be accessed with load
> and store operations. This is not the case on some architectures (e.g.,
> sparc64) where I/O memory can only be accessed with dedicated instructions.
>
> This patchset introduces struct dma_buf_map, which contains the address of
> a buffer and a flag that tells whether system- or I/O-memory instructions
> are required.

Thank you for the patchset - it is a really nice, clean bit to add!
>
> Some background: updating the DRM framebuffer console on sparc64 makes the
> kernel panic. This is because the framebuffer memory cannot be accessed with
> system-memory instructions. We currently employ a workaround in DRM to
> address this specific problem. [1]
>
> To resolve the problem, we'd like to address it at the most common point,
> which is the dma-buf framework. The dma-buf mapping ideally knows if I/O
> instructions are required and exports this information to its users. The
> new structure struct dma_buf_map stores the buffer address and a flag that
> signals I/O memory. Affected users of the buffer (e.g., drivers, frameworks)
> can then access the memory accordingly.
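
Just to illustrate what "access the memory accordingly" could mean on the
user side (a sketch only; the vaddr/vaddr_iomem union members are my
assumption about the struct layout):

/* Read one 32-bit word from the mapped buffer; needs <linux/io.h>. */
static u32 buf_read32(const struct dma_buf_map *map, size_t offset)
{
	if (map->is_iomem)
		return readl(map->vaddr_iomem + offset);

	return *(const u32 *)(map->vaddr + offset);
}
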
>
> This patchset only introduces struct dma_buf_map, and updates struct dma_buf
> and its interfaces. Further patches can update dma-buf users. For example,
> there's a prototype patchset for DRM that fixes the framebuffer problem. [2]
>
> Further work: TTM, one of DRM's memory managers, already exports an
> is_iomem flag of its own. It could later be switched over to exporting struct
> dma_buf_map, thus simplifying some code. Several DRM drivers expect their
> fbdev console to operate on I/O memory. These could possibly be switched over
> to the generic fbdev emulation, as soon as the generic code uses struct
> dma_buf_map.
>
> [1] https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
> [2] https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/
>
> Thomas Zimmermann (3):
>   dma-buf: Add struct dma-buf-map for storing struct dma_buf.vaddr_ptr
>   dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
>   dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces

FWIW, for the series, please feel free to add my
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>

>
>  Documentation/driver-api/dma-buf.rst          |   3 +
>  drivers/dma-buf/dma-buf.c                     |  40 +++---
>  drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
>  drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
>  drivers/gpu/drm/drm_prime.c                   |  14 +-
>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
>  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
>  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
>  drivers/gpu/drm/tegra/gem.c                   |  23 ++--
>  .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
>  .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
>  .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
>  include/drm/drm_prime.h                       |   5 +-
>  include/linux/dma-buf-map.h                   | 126 ++++++++++++++++++
>  include/linux/dma-buf.h                       |  11 +-
>  15 files changed, 274 insertions(+), 82 deletions(-)
>  create mode 100644 include/linux/dma-buf-map.h
>
> --
> 2.28.0
>

Best,
Sumit.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory
  2020-09-18  6:06   ` Sumit Semwal
  (?)
  (?)
@ 2020-09-18  8:32     ` Sumit Semwal
  -1 siblings, 0 replies; 57+ messages in thread
From: Sumit Semwal @ 2020-09-18  8:32 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: Christian Koenig, Daniel Vetter, Dave Airlie, Sam Ravnborg,
	mark.cave-ayland, Gerd Hoffmann, David S . Miller,
	Maarten Lankhorst, Maxime Ripard, Lucas Stach, Russell King,
	Christian Gmeiner, Jani Nikula, Joonas Lahtinen, rodrigo.vivi,
	Thierry Reding, jonathanh, Pawel Osciak, Marek Szyprowski,
	Kyungmin Park, Tomasz Figa, Mauro Carvalho Chehab, Chris Wilson,
	matthew.auld, thomas.hellstrom,
	open list:DMA BUFFER SHARING FRAMEWORK, DRI mailing list,
	Linaro MM SIG, etnaviv, Intel Graphics Development, linux-tegra,
	sparclinux

Hi Thomas,

On Fri, 18 Sep 2020 at 11:36, Sumit Semwal <sumit.semwal@linaro.org> wrote:
>
> Hello Thomas,
>
> On Mon, 14 Sep 2020 at 16:55, Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >
> > Dma-buf provides vmap() and vunmap() for retrieving and releasing mappings
> > of dma-buf memory in kernel address space. The functions operate with plain
> > addresses and the assumption is that the memory can be accessed with load
> > and store operations. This is not the case on some architectures (e.g.,
> > sparc64) where I/O memory can only be accessed with dedicated instructions.
> >
> > This patchset introduces struct dma_buf_map, which contains the address of
> > a buffer and a flag that tells whether system- or I/O-memory instructions
> > are required.
>
> Thank you for the patchset - it is a really nice, clean bit to add!
> >
> > Some background: updating the DRM framebuffer console on sparc64 makes the
> > kernel panic. This is because the framebuffer memory cannot be accessed with
> > system-memory instructions. We currently employ a workaround in DRM to
> > address this specific problem. [1]
> >
> > To resolve the problem, we'd like to address it at the most common point,
> > which is the dma-buf framework. The dma-buf mapping ideally knows if I/O
> > instructions are required and exports this information to its users. The
> > new structure struct dma_buf_map stores the buffer address and a flag that
> > signals I/O memory. Affected users of the buffer (e.g., drivers, frameworks)
> > can then access the memory accordingly.
> >
> > This patchset only introduces struct dma_buf_map, and updates struct dma_buf
> > and its interfaces. Further patches can update dma-buf users. For example,
> > there's a prototype patchset for DRM that fixes the framebuffer problem. [2]
> >
> > Further work: TTM, one of DRM's memory managers, already exports an
> > is_iomem flag of its own. It could later be switched over to exporting struct
> > dma_buf_map, thus simplifying some code. Several DRM drivers expect their
> > fbdev console to operate on I/O memory. These could possibly be switched over
> > to the generic fbdev emulation, as soon as the generic code uses struct
> > dma_buf_map.
> >
> > [1] https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
> > [2] https://lore.kernel.org/dri-devel/20200806085239.4606-1-tzimmermann@suse.de/
> >
> > Thomas Zimmermann (3):
> >   dma-buf: Add struct dma-buf-map for storing struct dma_buf.vaddr_ptr
> >   dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces
> >   dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces
>
> FWIW, for the series, please feel free to add my
> Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Of course, once the errors found by kernel test robot are fixed :).
>
> >
> >  Documentation/driver-api/dma-buf.rst          |   3 +
> >  drivers/dma-buf/dma-buf.c                     |  40 +++---
> >  drivers/gpu/drm/drm_gem_cma_helper.c          |  16 ++-
> >  drivers/gpu/drm/drm_gem_shmem_helper.c        |  17 ++-
> >  drivers/gpu/drm/drm_prime.c                   |  14 +-
> >  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |  13 +-
> >  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  13 +-
> >  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  18 ++-
> >  drivers/gpu/drm/tegra/gem.c                   |  23 ++--
> >  .../common/videobuf2/videobuf2-dma-contig.c   |  17 ++-
> >  .../media/common/videobuf2/videobuf2-dma-sg.c |  19 ++-
> >  .../common/videobuf2/videobuf2-vmalloc.c      |  21 ++-
> >  include/drm/drm_prime.h                       |   5 +-
> >  include/linux/dma-buf-map.h                   | 126 ++++++++++++++++++
> >  include/linux/dma-buf.h                       |  11 +-
> >  15 files changed, 274 insertions(+), 82 deletions(-)
> >  create mode 100644 include/linux/dma-buf-map.h
> >
> > --
> > 2.28.0
> >
>
> Best,
> Sumit.

Best,
Sumit.

^ permalink raw reply	[flat|nested] 57+ messages in thread

end of thread, other threads:[~2020-09-18  8:44 UTC | newest]

Thread overview: 57+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-14 11:25 [PATCH 0/3] dma-buf: Flag vmap'ed memory as system or I/O memory Thomas Zimmermann
2020-09-14 11:25 ` [Intel-gfx] " Thomas Zimmermann
2020-09-14 11:25 ` Thomas Zimmermann
2020-09-14 11:25 ` Thomas Zimmermann
2020-09-14 11:25 ` [PATCH 1/3] dma-buf: Add struct dma-buf-map for storing struct dma_buf.vaddr_ptr Thomas Zimmermann
2020-09-14 11:25   ` [Intel-gfx] " Thomas Zimmermann
2020-09-14 11:25   ` Thomas Zimmermann
2020-09-14 11:25   ` Thomas Zimmermann
2020-09-14 11:25 ` [PATCH 2/3] dma-buf: Use struct dma_buf_map in dma_buf_vmap() interfaces Thomas Zimmermann
2020-09-14 11:25   ` [Intel-gfx] " Thomas Zimmermann
2020-09-14 11:25   ` Thomas Zimmermann
2020-09-14 11:25   ` Thomas Zimmermann
2020-09-14 18:33   ` [Intel-gfx] " kernel test robot
2020-09-14 23:54   ` kernel test robot
2020-09-15  1:56   ` kernel test robot
2020-09-16  9:35   ` Daniel Vetter
2020-09-16  9:35     ` [Intel-gfx] " Daniel Vetter
2020-09-16  9:35     ` Daniel Vetter
2020-09-16  9:35     ` Daniel Vetter
2020-09-14 11:25 ` [PATCH 3/3] dma-buf: Use struct dma_buf_map in dma_buf_vunmap() interfaces Thomas Zimmermann
2020-09-14 11:25   ` [Intel-gfx] " Thomas Zimmermann
2020-09-14 11:25   ` Thomas Zimmermann
2020-09-14 11:25   ` Thomas Zimmermann
2020-09-14 17:22   ` [Intel-gfx] " kernel test robot
2020-09-14 19:28   ` kernel test robot
2020-09-14 17:41 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for dma-buf: Flag vmap'ed memory as system or I/O memory Patchwork
2020-09-16  9:37 ` [PATCH 0/3] " Daniel Vetter
2020-09-16  9:37   ` [Intel-gfx] " Daniel Vetter
2020-09-16  9:37   ` Daniel Vetter
2020-09-16  9:37   ` Daniel Vetter
2020-09-16 10:48   ` Thomas Zimmermann
2020-09-16 10:48     ` [Intel-gfx] " Thomas Zimmermann
2020-09-16 10:48     ` Thomas Zimmermann
2020-09-16 10:48     ` Thomas Zimmermann
2020-09-16 12:24     ` Daniel Vetter
2020-09-16 12:24       ` [Intel-gfx] " Daniel Vetter
2020-09-16 12:24       ` Daniel Vetter
2020-09-16 12:24       ` Daniel Vetter
2020-09-16 12:59       ` Christian König
2020-09-16 12:59         ` [Intel-gfx] " Christian König
2020-09-16 12:59         ` Christian König
2020-09-16 12:59         ` Christian König
2020-09-16 13:12         ` Thomas Zimmermann
2020-09-16 13:12           ` [Intel-gfx] " Thomas Zimmermann
2020-09-16 13:12           ` Thomas Zimmermann
2020-09-16 13:12           ` Thomas Zimmermann
2020-09-16 13:37         ` Thomas Hellström (Intel)
2020-09-17  7:16           ` Thomas Zimmermann
2020-09-17  8:04             ` Christian König
2020-09-18  6:06 ` Sumit Semwal
2020-09-18  8:32   ` Sumit Semwal

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.