dri-devel.lists.freedesktop.org archive mirror
* [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges
@ 2020-12-11 15:58 Daniel Vetter
  2020-12-11 15:58 ` [PATCH 2/4] dma-buf: some kerneldoc formatting fixes Daniel Vetter
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Daniel Vetter @ 2020-12-11 15:58 UTC (permalink / raw)
  To: DRI Development
  Cc: Daniel Vetter, Intel Graphics Development, linaro-mm-sig,
	Thomas Zimmermann, Daniel Vetter, Christian König,
	linux-media

Also try to clarify a bit when dma_buf_begin/end_cpu_access should
be called.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
 drivers/dma-buf/dma-buf.c | 20 ++++++++++++++------
 include/linux/dma-buf.h   | 25 +++++++++----------------
 2 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index e63684d4cd90..a12fdffa130f 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1001,15 +1001,15 @@ EXPORT_SYMBOL_GPL(dma_buf_move_notify);
  *   vmalloc space might be limited and result in vmap calls failing.
  *
  *   Interfaces::
+ *
  *      void \*dma_buf_vmap(struct dma_buf \*dmabuf)
  *      void dma_buf_vunmap(struct dma_buf \*dmabuf, void \*vaddr)
  *
  *   The vmap call can fail if there is no vmap support in the exporter, or if
- *   it runs out of vmalloc space. Fallback to kmap should be implemented. Note
- *   that the dma-buf layer keeps a reference count for all vmap access and
- *   calls down into the exporter's vmap function only when no vmapping exists,
- *   and only unmaps it once. Protection against concurrent vmap/vunmap calls is
- *   provided by taking the dma_buf->lock mutex.
+ *   it runs out of vmalloc space. Note that the dma-buf layer keeps a reference
+ *   count for all vmap access and calls down into the exporter's vmap function
+ *   only when no vmapping exists, and only unmaps it once. Protection against
+ *   concurrent vmap/vunmap calls is provided by taking the &dma_buf.lock mutex.
  *
  * - For full compatibility on the importer side with existing userspace
  *   interfaces, which might already support mmap'ing buffers. This is needed in
@@ -1098,6 +1098,11 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
  * dma_buf_end_cpu_access(). Only when cpu access is braketed by both calls is
  * it guaranteed to be coherent with other DMA access.
  *
+ * This function will also wait for any DMA transactions tracked through
+ * implicit synchronization in &dma_buf.resv. For DMA transactions with explicit
+ * synchronization this function will only ensure cache coherency, callers must
+ * ensure synchronization with such DMA transactions on their own.
+ *
  * Can return negative error values, returns 0 on success.
  */
 int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
@@ -1199,7 +1204,10 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
  * This call may fail due to lack of virtual mapping address space.
  * These calls are optional in drivers. The intended use for them
  * is for mapping objects linear in kernel space for high use objects.
- * Please attempt to use kmap/kunmap before thinking about these interfaces.
+ *
+ * To ensure coherency users must call dma_buf_begin_cpu_access() and
+ * dma_buf_end_cpu_access() around any cpu access performed through this
+ * mapping.
  *
  * Returns 0 on success, or a negative errno code otherwise.
  */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index cf72699cb2bc..7eca37c8b10c 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -183,24 +183,19 @@ struct dma_buf_ops {
 	 * @begin_cpu_access:
 	 *
 	 * This is called from dma_buf_begin_cpu_access() and allows the
-	 * exporter to ensure that the memory is actually available for cpu
-	 * access - the exporter might need to allocate or swap-in and pin the
-	 * backing storage. The exporter also needs to ensure that cpu access is
-	 * coherent for the access direction. The direction can be used by the
-	 * exporter to optimize the cache flushing, i.e. access with a different
+	 * exporter to ensure that the memory is actually coherent for cpu
+	 * access. The exporter also needs to ensure that cpu access is coherent
+	 * for the access direction. The direction can be used by the exporter
+	 * to optimize the cache flushing, i.e. access with a different
 	 * direction (read instead of write) might return stale or even bogus
 	 * data (e.g. when the exporter needs to copy the data to temporary
 	 * storage).
 	 *
-	 * This callback is optional.
+	 * Note that this is both called through the DMA_BUF_IOCTL_SYNC IOCTL
+	 * command for userspace mappings established through @mmap, and also
+	 * for kernel mappings established with @vmap.
 	 *
-	 * FIXME: This is both called through the DMA_BUF_IOCTL_SYNC command
-	 * from userspace (where storage shouldn't be pinned to avoid handing
-	 * de-factor mlock rights to userspace) and for the kernel-internal
-	 * users of the various kmap interfaces, where the backing storage must
-	 * be pinned to guarantee that the atomic kmap calls can succeed. Since
-	 * there's no in-kernel users of the kmap interfaces yet this isn't a
-	 * real problem.
+	 * This callback is optional.
 	 *
 	 * Returns:
 	 *
@@ -216,9 +211,7 @@ struct dma_buf_ops {
 	 *
 	 * This is called from dma_buf_end_cpu_access() when the importer is
 	 * done accessing the CPU. The exporter can use this to flush caches and
-	 * unpin any resources pinned in @begin_cpu_access.
-	 * The result of any dma_buf kmap calls after end_cpu_access is
-	 * undefined.
+	 * undo anything else done in @begin_cpu_access.
 	 *
 	 * This callback is optional.
 	 *
-- 
2.29.2
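
As a usage illustration of the rule the patch spells out (bracket any CPU
access through a kernel mapping with dma_buf_begin_cpu_access() /
dma_buf_end_cpu_access()), an importer-side sketch could look roughly like the
following. This is only a sketch, not part of the patch: example_read_buffer()
is a made-up function, error handling is minimal, and the vmap signature used
is the one quoted in the kerneldoc above.

    #include <linux/dma-buf.h>
    #include <linux/dma-direction.h>
    #include <linux/string.h>

    /* Sketch only: copy the first @len bytes of a dma-buf into @dst. */
    static int example_read_buffer(struct dma_buf *dmabuf, void *dst, size_t len)
    {
            void *vaddr;
            int ret;

            vaddr = dma_buf_vmap(dmabuf);
            if (!vaddr)
                    return -ENOMEM;

            /* let the exporter flush/invalidate caches for CPU reads */
            ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
            if (ret)
                    goto out_vunmap;

            memcpy(dst, vaddr, len);        /* the actual CPU access */

            ret = dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);

    out_vunmap:
            dma_buf_vunmap(dmabuf, vaddr);
            return ret;
    }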


* [PATCH 2/4] dma-buf: some kerneldoc formatting fixes
  2020-12-11 15:58 [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges Daniel Vetter
@ 2020-12-11 15:58 ` Daniel Vetter
  2020-12-11 15:58 ` [PATCH 3/4] dma-buf: begin/end_cpu might lock the dma_resv lock Daniel Vetter
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 12+ messages in thread
From: Daniel Vetter @ 2020-12-11 15:58 UTC (permalink / raw)
  To: DRI Development
  Cc: Daniel Vetter, Intel Graphics Development, linaro-mm-sig,
	Thomas Zimmermann, Daniel Vetter, Christian König,
	linux-media

Noticed while reviewing the output. Adds a bunch more links and fixes
the function interface quoting.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
 drivers/dma-buf/dma-buf.c | 31 ++++++++++++++++++-------------
 include/linux/dma-buf.h   |  6 +++---
 2 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index a12fdffa130f..e1fa6c6f02c4 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -480,7 +480,7 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
  *
  * 4. Once a driver is done with a shared buffer it needs to call
  *    dma_buf_detach() (after cleaning up any mappings) and then release the
- *    reference acquired with dma_buf_get by calling dma_buf_put().
+ *    reference acquired with dma_buf_get() by calling dma_buf_put().
  *
  * For the detailed semantics exporters are expected to implement see
  * &dma_buf_ops.
@@ -496,9 +496,10 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
  *			by the exporter. see &struct dma_buf_export_info
  *			for further details.
  *
- * Returns, on success, a newly created dma_buf object, which wraps the
- * supplied private data and operations for dma_buf_ops. On either missing
- * ops, or error in allocating struct dma_buf, will return negative error.
+ * Returns, on success, a newly created struct dma_buf object, which wraps the
+ * supplied private data and operations for struct dma_buf_ops. On either
+ * missing ops, or error in allocating struct dma_buf, will return negative
+ * error.
  *
  * For most cases the easiest way to create @exp_info is through the
  * %DEFINE_DMA_BUF_EXPORT_INFO macro.
@@ -584,7 +585,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
 EXPORT_SYMBOL_GPL(dma_buf_export);
 
 /**
- * dma_buf_fd - returns a file descriptor for the given dma_buf
+ * dma_buf_fd - returns a file descriptor for the given struct dma_buf
  * @dmabuf:	[in]	pointer to dma_buf for which fd is required.
  * @flags:      [in]    flags to give to fd
  *
@@ -608,10 +609,10 @@ int dma_buf_fd(struct dma_buf *dmabuf, int flags)
 EXPORT_SYMBOL_GPL(dma_buf_fd);
 
 /**
- * dma_buf_get - returns the dma_buf structure related to an fd
- * @fd:	[in]	fd associated with the dma_buf to be returned
+ * dma_buf_get - returns the struct dma_buf related to an fd
+ * @fd:	[in]	fd associated with the struct dma_buf to be returned
  *
- * On success, returns the dma_buf structure associated with an fd; uses
+ * On success, returns the struct dma_buf associated with an fd; uses
  * file's refcounting done by fget to increase refcount. returns ERR_PTR
  * otherwise.
  */
@@ -653,8 +654,7 @@ void dma_buf_put(struct dma_buf *dmabuf)
 EXPORT_SYMBOL_GPL(dma_buf_put);
 
 /**
- * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list; optionally,
- * calls attach() of dma_buf_ops to allow device-specific attach functionality
+ * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list
  * @dmabuf:		[in]	buffer to attach device to.
  * @dev:		[in]	device to be attached.
  * @importer_ops:	[in]	importer operations for the attachment
@@ -663,6 +663,9 @@ EXPORT_SYMBOL_GPL(dma_buf_put);
  * Returns struct dma_buf_attachment pointer for this attachment. Attachments
  * must be cleaned up by calling dma_buf_detach().
  *
+ * Optionally this calls &dma_buf_ops.attach to allow device-specific attach
+ * functionality.
+ *
  * Returns:
  *
  * A pointer to newly created &dma_buf_attachment on success, or a negative
@@ -769,12 +772,13 @@ struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
 EXPORT_SYMBOL_GPL(dma_buf_attach);
 
 /**
- * dma_buf_detach - Remove the given attachment from dmabuf's attachments list;
- * optionally calls detach() of dma_buf_ops for device-specific detach
+ * dma_buf_detach - Remove the given attachment from dmabuf's attachments list
  * @dmabuf:	[in]	buffer to detach from.
  * @attach:	[in]	attachment to be detached; is free'd after this call.
  *
  * Clean up a device attachment obtained by calling dma_buf_attach().
+ *
+ * Optionally this calls &dma_buf_ops.detach for device-specific detach.
  */
 void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
 {
@@ -1061,11 +1065,12 @@ EXPORT_SYMBOL_GPL(dma_buf_move_notify);
  *   shootdowns would increase the complexity quite a bit.
  *
  *   Interface::
+ *
  *      int dma_buf_mmap(struct dma_buf \*, struct vm_area_struct \*,
  *		       unsigned long);
  *
  *   If the importing subsystem simply provides a special-purpose mmap call to
- *   set up a mapping in userspace, calling do_mmap with dma_buf->file will
+ *   set up a mapping in userspace, calling do_mmap with &dma_buf.file will
  *   equally achieve that for a dma-buf object.
  */
 
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 7eca37c8b10c..43802a31b25d 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -85,10 +85,10 @@ struct dma_buf_ops {
 	/**
 	 * @pin:
 	 *
-	 * This is called by dma_buf_pin and lets the exporter know that the
+	 * This is called by dma_buf_pin() and lets the exporter know that the
 	 * DMA-buf can't be moved any more.
 	 *
-	 * This is called with the dmabuf->resv object locked and is mutual
+	 * This is called with the &dmabuf.resv object locked and is mutual
 	 * exclusive with @cache_sgt_mapping.
 	 *
 	 * This callback is optional and should only be used in limited use
@@ -103,7 +103,7 @@ struct dma_buf_ops {
 	/**
 	 * @unpin:
 	 *
-	 * This is called by dma_buf_unpin and lets the exporter know that the
+	 * This is called by dma_buf_unpin() and lets the exporter know that the
 	 * DMA-buf can be moved again.
 	 *
 	 * This is called with the dmabuf->resv object locked and is mutual
-- 
2.29.2
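
To complement the mmap paragraph touched above and the DMA_BUF_IOCTL_SYNC note
in patch 1, a userspace-side sketch of the same bracketing rule might look like
this (illustrative only; read_pixels() is a made-up helper and the ioctl return
values are not checked):

    #include <linux/dma-buf.h>  /* struct dma_buf_sync, DMA_BUF_IOCTL_SYNC */
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <string.h>

    static int read_pixels(int dmabuf_fd, size_t size, void *dst)
    {
            struct dma_buf_sync sync = { .flags = 0 };
            void *map;

            map = mmap(NULL, size, PROT_READ, MAP_SHARED, dmabuf_fd, 0);
            if (map == MAP_FAILED)
                    return -1;

            /* ends up in the exporter's begin_cpu_access callback */
            sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ;
            ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

            memcpy(dst, map, size);

            /* ends up in the exporter's end_cpu_access callback */
            sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
            ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

            munmap(map, size);
            return 0;
    }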


* [PATCH 3/4] dma-buf: begin/end_cpu might lock the dma_resv lock
  2020-12-11 15:58 [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges Daniel Vetter
  2020-12-11 15:58 ` [PATCH 2/4] dma-buf: some kerneldoc formatting fixes Daniel Vetter
@ 2020-12-11 15:58 ` Daniel Vetter
  2020-12-11 22:31   ` kernel test robot
                     ` (2 more replies)
  2020-12-11 15:58 ` [PATCH 4/4] dma-buf: doc polish for pin/unpin Daniel Vetter
  2020-12-14 10:33 ` [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges Christian König
  3 siblings, 3 replies; 12+ messages in thread
From: Daniel Vetter @ 2020-12-11 15:58 UTC (permalink / raw)
  To: DRI Development
  Cc: Daniel Vetter, Intel Graphics Development, linaro-mm-sig,
	Thomas Zimmermann, Daniel Vetter, Christian König,
	linux-media

At least amdgpu and i915 do, so let's just document this as the rule.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
 drivers/dma-buf/dma-buf.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index e1fa6c6f02c4..00d5afe904cc 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1118,6 +1118,8 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 	if (WARN_ON(!dmabuf))
 		return -EINVAL;
 
+	might_lock(&dma_buf->resv.lock);
+
 	if (dmabuf->ops->begin_cpu_access)
 		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
 
@@ -1151,6 +1153,8 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 
 	WARN_ON(!dmabuf);
 
+	might_lock(&dma_buf->resv.lock);
+
 	if (dmabuf->ops->end_cpu_access)
 		ret = dmabuf->ops->end_cpu_access(dmabuf, direction);
 
-- 
2.29.2
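
For readers wondering why begin/end_cpu_access may take the reservation lock
at all: exporters such as amdgpu and i915 grab their dma_resv lock in these
callbacks, for example to wait for pending DMA before handing the memory to
the CPU. A rough, illustrative sketch of such an exporter callback follows
(not actual driver code; example_flush_cpu_caches() is a made-up helper). The
might_lock() annotation simply tells lockdep that every caller must be
prepared for this.

    #include <linux/dma-buf.h>
    #include <linux/dma-resv.h>
    #include <linux/sched.h>    /* MAX_SCHEDULE_TIMEOUT */

    static int example_begin_cpu_access(struct dma_buf *dmabuf,
                                        enum dma_data_direction dir)
    {
            long lret;
            int ret;

            ret = dma_resv_lock_interruptible(dmabuf->resv, NULL);
            if (ret)
                    return ret;

            /* wait for pending DMA tracked in the reservation object */
            lret = dma_resv_wait_timeout_rcu(dmabuf->resv, true, true,
                                             MAX_SCHEDULE_TIMEOUT);
            if (lret < 0)
                    ret = lret;
            else
                    ret = example_flush_cpu_caches(dmabuf, dir); /* made-up helper */

            dma_resv_unlock(dmabuf->resv);
            return ret;
    }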


* [PATCH 4/4] dma-buf: doc polish for pin/unpin
  2020-12-11 15:58 [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges Daniel Vetter
  2020-12-11 15:58 ` [PATCH 2/4] dma-buf: some kerneldoc formatting fixes Daniel Vetter
  2020-12-11 15:58 ` [PATCH 3/4] dma-buf: begin/end_cpu might lock the dma_resv lock Daniel Vetter
@ 2020-12-11 15:58 ` Daniel Vetter
  2020-12-14 10:33 ` [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges Christian König
  3 siblings, 0 replies; 12+ messages in thread
From: Daniel Vetter @ 2020-12-11 15:58 UTC (permalink / raw)
  To: DRI Development
  Cc: Daniel Vetter, Intel Graphics Development, linaro-mm-sig,
	Thomas Zimmermann, Daniel Vetter, Christian König,
	linux-media

Motivated by a discussion with Christian and Thomas: Try to untangle a
bit what's relevant for importers and what's relevant for exporters.

Also add an assert that really only dynamic importers use the API
function; anything else doesn't make sense.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
 drivers/dma-buf/dma-buf.c | 19 ++++++++++++++++---
 include/linux/dma-buf.h   |  8 +++++---
 2 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 00d5afe904cc..1065545e9ca1 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -809,9 +809,15 @@ EXPORT_SYMBOL_GPL(dma_buf_detach);
 
 /**
  * dma_buf_pin - Lock down the DMA-buf
- *
  * @attach:	[in]	attachment which should be pinned
  *
+ * Only dynamic importers (who set up @attach with dma_buf_dynamic_attach()) may
+ * call this, and only for limited use cases like scanout and not for temporary
+ * pin operations. It is not permitted to allow userspace to pin arbitrary
+ * amounts of buffers through this interface.
+ *
+ * Buffers must be unpinned by calling dma_buf_unpin().
+ *
  * Returns:
  * 0 on success, negative error code on failure.
  */
@@ -820,6 +826,8 @@ int dma_buf_pin(struct dma_buf_attachment *attach)
 	struct dma_buf *dmabuf = attach->dmabuf;
 	int ret = 0;
 
+	WARN_ON(!dma_buf_attachment_is_dynamic(attach));
+
 	dma_resv_assert_held(dmabuf->resv);
 
 	if (dmabuf->ops->pin)
@@ -830,14 +838,19 @@ int dma_buf_pin(struct dma_buf_attachment *attach)
 EXPORT_SYMBOL_GPL(dma_buf_pin);
 
 /**
- * dma_buf_unpin - Remove lock from DMA-buf
- *
+ * dma_buf_unpin - Unpin a DMA-buf
  * @attach:	[in]	attachment which should be unpinned
+ *
+ * This unpins a buffer pinned by dma_buf_pin() and allows the exporter to move
+ * any mapping of @attach again and inform the importer through
+ * &dma_buf_attach_ops.move_notify.
  */
 void dma_buf_unpin(struct dma_buf_attachment *attach)
 {
 	struct dma_buf *dmabuf = attach->dmabuf;
 
+	WARN_ON(!dma_buf_attachment_is_dynamic(attach));
+
 	dma_resv_assert_held(dmabuf->resv);
 
 	if (dmabuf->ops->unpin)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 43802a31b25d..628681bf6c99 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -86,13 +86,15 @@ struct dma_buf_ops {
 	 * @pin:
 	 *
 	 * This is called by dma_buf_pin() and lets the exporter know that the
-	 * DMA-buf can't be moved any more.
+	 * DMA-buf can't be moved any more. The exporter should pin the buffer
+	 * into system memory to make sure it is generally accessible by other
+	 * devices.
 	 *
 	 * This is called with the &dmabuf.resv object locked and is mutual
 	 * exclusive with @cache_sgt_mapping.
 	 *
-	 * This callback is optional and should only be used in limited use
-	 * cases like scanout and not for temporary pin operations.
+	 * This is called automatically for non-dynamic importers from
+	 * dma_buf_attach().
 	 *
 	 * Returns:
 	 *
-- 
2.29.2
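
Putting the pin/unpin rules above together from the dynamic-importer side, a
sketch could look like the following (illustrative only, not real driver code;
@attach is assumed to come from dma_buf_dynamic_attach(), and locking and
error handling are reduced to the essentials):

    #include <linux/dma-buf.h>
    #include <linux/dma-resv.h>

    /* Pin and map a buffer for a long-term use like scanout. */
    static struct sg_table *example_pin_for_scanout(struct dma_buf_attachment *attach)
    {
            struct sg_table *sgt;
            int ret;

            dma_resv_lock(attach->dmabuf->resv, NULL);

            ret = dma_buf_pin(attach);  /* exporter may no longer move the buffer */
            if (ret) {
                    sgt = ERR_PTR(ret);
                    goto out_unlock;
            }

            sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
            if (IS_ERR(sgt))
                    dma_buf_unpin(attach);

    out_unlock:
            dma_resv_unlock(attach->dmabuf->resv);
            return sgt;
    }

    /* Once scanout has stopped, balance the pin again. */
    static void example_unpin_after_scanout(struct dma_buf_attachment *attach,
                                            struct sg_table *sgt)
    {
            dma_resv_lock(attach->dmabuf->resv, NULL);
            dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
            dma_buf_unpin(attach);
            dma_resv_unlock(attach->dmabuf->resv);
    }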


* Re: [PATCH 3/4] dma-buf: begin/end_cpu might lock the dma_resv lock
  2020-12-11 15:58 ` [PATCH 3/4] dma-buf: begin/end_cpu might lock the dma_resv lock Daniel Vetter
@ 2020-12-11 22:31   ` kernel test robot
  2020-12-14 10:33   ` Christian König
  2020-12-14 17:16   ` [PATCH] " Daniel Vetter
  2 siblings, 0 replies; 12+ messages in thread
From: kernel test robot @ 2020-12-11 22:31 UTC (permalink / raw)
  To: Daniel Vetter, DRI Development
  Cc: kbuild-all, Daniel Vetter, Intel Graphics Development,
	linaro-mm-sig, clang-built-linux, Thomas Zimmermann,
	Christian König, linux-media

[-- Attachment #1: Type: text/plain, Size: 4371 bytes --]

Hi Daniel,

I love your patch! Yet something to improve:

[auto build test ERROR on next-20201211]
[also build test ERROR on v5.10-rc7]
[cannot apply to tegra-drm/drm/tegra/for-next linus/master v5.10-rc7 v5.10-rc6 v5.10-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Daniel-Vetter/dma-buf-Remove-kmap-kerneldoc-vestiges/20201212-000040
base:    3cc2bd440f2171f093b3a8480a4b54d8c270ed38
config: powerpc64-randconfig-r035-20201210 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 5ff35356f1af2bb92785b38c657463924d9ec386)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install powerpc64 cross compiling tool for clang build
        # apt-get install binutils-powerpc64-linux-gnu
        # https://github.com/0day-ci/linux/commit/24d4fcf0e09c80974556c7639cb630f86a544378
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Daniel-Vetter/dma-buf-Remove-kmap-kerneldoc-vestiges/20201212-000040
        git checkout 24d4fcf0e09c80974556c7639cb630f86a544378
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> drivers/dma-buf/dma-buf.c:1121:14: error: use of undeclared identifier 'dma_buf'
           might_lock(&dma_buf->resv.lock);
                       ^
>> drivers/dma-buf/dma-buf.c:1121:14: error: use of undeclared identifier 'dma_buf'
>> drivers/dma-buf/dma-buf.c:1121:14: error: use of undeclared identifier 'dma_buf'
   drivers/dma-buf/dma-buf.c:1156:14: error: use of undeclared identifier 'dma_buf'
           might_lock(&dma_buf->resv.lock);
                       ^
   drivers/dma-buf/dma-buf.c:1156:14: error: use of undeclared identifier 'dma_buf'
   drivers/dma-buf/dma-buf.c:1156:14: error: use of undeclared identifier 'dma_buf'
   6 errors generated.

vim +/dma_buf +1121 drivers/dma-buf/dma-buf.c

  1093	
  1094	/**
  1095	 * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
  1096	 * cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific
  1097	 * preparations. Coherency is only guaranteed in the specified range for the
  1098	 * specified access direction.
  1099	 * @dmabuf:	[in]	buffer to prepare cpu access for.
  1100	 * @direction:	[in]	length of range for cpu access.
  1101	 *
  1102	 * After the cpu access is complete the caller should call
  1103	 * dma_buf_end_cpu_access(). Only when cpu access is braketed by both calls is
  1104	 * it guaranteed to be coherent with other DMA access.
  1105	 *
  1106	 * This function will also wait for any DMA transactions tracked through
  1107	 * implicit synchronization in &dma_buf.resv. For DMA transactions with explicit
  1108	 * synchronization this function will only ensure cache coherency, callers must
  1109	 * ensure synchronization with such DMA transactions on their own.
  1110	 *
  1111	 * Can return negative error values, returns 0 on success.
  1112	 */
  1113	int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
  1114				     enum dma_data_direction direction)
  1115	{
  1116		int ret = 0;
  1117	
  1118		if (WARN_ON(!dmabuf))
  1119			return -EINVAL;
  1120	
> 1121		might_lock(&dma_buf->resv.lock);
  1122	
  1123		if (dmabuf->ops->begin_cpu_access)
  1124			ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
  1125	
  1126		/* Ensure that all fences are waited upon - but we first allow
  1127		 * the native handler the chance to do so more efficiently if it
  1128		 * chooses. A double invocation here will be reasonably cheap no-op.
  1129		 */
  1130		if (ret == 0)
  1131			ret = __dma_buf_begin_cpu_access(dmabuf, direction);
  1132	
  1133		return ret;
  1134	}
  1135	EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
  1136	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 44099 bytes --]


* Re: [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges
  2020-12-11 15:58 [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges Daniel Vetter
                   ` (2 preceding siblings ...)
  2020-12-11 15:58 ` [PATCH 4/4] dma-buf: doc polish for pin/unpin Daniel Vetter
@ 2020-12-14 10:33 ` Christian König
  2020-12-14 16:01   ` Daniel Vetter
  3 siblings, 1 reply; 12+ messages in thread
From: Christian König @ 2020-12-14 10:33 UTC (permalink / raw)
  To: Daniel Vetter, DRI Development
  Cc: Intel Graphics Development, linaro-mm-sig, Thomas Zimmermann,
	Daniel Vetter, linux-media

On 11.12.20 at 16:58, Daniel Vetter wrote:
> Also try to clarify a bit when dma_buf_begin/end_cpu_access should
> be called.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org
> ---
>   drivers/dma-buf/dma-buf.c | 20 ++++++++++++++------
>   include/linux/dma-buf.h   | 25 +++++++++----------------
>   2 files changed, 23 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index e63684d4cd90..a12fdffa130f 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1001,15 +1001,15 @@ EXPORT_SYMBOL_GPL(dma_buf_move_notify);
>    *   vmalloc space might be limited and result in vmap calls failing.
>    *
>    *   Interfaces::
> + *
>    *      void \*dma_buf_vmap(struct dma_buf \*dmabuf)
>    *      void dma_buf_vunmap(struct dma_buf \*dmabuf, void \*vaddr)
>    *
>    *   The vmap call can fail if there is no vmap support in the exporter, or if
> - *   it runs out of vmalloc space. Fallback to kmap should be implemented. Note
> - *   that the dma-buf layer keeps a reference count for all vmap access and
> - *   calls down into the exporter's vmap function only when no vmapping exists,
> - *   and only unmaps it once. Protection against concurrent vmap/vunmap calls is
> - *   provided by taking the dma_buf->lock mutex.
> + *   it runs out of vmalloc space. Note that the dma-buf layer keeps a reference
> + *   count for all vmap access and calls down into the exporter's vmap function
> + *   only when no vmapping exists, and only unmaps it once. Protection against
> + *   concurrent vmap/vunmap calls is provided by taking the &dma_buf.lock mutex.

Who is taking the lock? The caller of the dma_buf_vmap/vunmap()
functions, the functions themselves, or the callback inside the exporter?

Christian.

>    *
>    * - For full compatibility on the importer side with existing userspace
>    *   interfaces, which might already support mmap'ing buffers. This is needed in
> @@ -1098,6 +1098,11 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>    * dma_buf_end_cpu_access(). Only when cpu access is braketed by both calls is
>    * it guaranteed to be coherent with other DMA access.
>    *
> + * This function will also wait for any DMA transactions tracked through
> + * implicit synchronization in &dma_buf.resv. For DMA transactions with explicit
> + * synchronization this function will only ensure cache coherency, callers must
> + * ensure synchronization with such DMA transactions on their own.
> + *
>    * Can return negative error values, returns 0 on success.
>    */
>   int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> @@ -1199,7 +1204,10 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
>    * This call may fail due to lack of virtual mapping address space.
>    * These calls are optional in drivers. The intended use for them
>    * is for mapping objects linear in kernel space for high use objects.
> - * Please attempt to use kmap/kunmap before thinking about these interfaces.
> + *
> + * To ensure coherency users must call dma_buf_begin_cpu_access() and
> + * dma_buf_end_cpu_access() around any cpu access performed through this
> + * mapping.
>    *
>    * Returns 0 on success, or a negative errno code otherwise.
>    */
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index cf72699cb2bc..7eca37c8b10c 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -183,24 +183,19 @@ struct dma_buf_ops {
>   	 * @begin_cpu_access:
>   	 *
>   	 * This is called from dma_buf_begin_cpu_access() and allows the
> -	 * exporter to ensure that the memory is actually available for cpu
> -	 * access - the exporter might need to allocate or swap-in and pin the
> -	 * backing storage. The exporter also needs to ensure that cpu access is
> -	 * coherent for the access direction. The direction can be used by the
> -	 * exporter to optimize the cache flushing, i.e. access with a different
> +	 * exporter to ensure that the memory is actually coherent for cpu
> +	 * access. The exporter also needs to ensure that cpu access is coherent
> +	 * for the access direction. The direction can be used by the exporter
> +	 * to optimize the cache flushing, i.e. access with a different
>   	 * direction (read instead of write) might return stale or even bogus
>   	 * data (e.g. when the exporter needs to copy the data to temporary
>   	 * storage).
>   	 *
> -	 * This callback is optional.
> +	 * Note that this is both called through the DMA_BUF_IOCTL_SYNC IOCTL
> +	 * command for userspace mappings established through @mmap, and also
> +	 * for kernel mappings established with @vmap.
>   	 *
> -	 * FIXME: This is both called through the DMA_BUF_IOCTL_SYNC command
> -	 * from userspace (where storage shouldn't be pinned to avoid handing
> -	 * de-factor mlock rights to userspace) and for the kernel-internal
> -	 * users of the various kmap interfaces, where the backing storage must
> -	 * be pinned to guarantee that the atomic kmap calls can succeed. Since
> -	 * there's no in-kernel users of the kmap interfaces yet this isn't a
> -	 * real problem.
> +	 * This callback is optional.
>   	 *
>   	 * Returns:
>   	 *
> @@ -216,9 +211,7 @@ struct dma_buf_ops {
>   	 *
>   	 * This is called from dma_buf_end_cpu_access() when the importer is
>   	 * done accessing the CPU. The exporter can use this to flush caches and
> -	 * unpin any resources pinned in @begin_cpu_access.
> -	 * The result of any dma_buf kmap calls after end_cpu_access is
> -	 * undefined.
> +	 * undo anything else done in @begin_cpu_access.
>   	 *
>   	 * This callback is optional.
>   	 *


* Re: [PATCH 3/4] dma-buf: begin/end_cpu might lock the dma_resv lock
  2020-12-11 15:58 ` [PATCH 3/4] dma-buf: begin/end_cpu might lock the dma_resv lock Daniel Vetter
  2020-12-11 22:31   ` kernel test robot
@ 2020-12-14 10:33   ` Christian König
  2020-12-14 17:16   ` [PATCH] " Daniel Vetter
  2 siblings, 0 replies; 12+ messages in thread
From: Christian König @ 2020-12-14 10:33 UTC (permalink / raw)
  To: Daniel Vetter, DRI Development
  Cc: Intel Graphics Development, linaro-mm-sig, Thomas Zimmermann,
	Daniel Vetter, linux-media

On 11.12.20 at 16:58, Daniel Vetter wrote:
> At least amdgpu and i915 do, so let's just document this as the rule.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>   drivers/dma-buf/dma-buf.c | 4 ++++
>   1 file changed, 4 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index e1fa6c6f02c4..00d5afe904cc 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1118,6 +1118,8 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>   	if (WARN_ON(!dmabuf))
>   		return -EINVAL;
>   
> +	might_lock(&dma_buf->resv.lock);
> +
>   	if (dmabuf->ops->begin_cpu_access)
>   		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
>   
> @@ -1151,6 +1153,8 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>   
>   	WARN_ON(!dmabuf);
>   
> +	might_lock(&dma_buf->resv.lock);
> +
>   	if (dmabuf->ops->end_cpu_access)
>   		ret = dmabuf->ops->end_cpu_access(dmabuf, direction);
>   


* Re: [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges
  2020-12-14 10:33 ` [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges Christian König
@ 2020-12-14 16:01   ` Daniel Vetter
  2020-12-15 14:18     ` Christian König
  0 siblings, 1 reply; 12+ messages in thread
From: Daniel Vetter @ 2020-12-14 16:01 UTC (permalink / raw)
  To: Christian König
  Cc: Daniel Vetter, Intel Graphics Development, DRI Development,
	linaro-mm-sig, Thomas Zimmermann, Daniel Vetter, linux-media

On Mon, Dec 14, 2020 at 11:33:10AM +0100, Christian König wrote:
> On 11.12.20 at 16:58, Daniel Vetter wrote:
> > Also try to clarify a bit when dma_buf_begin/end_cpu_access should
> > be called.
> > 
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > Cc: "Christian König" <christian.koenig@amd.com>
> > Cc: linux-media@vger.kernel.org
> > Cc: linaro-mm-sig@lists.linaro.org
> > ---
> >   drivers/dma-buf/dma-buf.c | 20 ++++++++++++++------
> >   include/linux/dma-buf.h   | 25 +++++++++----------------
> >   2 files changed, 23 insertions(+), 22 deletions(-)
> > 
> > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> > index e63684d4cd90..a12fdffa130f 100644
> > --- a/drivers/dma-buf/dma-buf.c
> > +++ b/drivers/dma-buf/dma-buf.c
> > @@ -1001,15 +1001,15 @@ EXPORT_SYMBOL_GPL(dma_buf_move_notify);
> >    *   vmalloc space might be limited and result in vmap calls failing.
> >    *
> >    *   Interfaces::
> > + *
> >    *      void \*dma_buf_vmap(struct dma_buf \*dmabuf)
> >    *      void dma_buf_vunmap(struct dma_buf \*dmabuf, void \*vaddr)
> >    *
> >    *   The vmap call can fail if there is no vmap support in the exporter, or if
> > - *   it runs out of vmalloc space. Fallback to kmap should be implemented. Note
> > - *   that the dma-buf layer keeps a reference count for all vmap access and
> > - *   calls down into the exporter's vmap function only when no vmapping exists,
> > - *   and only unmaps it once. Protection against concurrent vmap/vunmap calls is
> > - *   provided by taking the dma_buf->lock mutex.
> > + *   it runs out of vmalloc space. Note that the dma-buf layer keeps a reference
> > + *   count for all vmap access and calls down into the exporter's vmap function
> > + *   only when no vmapping exists, and only unmaps it once. Protection against
> > + *   concurrent vmap/vunmap calls is provided by taking the &dma_buf.lock mutex.
> 
> Who is taking the lock? The caller of the dma_buf_vmap/vunmap() functions,
> the functions themselves, or the callback inside the exporter?

That's the part I didn't change at all here, just re-laid out the line
breaking. I only removed the outdated kmap section here.

Should I do another patch and remove this one sentence here (it's kinda
pointless and generally we don't muse about implementation details that
callers don't care about)?

I did try and do a cursory review of the dma-buf docs, but this is kinda
not meant as an all-out revamp. Just a few things I've noticed while
reviewing Thomas' vmap_local stuff.
-Daniel

> 
> Christian.
> 
> >    *
> >    * - For full compatibility on the importer side with existing userspace
> >    *   interfaces, which might already support mmap'ing buffers. This is needed in
> > @@ -1098,6 +1098,11 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >    * dma_buf_end_cpu_access(). Only when cpu access is braketed by both calls is
> >    * it guaranteed to be coherent with other DMA access.
> >    *
> > + * This function will also wait for any DMA transactions tracked through
> > + * implicit synchronization in &dma_buf.resv. For DMA transactions with explicit
> > + * synchronization this function will only ensure cache coherency, callers must
> > + * ensure synchronization with such DMA transactions on their own.
> > + *
> >    * Can return negative error values, returns 0 on success.
> >    */
> >   int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> > @@ -1199,7 +1204,10 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
> >    * This call may fail due to lack of virtual mapping address space.
> >    * These calls are optional in drivers. The intended use for them
> >    * is for mapping objects linear in kernel space for high use objects.
> > - * Please attempt to use kmap/kunmap before thinking about these interfaces.
> > + *
> > + * To ensure coherency users must call dma_buf_begin_cpu_access() and
> > + * dma_buf_end_cpu_access() around any cpu access performed through this
> > + * mapping.
> >    *
> >    * Returns 0 on success, or a negative errno code otherwise.
> >    */
> > diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> > index cf72699cb2bc..7eca37c8b10c 100644
> > --- a/include/linux/dma-buf.h
> > +++ b/include/linux/dma-buf.h
> > @@ -183,24 +183,19 @@ struct dma_buf_ops {
> >   	 * @begin_cpu_access:
> >   	 *
> >   	 * This is called from dma_buf_begin_cpu_access() and allows the
> > -	 * exporter to ensure that the memory is actually available for cpu
> > -	 * access - the exporter might need to allocate or swap-in and pin the
> > -	 * backing storage. The exporter also needs to ensure that cpu access is
> > -	 * coherent for the access direction. The direction can be used by the
> > -	 * exporter to optimize the cache flushing, i.e. access with a different
> > +	 * exporter to ensure that the memory is actually coherent for cpu
> > +	 * access. The exporter also needs to ensure that cpu access is coherent
> > +	 * for the access direction. The direction can be used by the exporter
> > +	 * to optimize the cache flushing, i.e. access with a different
> >   	 * direction (read instead of write) might return stale or even bogus
> >   	 * data (e.g. when the exporter needs to copy the data to temporary
> >   	 * storage).
> >   	 *
> > -	 * This callback is optional.
> > +	 * Note that this is both called through the DMA_BUF_IOCTL_SYNC IOCTL
> > +	 * command for userspace mappings established through @mmap, and also
> > +	 * for kernel mappings established with @vmap.
> >   	 *
> > -	 * FIXME: This is both called through the DMA_BUF_IOCTL_SYNC command
> > -	 * from userspace (where storage shouldn't be pinned to avoid handing
> > -	 * de-factor mlock rights to userspace) and for the kernel-internal
> > -	 * users of the various kmap interfaces, where the backing storage must
> > -	 * be pinned to guarantee that the atomic kmap calls can succeed. Since
> > -	 * there's no in-kernel users of the kmap interfaces yet this isn't a
> > -	 * real problem.
> > +	 * This callback is optional.
> >   	 *
> >   	 * Returns:
> >   	 *
> > @@ -216,9 +211,7 @@ struct dma_buf_ops {
> >   	 *
> >   	 * This is called from dma_buf_end_cpu_access() when the importer is
> >   	 * done accessing the CPU. The exporter can use this to flush caches and
> > -	 * unpin any resources pinned in @begin_cpu_access.
> > -	 * The result of any dma_buf kmap calls after end_cpu_access is
> > -	 * undefined.
> > +	 * undo anything else done in @begin_cpu_access.
> >   	 *
> >   	 * This callback is optional.
> >   	 *
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
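
To make the answer to Christian's question concrete: it is the dma-buf core
itself, i.e. dma_buf_vmap()/dma_buf_vunmap(), that takes the mutex around the
exporter's callback and does the reference counting the paragraph describes.
Schematically it behaves roughly like the sketch below; this is a
simplification of the scheme described in the kerneldoc, not the exact
upstream code, and the field names are illustrative.

    /* Simplified sketch of the vmap refcounting described above. */
    void *dma_buf_vmap(struct dma_buf *dmabuf)
    {
            void *ptr;

            mutex_lock(&dmabuf->lock);

            if (dmabuf->vmapping_counter) {
                    /* already mapped: only bump the refcount */
                    dmabuf->vmapping_counter++;
                    ptr = dmabuf->vmap_ptr;
            } else {
                    /* first user: call into the exporter */
                    ptr = dmabuf->ops->vmap(dmabuf);
                    if (!IS_ERR_OR_NULL(ptr)) {
                            dmabuf->vmap_ptr = ptr;
                            dmabuf->vmapping_counter = 1;
                    }
            }

            mutex_unlock(&dmabuf->lock);
            return ptr;
    }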

* [PATCH] dma-buf: begin/end_cpu might lock the dma_resv lock
  2020-12-11 15:58 ` [PATCH 3/4] dma-buf: begin/end_cpu might lock the dma_resv lock Daniel Vetter
  2020-12-11 22:31   ` kernel test robot
  2020-12-14 10:33   ` Christian König
@ 2020-12-14 17:16   ` Daniel Vetter
  2020-12-15 14:16     ` Christian König
  2 siblings, 1 reply; 12+ messages in thread
From: Daniel Vetter @ 2020-12-14 17:16 UTC (permalink / raw)
  To: DRI Development
  Cc: Daniel Vetter, Intel Graphics Development, linaro-mm-sig,
	Thomas Zimmermann, Daniel Vetter, Christian König,
	linux-media

At least amdgpu and i915 do, so let's just document this as the rule.

v2: Works better with less typos (intel-gfx-ci)

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
 drivers/dma-buf/dma-buf.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index e1fa6c6f02c4..a0a02ef888da 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1118,6 +1118,8 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 	if (WARN_ON(!dmabuf))
 		return -EINVAL;
 
+	might_lock(&dmabuf->resv->lock.base);
+
 	if (dmabuf->ops->begin_cpu_access)
 		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
 
@@ -1151,6 +1153,8 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 
 	WARN_ON(!dmabuf);
 
+	might_lock(&dmabuf->resv->lock.base);
+
 	if (dmabuf->ops->end_cpu_access)
 		ret = dmabuf->ops->end_cpu_access(dmabuf, direction);
 
-- 
2.29.2
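
For context on what the annotation buys: might_lock() is a lockdep-only hint
that the annotated function may take the given lock, so misuse is flagged on
every exporter, not only on those whose callback actually locks. A
hypothetical importer bug it would now catch under lockdep:

        /* Hypothetical bug: entering with the reservation lock already held.
         * With the annotation this splats even if the exporter's
         * begin_cpu_access never takes the lock itself.
         */
        dma_resv_lock(dmabuf->resv, NULL);
        ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE); /* lockdep splat */
        /* ... */
        dma_resv_unlock(dmabuf->resv);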


* Re: [PATCH] dma-buf: begin/end_cpu might lock the dma_resv lock
  2020-12-14 17:16   ` [PATCH] " Daniel Vetter
@ 2020-12-15 14:16     ` Christian König
  0 siblings, 0 replies; 12+ messages in thread
From: Christian König @ 2020-12-15 14:16 UTC (permalink / raw)
  To: Daniel Vetter, DRI Development
  Cc: Intel Graphics Development, linaro-mm-sig, Thomas Zimmermann,
	Daniel Vetter, linux-media

On 14.12.20 at 18:16, Daniel Vetter wrote:
> At least amdgpu and i915 do, so let's just document this as the rule.
>
> v2: Works better with less typos (intel-gfx-ci)
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>   drivers/dma-buf/dma-buf.c | 4 ++++
>   1 file changed, 4 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index e1fa6c6f02c4..a0a02ef888da 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1118,6 +1118,8 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>   	if (WARN_ON(!dmabuf))
>   		return -EINVAL;
>   
> +	might_lock(&dmabuf->resv->lock.base);
> +
>   	if (dmabuf->ops->begin_cpu_access)
>   		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
>   
> @@ -1151,6 +1153,8 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>   
>   	WARN_ON(!dmabuf);
>   
> +	might_lock(&dmabuf->resv->lock.base);
> +
>   	if (dmabuf->ops->end_cpu_access)
>   		ret = dmabuf->ops->end_cpu_access(dmabuf, direction);
>   


* Re: [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges
  2020-12-14 16:01   ` Daniel Vetter
@ 2020-12-15 14:18     ` Christian König
  2020-12-16 10:29       ` Daniel Vetter
  0 siblings, 1 reply; 12+ messages in thread
From: Christian König @ 2020-12-15 14:18 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Vetter, Intel Graphics Development, DRI Development,
	linaro-mm-sig, Thomas Zimmermann, Daniel Vetter, linux-media

On 14.12.20 at 17:01, Daniel Vetter wrote:
> On Mon, Dec 14, 2020 at 11:33:10AM +0100, Christian König wrote:
> > On 11.12.20 at 16:58, Daniel Vetter wrote:
>>> Also try to clarify a bit when dma_buf_begin/end_cpu_access should
>>> be called.
>>>
>>> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>>> Cc: "Christian König" <christian.koenig@amd.com>
>>> Cc: linux-media@vger.kernel.org
>>> Cc: linaro-mm-sig@lists.linaro.org
>>> ---
>>>    drivers/dma-buf/dma-buf.c | 20 ++++++++++++++------
>>>    include/linux/dma-buf.h   | 25 +++++++++----------------
>>>    2 files changed, 23 insertions(+), 22 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>> index e63684d4cd90..a12fdffa130f 100644
>>> --- a/drivers/dma-buf/dma-buf.c
>>> +++ b/drivers/dma-buf/dma-buf.c
>>> @@ -1001,15 +1001,15 @@ EXPORT_SYMBOL_GPL(dma_buf_move_notify);
>>>     *   vmalloc space might be limited and result in vmap calls failing.
>>>     *
>>>     *   Interfaces::
>>> + *
>>>     *      void \*dma_buf_vmap(struct dma_buf \*dmabuf)
>>>     *      void dma_buf_vunmap(struct dma_buf \*dmabuf, void \*vaddr)
>>>     *
>>>     *   The vmap call can fail if there is no vmap support in the exporter, or if
>>> - *   it runs out of vmalloc space. Fallback to kmap should be implemented. Note
>>> - *   that the dma-buf layer keeps a reference count for all vmap access and
>>> - *   calls down into the exporter's vmap function only when no vmapping exists,
>>> - *   and only unmaps it once. Protection against concurrent vmap/vunmap calls is
>>> - *   provided by taking the dma_buf->lock mutex.
>>> + *   it runs out of vmalloc space. Note that the dma-buf layer keeps a reference
>>> + *   count for all vmap access and calls down into the exporter's vmap function
>>> + *   only when no vmapping exists, and only unmaps it once. Protection against
>>> + *   concurrent vmap/vunmap calls is provided by taking the &dma_buf.lock mutex.
>> Who is taking the lock? The caller of the dma_buf_vmap/vunmap() functions,
>> the functions themselves, or the callback inside the exporter?
> That's the part I didn't change at all here, just re-laid out the line
> breaking. I only removed the outdated kmap section here.

I just wanted to point out that this still isn't described here very well.


> Should I do another patch and remove this one sentence here (it's kinda
> pointless and generally we don't muse about implementation details that
> callers don't care about)?

Na, works like this for me.

> I did try and do a cursory review of the dma-buf docs, but this is kinda
> not meant as an all-out revamp. Just a few things I've noticed while
> reviewing Thomas' vmap_local stuff.


Feel free to add an Acked-by: Christian König <christian.koenig@amd.com>
to the series.

Christian.

> -Daniel
>
>> Christian.
>>
>>>     *
>>>     * - For full compatibility on the importer side with existing userspace
>>>     *   interfaces, which might already support mmap'ing buffers. This is needed in
>>> @@ -1098,6 +1098,11 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>>     * dma_buf_end_cpu_access(). Only when cpu access is braketed by both calls is
>>>     * it guaranteed to be coherent with other DMA access.
>>>     *
>>> + * This function will also wait for any DMA transactions tracked through
>>> + * implicit synchronization in &dma_buf.resv. For DMA transactions with explicit
>>> + * synchronization this function will only ensure cache coherency, callers must
>>> + * ensure synchronization with such DMA transactions on their own.
>>> + *
>>>     * Can return negative error values, returns 0 on success.
>>>     */
>>>    int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>> @@ -1199,7 +1204,10 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
>>>     * This call may fail due to lack of virtual mapping address space.
>>>     * These calls are optional in drivers. The intended use for them
>>>     * is for mapping objects linear in kernel space for high use objects.
>>> - * Please attempt to use kmap/kunmap before thinking about these interfaces.
>>> + *
>>> + * To ensure coherency users must call dma_buf_begin_cpu_access() and
>>> + * dma_buf_end_cpu_access() around any cpu access performed through this
>>> + * mapping.
>>>     *
>>>     * Returns 0 on success, or a negative errno code otherwise.
>>>     */
>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>>> index cf72699cb2bc..7eca37c8b10c 100644
>>> --- a/include/linux/dma-buf.h
>>> +++ b/include/linux/dma-buf.h
>>> @@ -183,24 +183,19 @@ struct dma_buf_ops {
>>>    	 * @begin_cpu_access:
>>>    	 *
>>>    	 * This is called from dma_buf_begin_cpu_access() and allows the
>>> -	 * exporter to ensure that the memory is actually available for cpu
>>> -	 * access - the exporter might need to allocate or swap-in and pin the
>>> -	 * backing storage. The exporter also needs to ensure that cpu access is
>>> -	 * coherent for the access direction. The direction can be used by the
>>> -	 * exporter to optimize the cache flushing, i.e. access with a different
>>> +	 * exporter to ensure that the memory is actually coherent for cpu
>>> +	 * access. The exporter also needs to ensure that cpu access is coherent
>>> +	 * for the access direction. The direction can be used by the exporter
>>> +	 * to optimize the cache flushing, i.e. access with a different
>>>    	 * direction (read instead of write) might return stale or even bogus
>>>    	 * data (e.g. when the exporter needs to copy the data to temporary
>>>    	 * storage).
>>>    	 *
>>> -	 * This callback is optional.
>>> +	 * Note that this is both called through the DMA_BUF_IOCTL_SYNC IOCTL
>>> +	 * command for userspace mappings established through @mmap, and also
>>> +	 * for kernel mappings established with @vmap.
>>>    	 *
>>> -	 * FIXME: This is both called through the DMA_BUF_IOCTL_SYNC command
>>> -	 * from userspace (where storage shouldn't be pinned to avoid handing
>>> -	 * de-factor mlock rights to userspace) and for the kernel-internal
>>> -	 * users of the various kmap interfaces, where the backing storage must
>>> -	 * be pinned to guarantee that the atomic kmap calls can succeed. Since
>>> -	 * there's no in-kernel users of the kmap interfaces yet this isn't a
>>> -	 * real problem.
>>> +	 * This callback is optional.
>>>    	 *
>>>    	 * Returns:
>>>    	 *
>>> @@ -216,9 +211,7 @@ struct dma_buf_ops {
>>>    	 *
>>>    	 * This is called from dma_buf_end_cpu_access() when the importer is
>>>    	 * done accessing the CPU. The exporter can use this to flush caches and
>>> -	 * unpin any resources pinned in @begin_cpu_access.
>>> -	 * The result of any dma_buf kmap calls after end_cpu_access is
>>> -	 * undefined.
>>> +	 * undo anything else done in @begin_cpu_access.
>>>    	 *
>>>    	 * This callback is optional.
>>>    	 *


* Re: [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges
  2020-12-15 14:18     ` Christian König
@ 2020-12-16 10:29       ` Daniel Vetter
  0 siblings, 0 replies; 12+ messages in thread
From: Daniel Vetter @ 2020-12-16 10:29 UTC (permalink / raw)
  To: Christian König
  Cc: Daniel Vetter, Intel Graphics Development, DRI Development,
	linaro-mm-sig, Thomas Zimmermann, Daniel Vetter, linux-media

On Tue, Dec 15, 2020 at 03:18:49PM +0100, Christian König wrote:
> On 14.12.20 at 17:01, Daniel Vetter wrote:
> > On Mon, Dec 14, 2020 at 11:33:10AM +0100, Christian König wrote:
> > > On 11.12.20 at 16:58, Daniel Vetter wrote:
> > > > Also try to clarify a bit when dma_buf_begin/end_cpu_access should
> > > > be called.
> > > > 
> > > > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > > > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > > > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > > > Cc: "Christian König" <christian.koenig@amd.com>
> > > > Cc: linux-media@vger.kernel.org
> > > > Cc: linaro-mm-sig@lists.linaro.org
> > > > ---
> > > >    drivers/dma-buf/dma-buf.c | 20 ++++++++++++++------
> > > >    include/linux/dma-buf.h   | 25 +++++++++----------------
> > > >    2 files changed, 23 insertions(+), 22 deletions(-)
> > > > 
> > > > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> > > > index e63684d4cd90..a12fdffa130f 100644
> > > > --- a/drivers/dma-buf/dma-buf.c
> > > > +++ b/drivers/dma-buf/dma-buf.c
> > > > @@ -1001,15 +1001,15 @@ EXPORT_SYMBOL_GPL(dma_buf_move_notify);
> > > >     *   vmalloc space might be limited and result in vmap calls failing.
> > > >     *
> > > >     *   Interfaces::
> > > > + *
> > > >     *      void \*dma_buf_vmap(struct dma_buf \*dmabuf)
> > > >     *      void dma_buf_vunmap(struct dma_buf \*dmabuf, void \*vaddr)
> > > >     *
> > > >     *   The vmap call can fail if there is no vmap support in the exporter, or if
> > > > - *   it runs out of vmalloc space. Fallback to kmap should be implemented. Note
> > > > - *   that the dma-buf layer keeps a reference count for all vmap access and
> > > > - *   calls down into the exporter's vmap function only when no vmapping exists,
> > > > - *   and only unmaps it once. Protection against concurrent vmap/vunmap calls is
> > > > - *   provided by taking the dma_buf->lock mutex.
> > > > + *   it runs out of vmalloc space. Note that the dma-buf layer keeps a reference
> > > > + *   count for all vmap access and calls down into the exporter's vmap function
> > > > + *   only when no vmapping exists, and only unmaps it once. Protection against
> > > > + *   concurrent vmap/vunmap calls is provided by taking the &dma_buf.lock mutex.
> > > Who is taking the lock? The caller of the dma_buf_vmap/vunmap() functions,
> > > the functions themselves, or the callback inside the exporter?
> > That's the part I didn't change at all here, just re-laid out the line
> > breaking. I only removed the outdated kmap section here.
> 
> I just wanted to point out that this still isn't described here very well.
> 
> 
> > Should I do another patch and remove this one sentence here (it's kinda
> > pointless and generally we don't muse about implementation details that
> > callers don't care about)?
> 
> Na, works like this for me.
> 
> > I did try and do a cursory review of the dma-buf docs, but this is kinda
> > not meant as an all-out revamp. Just a few things I've noticed while
> > reviewing Thomas' vmap_local stuff.
> 
> 
> Feel free to add an Acked-by: Christian König <christian.koenig@amd.com> to
> the series.

Thanks for taking a look, and yeah I actually want to do a review of all
the dma-buf docs but just haven't found the quiet time for that yet.

Patches pushed to drm-misc-next.
-Daniel

> 
> Christian.
> 
> > -Daniel
> > 
> > > Christian.
> > > 
> > > >     *
> > > >     * - For full compatibility on the importer side with existing userspace
> > > >     *   interfaces, which might already support mmap'ing buffers. This is needed in
> > > > @@ -1098,6 +1098,11 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> > > >     * dma_buf_end_cpu_access(). Only when cpu access is braketed by both calls is
> > > >     * it guaranteed to be coherent with other DMA access.
> > > >     *
> > > > + * This function will also wait for any DMA transactions tracked through
> > > > + * implicit synchronization in &dma_buf.resv. For DMA transactions with explicit
> > > > + * synchronization this function will only ensure cache coherency, callers must
> > > > + * ensure synchronization with such DMA transactions on their own.
> > > > + *
> > > >     * Can return negative error values, returns 0 on success.
> > > >     */
> > > >    int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> > > > @@ -1199,7 +1204,10 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
> > > >     * This call may fail due to lack of virtual mapping address space.
> > > >     * These calls are optional in drivers. The intended use for them
> > > >     * is for mapping objects linear in kernel space for high use objects.
> > > > - * Please attempt to use kmap/kunmap before thinking about these interfaces.
> > > > + *
> > > > + * To ensure coherency users must call dma_buf_begin_cpu_access() and
> > > > + * dma_buf_end_cpu_access() around any cpu access performed through this
> > > > + * mapping.
> > > >     *
> > > >     * Returns 0 on success, or a negative errno code otherwise.
> > > >     */
> > > > diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> > > > index cf72699cb2bc..7eca37c8b10c 100644
> > > > --- a/include/linux/dma-buf.h
> > > > +++ b/include/linux/dma-buf.h
> > > > @@ -183,24 +183,19 @@ struct dma_buf_ops {
> > > >    	 * @begin_cpu_access:
> > > >    	 *
> > > >    	 * This is called from dma_buf_begin_cpu_access() and allows the
> > > > -	 * exporter to ensure that the memory is actually available for cpu
> > > > -	 * access - the exporter might need to allocate or swap-in and pin the
> > > > -	 * backing storage. The exporter also needs to ensure that cpu access is
> > > > -	 * coherent for the access direction. The direction can be used by the
> > > > -	 * exporter to optimize the cache flushing, i.e. access with a different
> > > > +	 * exporter to ensure that the memory is actually coherent for cpu
> > > > +	 * access. The exporter also needs to ensure that cpu access is coherent
> > > > +	 * for the access direction. The direction can be used by the exporter
> > > > +	 * to optimize the cache flushing, i.e. access with a different
> > > >    	 * direction (read instead of write) might return stale or even bogus
> > > >    	 * data (e.g. when the exporter needs to copy the data to temporary
> > > >    	 * storage).
> > > >    	 *
> > > > -	 * This callback is optional.
> > > > +	 * Note that this is both called through the DMA_BUF_IOCTL_SYNC IOCTL
> > > > +	 * command for userspace mappings established through @mmap, and also
> > > > +	 * for kernel mappings established with @vmap.
> > > >    	 *
> > > > -	 * FIXME: This is both called through the DMA_BUF_IOCTL_SYNC command
> > > > -	 * from userspace (where storage shouldn't be pinned to avoid handing
> > > > -	 * de-factor mlock rights to userspace) and for the kernel-internal
> > > > -	 * users of the various kmap interfaces, where the backing storage must
> > > > -	 * be pinned to guarantee that the atomic kmap calls can succeed. Since
> > > > -	 * there's no in-kernel users of the kmap interfaces yet this isn't a
> > > > -	 * real problem.
> > > > +	 * This callback is optional.
> > > >    	 *
> > > >    	 * Returns:
> > > >    	 *
> > > > @@ -216,9 +211,7 @@ struct dma_buf_ops {
> > > >    	 *
> > > >    	 * This is called from dma_buf_end_cpu_access() when the importer is
> > > >    	 * done accessing the CPU. The exporter can use this to flush caches and
> > > > -	 * unpin any resources pinned in @begin_cpu_access.
> > > > -	 * The result of any dma_buf kmap calls after end_cpu_access is
> > > > -	 * undefined.
> > > > +	 * undo anything else done in @begin_cpu_access.
> > > >    	 *
> > > >    	 * This callback is optional.
> > > >    	 *
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
