From: Jonathan Cameron <jic23@kernel.org>
To: Paul Cercueil <paul@crapouillou.net>
Cc: "Lars-Peter Clausen" <lars@metafoo.de>,
	"Vinod Koul" <vkoul@kernel.org>,
	"Michael Hennerich" <Michael.Hennerich@analog.com>,
	"Nuno Sá" <noname.nuno@gmail.com>,
	"Sumit Semwal" <sumit.semwal@linaro.org>,
	"Christian König" <christian.koenig@amd.com>,
	linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org,
	linux-iio@vger.kernel.org, linux-media@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v3 07/11] iio: core: Add new DMABUF interface infrastructure
Date: Sun, 16 Apr 2023 16:04:08 +0100
Message-ID: <20230416160408.5fae01a0@jic23-huawei>
In-Reply-To: <20230403154800.215924-8-paul@crapouillou.net>

On Mon,  3 Apr 2023 17:47:56 +0200
Paul Cercueil <paul@crapouillou.net> wrote:

> Add the necessary infrastructure to the IIO core to support a new
> optional DMABUF based interface.
> 
> With this new interface, DMABUF objects (externally created) can be
> attached to an IIO buffer, and subsequently used for data transfer.
> 
> A userspace application can then use this interface to share DMABUF
> objects between several interfaces, allowing it to transfer data in a
> zero-copy fashion, for instance between IIO and the USB stack.
> 
> The userspace application can also memory-map the DMABUF objects, and
> access the sample data directly. The advantage of doing this vs. the
> read() interface is that it avoids an extra copy of the data between the
> kernel and userspace. This is particularly useful for high-speed
> devices which produce several megabytes or even gigabytes of data per
> second.

I like numbers to support a patch.  Any nice ones to throw in here
as examples of expected rates?

> 
> As part of the interface, 3 new IOCTLs have been added:
> 
> IIO_BUFFER_DMABUF_ATTACH_IOCTL(int fd):
>  Attach the DMABUF object identified by the given file descriptor to the
>  buffer.
> 
> IIO_BUFFER_DMABUF_DETACH_IOCTL(int fd):
>  Detach the DMABUF object identified by the given file descriptor from
>  the buffer. Note that closing the IIO buffer's file descriptor will
>  automatically detach all previously attached DMABUF objects.
> 
> IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *):
>  Request a data transfer to/from the given DMABUF object. Its file
>  descriptor, as well as the transfer size and flags are provided in the
>  "iio_dmabuf" structure.
> 
> These three IOCTLs have to be performed on the IIO buffer's file
> descriptor, obtained using the IIO_BUFFER_GET_FD_IOCTL() ioctl.
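
Might be worth a small usage sketch in the cover letter or the documentation
patch so the expected flow is obvious at a glance.  Very roughly something
like the below (untested, written from the descriptions above and from
memory of the uapi header, so the exact calling conventions and field names
may well be off, and error handling is omitted):

	/* dev_fd is the open /dev/iio:deviceX; dmabuf_fd comes from some
	 * exporter, e.g. the USB stack. */
	int buf_fd = 0;			/* in: buffer index, out: new fd */
	struct iio_dmabuf req = { 0 };

	ioctl(dev_fd, IIO_BUFFER_GET_FD_IOCTL, &buf_fd);
	ioctl(buf_fd, IIO_BUFFER_DMABUF_ATTACH_IOCTL, &dmabuf_fd);

	req.fd = dmabuf_fd;
	req.bytes_used = transfer_size;	/* flags left at 0 */
	ioctl(buf_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL, &req);

	/* Wait for the transfer to complete (e.g. poll() on the dmabuf fd),
	 * then re-enqueue or detach. */
	ioctl(buf_fd, IIO_BUFFER_DMABUF_DETACH_IOCTL, &dmabuf_fd);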
> 
> Signed-off-by: Paul Cercueil <paul@crapouillou.net>
> 

Trivial comments from me.  I don't (yet) understand dmabuf well enough
to know whether that part is right or not.  I'm not sure I will ever find
the time, so I'm relying on those who are more familiar with it to tell me
whether that code is correct.

Thanks,

Jonathan



>  static int iio_buffer_chrdev_release(struct inode *inode, struct file *filep)
>  {
>  	struct iio_dev_buffer_pair *ib = filep->private_data;
>  	struct iio_dev *indio_dev = ib->indio_dev;
>  	struct iio_buffer *buffer = ib->buffer;
> +	struct iio_dmabuf_priv *priv, *tmp;
>  
>  	wake_up(&buffer->pollq);
>  
> +	/* Close all attached DMABUFs */
> +	list_for_each_entry_safe(priv, tmp, &buffer->dmabufs, entry) {
> +		list_del_init(&priv->entry);
> +		iio_buffer_dmabuf_put(priv->attach);
> +	}
> +
> +	/* TODO: Is it safe? Can "ib" be freed here? */

No idea :)  However, that needs resolving before we apply this.

> +	if (!list_empty(&buffer->dmabufs))
> +		dev_warn(&indio_dev->dev, "Buffer FD closed with active transfers\n");
> +
>  	kfree(ib);
>  	clear_bit(IIO_BUSY_BIT_POS, &buffer->flags);
>  	iio_device_put(indio_dev);
> @@ -1515,11 +1591,337 @@ static int iio_buffer_chrdev_release(struct inode *inode, struct file *filep)
>  	return 0;
>  }
>  




> +static int iio_buffer_enqueue_dmabuf(struct iio_dev_buffer_pair *ib,
> +				     struct iio_dmabuf __user *iio_dmabuf_req,
> +				     bool nonblock)
> +{

...

> +
> +	ret = buffer->access->enqueue_dmabuf(buffer, priv->block, sgt,
> +					     iio_dmabuf.bytes_used, cyclic);
> +	if (ret)

Hmm. Is there an easy way to avoid a function with multiple error
handling paths like we have here?  Perhaps drag the extra stuff from the
dmabuf_done() function into this if (ret), then goto err_fence_put?
I'm not sure whether that would make this even harder to read, however.
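
Something like this is what I had in mind, i.e. signal the fence with the
error here and fall through the existing labels rather than calling
iio_buffer_signal_dmabuf_done().  Completely untested, and I haven't
convinced myself the fence / attachment reference counts still balance, so
treat it purely as an illustration of the shape:

	ret = buffer->access->enqueue_dmabuf(buffer, priv->block, sgt,
					     iio_dmabuf.bytes_used, cyclic);
	if (ret) {
		/*
		 * Mirror what iio_buffer_signal_dmabuf_done() does: record
		 * the error, signal the fence, then let the error labels
		 * below unmap the attachment and drop the references.
		 */
		fence->base.error = ret;
		dma_fence_signal(&fence->base);
		goto err_unmap_attachment;
	}

	dma_buf_put(dmabuf);

	return 0;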

> +		iio_buffer_signal_dmabuf_done(attach, ret);
> +
> +	dma_buf_put(dmabuf);
> +
> +	return ret;
> +
> +err_resv_unlock:
> +	dma_resv_unlock(dmabuf->resv);
> +err_fence_put:
> +	dma_fence_put(&fence->base);
> +err_unmap_attachment:
> +	dma_buf_unmap_attachment(attach, sgt, dir);
> +err_attachment_put:
> +	iio_buffer_dmabuf_put(attach);
> +err_dmabuf_put:
> +	dma_buf_put(dmabuf);
> +
> +	return ret;
> +}
> +
> +void iio_buffer_signal_dmabuf_done(struct dma_buf_attachment *attach, int ret)
> +{
> +	struct iio_dmabuf_priv *priv = attach->importer_priv;
> +	struct iio_dma_fence *fence = priv->fence;
> +	enum dma_data_direction dir = fence->dir;
> +	struct sg_table *sgt = fence->sgt;
> +
> +	dma_fence_get(&fence->base);
> +	fence->base.error = ret;
> +	dma_fence_signal(&fence->base);
> +	dma_fence_put(&fence->base);
> +
> +	dma_buf_unmap_attachment(attach, sgt, dir);
> +	iio_buffer_dmabuf_put(attach);
> +}
> +EXPORT_SYMBOL_GPL(iio_buffer_signal_dmabuf_done);

...

> diff --git a/include/linux/iio/buffer_impl.h b/include/linux/iio/buffer_impl.h
> index 89c3fd7c29ca..a8a490091277 100644
> --- a/include/linux/iio/buffer_impl.h
> +++ b/include/linux/iio/buffer_impl.h
> @@ -9,8 +9,11 @@
>  #include <uapi/linux/iio/buffer.h>
>  #include <linux/iio/buffer.h>
>  
> +struct dma_buf_attachment;
>  struct iio_dev;
> +struct iio_dma_buffer_block;
>  struct iio_buffer;
> +struct sg_table;
>  
>  /**
>   * INDIO_BUFFER_FLAG_FIXED_WATERMARK - Watermark level of the buffer can not be
> @@ -39,6 +42,9 @@ struct iio_buffer;
>   *                      device stops sampling. Calles are balanced with @enable.
>   * @release:		called when the last reference to the buffer is dropped,
>   *			should free all resources allocated by the buffer
> + * @alloc_dmabuf:	called from userspace via ioctl to allocate one DMABUF.

Looks like you missed updating the docs.
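
i.e. I assume the entry above was meant to be @attach_dmabuf, and
@detach_dmabuf could do with an entry as well.  Something along these lines
perhaps (wording entirely up to you):

 * @attach_dmabuf:	called from userspace via ioctl to attach one external
 *			DMABUF to this buffer.  Returns the block that will
 *			back the attachment.
 * @detach_dmabuf:	called from userspace via ioctl to detach one
 *			previously attached DMABUF from this buffer.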

> + * @enqueue_dmabuf:	called from userspace via ioctl to queue this DMABUF
> + *			object to this buffer. Requires a valid DMABUF fd.
>   * @modes:		Supported operating modes by this buffer type
>   * @flags:		A bitmask combination of INDIO_BUFFER_FLAG_*
>   *
> @@ -68,6 +74,14 @@ struct iio_buffer_access_funcs {
>  
>  	void (*release)(struct iio_buffer *buffer);
>  
> +	struct iio_dma_buffer_block * (*attach_dmabuf)(struct iio_buffer *buffer,
> +						       struct dma_buf_attachment *attach);
> +	void (*detach_dmabuf)(struct iio_buffer *buffer,
> +			      struct iio_dma_buffer_block *block);
> +	int (*enqueue_dmabuf)(struct iio_buffer *buffer,
> +			      struct iio_dma_buffer_block *block,
> +			      struct sg_table *sgt, size_t size, bool cyclic);
> +
>  	unsigned int modes;
>  	unsigned int flags;
>  };

