* [PATCH 0/6] Add audio support in v4l2 framework
@ 2023-06-29  1:37 Shengjiu Wang
  2023-06-29  1:37 ` [PATCH 1/6] media: v4l2: Add audio capture and output support Shengjiu Wang
                   ` (5 more replies)
  0 siblings, 6 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-06-29  1:37 UTC (permalink / raw)
  To: tfiga, m.szyprowski, mchehab, linux-media, linux-kernel,
	shengjiu.wang, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	broonie, perex, tiwai, alsa-devel, linuxppc-dev

Audio signal processing, like video, has a requirement for
memory-to-memory operation.

This series adds that support to the v4l2 framework by defining
the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and
V4L2_BUF_TYPE_AUDIO_OUTPUT, and a new format, v4l2_audio_format,
for the audio use case.

The created audio device is named "/dev/audioX".

It also adds memory-to-memory support for two kinds of i.MX ASRC
modules.
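
For reference, a minimal user-space sketch of how the new interface is
expected to be negotiated (assumptions: the patched videodev2.h from this
series, a node /dev/audio0 created by the driver, and the .format field
carrying an ALSA SNDRV_PCM_FORMAT_* value as the fsl driver interprets it;
error handling omitted):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <sound/asound.h>	/* SNDRV_PCM_FORMAT_* */

/* Sketch only: configure both queues of the rate converter node. */
static int setup_rate_converter(int in_rate, int out_rate)
{
	struct v4l2_format fmt;
	int fd = open("/dev/audio0", O_RDWR);	/* node name is an assumption */

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;		/* data fed to the converter */
	fmt.fmt.audio.rate = in_rate;
	fmt.fmt.audio.channels = 2;
	fmt.fmt.audio.format = SNDRV_PCM_FORMAT_S16_LE;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;		/* converted data read back */
	fmt.fmt.audio.rate = out_rate;
	fmt.fmt.audio.channels = 2;
	fmt.fmt.audio.format = SNDRV_PCM_FORMAT_S16_LE;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	return fd;
}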


Shengjiu Wang (6):
  media: v4l2: Add audio capture and output support
  ASoC: fsl_asrc: define functions for memory to memory usage
  ASoC: fsl_easrc: define functions for memory to memory usage
  ASoC: fsl_asrc: Add memory to memory driver
  ASoC: fsl_asrc: enable memory to memory function
  ASoC: fsl_easrc: enable memory to memory function

 .../media/common/videobuf2/videobuf2-v4l2.c   |   4 +
 drivers/media/v4l2-core/v4l2-dev.c            |  17 +
 drivers/media/v4l2-core/v4l2-ioctl.c          |  52 ++
 include/media/v4l2-dev.h                      |   2 +
 include/media/v4l2-ioctl.h                    |  34 +
 include/uapi/linux/videodev2.h                |  19 +
 sound/soc/fsl/Kconfig                         |  13 +
 sound/soc/fsl/Makefile                        |   2 +
 sound/soc/fsl/fsl_asrc.c                      | 175 +++-
 sound/soc/fsl/fsl_asrc.h                      |   2 +
 sound/soc/fsl/fsl_asrc_common.h               |  54 ++
 sound/soc/fsl/fsl_asrc_m2m.c                  | 878 ++++++++++++++++++
 sound/soc/fsl/fsl_asrc_m2m.h                  |  48 +
 sound/soc/fsl/fsl_easrc.c                     | 255 ++++-
 sound/soc/fsl/fsl_easrc.h                     |   6 +
 15 files changed, 1557 insertions(+), 4 deletions(-)
 create mode 100644 sound/soc/fsl/fsl_asrc_m2m.c
 create mode 100644 sound/soc/fsl/fsl_asrc_m2m.h

-- 
2.34.1



* [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-06-29  1:37 [PATCH 0/6] Add audio support in v4l2 framework Shengjiu Wang
@ 2023-06-29  1:37 ` Shengjiu Wang
  2023-06-30 10:05     ` Sakari Ailus
  2023-06-29  1:37 ` [PATCH 2/6] ASoC: fsl_asrc: define functions for memory to memory usage Shengjiu Wang
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 41+ messages in thread
From: Shengjiu Wang @ 2023-06-29  1:37 UTC (permalink / raw)
  To: tfiga, m.szyprowski, mchehab, linux-media, linux-kernel,
	shengjiu.wang, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	broonie, perex, tiwai, alsa-devel, linuxppc-dev

Audio signal processing, like video, has a requirement for
memory-to-memory operation.

This patch adds that support to the v4l2 framework by defining
the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and
V4L2_BUF_TYPE_AUDIO_OUTPUT, and a new format, v4l2_audio_format,
for the audio use case.

The created audio device is named "/dev/audioX".
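
For context, a rough sketch of how a driver could register such a node
with the new VFL_TYPE_AUDIO type; this is not code from the series, the
fops/ioctl_ops tables are the driver's own (the ioctl table would fill in
the new vidioc_{g,s,try}_fmt_audio_{cap,out} hooks), and the v4l2_device
setup is omitted:

#include <linux/errno.h>
#include <linux/videodev2.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-device.h>

/* Sketch only: expose an audio device node through the V4L2 core. */
static int my_register_audio_node(struct v4l2_device *v4l2_dev,
				  const struct v4l2_file_operations *fops,
				  const struct v4l2_ioctl_ops *ioctl_ops)
{
	struct video_device *vdev = video_device_alloc();

	if (!vdev)
		return -ENOMEM;

	vdev->v4l2_dev = v4l2_dev;
	vdev->fops = fops;
	vdev->ioctl_ops = ioctl_ops;
	vdev->vfl_dir = VFL_DIR_M2M;	/* converter handles both queues */
	vdev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_AUDIO;
	vdev->release = video_device_release;

	/* VFL_TYPE_AUDIO makes the core pick the "audio" name base, /dev/audioX */
	return video_register_device(vdev, VFL_TYPE_AUDIO, -1);
}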

Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 .../media/common/videobuf2/videobuf2-v4l2.c   |  4 ++
 drivers/media/v4l2-core/v4l2-dev.c            | 17 ++++++
 drivers/media/v4l2-core/v4l2-ioctl.c          | 52 +++++++++++++++++++
 include/media/v4l2-dev.h                      |  2 +
 include/media/v4l2-ioctl.h                    | 34 ++++++++++++
 include/uapi/linux/videodev2.h                | 19 +++++++
 6 files changed, 128 insertions(+)

diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
index c7a54d82a55e..12f2be2773a2 100644
--- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
+++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
@@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
 	case V4L2_BUF_TYPE_META_OUTPUT:
 		requested_sizes[0] = f->fmt.meta.buffersize;
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		requested_sizes[0] = f->fmt.audio.buffersize;
+		break;
 	default:
 		return -EINVAL;
 	}
diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
index f81279492682..67484f4c6eaf 100644
--- a/drivers/media/v4l2-core/v4l2-dev.c
+++ b/drivers/media/v4l2-core/v4l2-dev.c
@@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
 	bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
 	bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
 		       (vdev->device_caps & meta_caps);
+	bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
 	bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
 	bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
@@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct video_device *vdev)
 		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out);
 		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out);
 	}
+	if (is_audio && is_rx) {
+		/* audio capture specific ioctls */
+		SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap);
+		SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
+		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
+		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap);
+	} else if (is_audio && is_tx) {
+		/* audio output specific ioctls */
+		SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out);
+		SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
+		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
+		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out);
+	}
 	if (is_vbi) {
 		/* vbi specific ioctls */
 		if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
@@ -927,6 +941,9 @@ int __video_register_device(struct video_device *vdev,
 	case VFL_TYPE_TOUCH:
 		name_base = "v4l-touch";
 		break;
+	case VFL_TYPE_AUDIO:
+		name_base = "audio";
+		break;
 	default:
 		pr_err("%s called with unknown type: %d\n",
 		       __func__, type);
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index a858acea6547..26bc4b0d8ef0 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -188,6 +188,8 @@ const char *v4l2_type_names[] = {
 	[V4L2_BUF_TYPE_SDR_OUTPUT]         = "sdr-out",
 	[V4L2_BUF_TYPE_META_CAPTURE]       = "meta-cap",
 	[V4L2_BUF_TYPE_META_OUTPUT]	   = "meta-out",
+	[V4L2_BUF_TYPE_AUDIO_CAPTURE]      = "audio-cap",
+	[V4L2_BUF_TYPE_AUDIO_OUTPUT]	   = "audio-out",
 };
 EXPORT_SYMBOL(v4l2_type_names);
 
@@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only)
 	const struct v4l2_sliced_vbi_format *sliced;
 	const struct v4l2_window *win;
 	const struct v4l2_meta_format *meta;
+	const struct v4l2_audio_format *audio;
 	u32 pixelformat;
 	u32 planes;
 	unsigned i;
@@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool write_only)
 		pr_cont(", dataformat=%p4cc, buffersize=%u\n",
 			&pixelformat, meta->buffersize);
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		audio = &p->fmt.audio;
+		pr_cont(", rate=%u, format=%u, channels=%u, buffersize=%u\n",
+			audio->rate, audio->format, audio->channels, audio->buffersize);
+		break;
 	}
 }
 
@@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
 	bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
 	bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO &&
 		       (vfd->device_caps & meta_caps);
+	bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
 	bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
 
@@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
 		if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out)
 			return 0;
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
+			return 0;
+		break;
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
+			return 0;
+		break;
 	default:
 		break;
 	}
@@ -1592,6 +1610,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg);
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
+			break;
+		ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
+		break;
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (unlikely(!ops->vidioc_enum_fmt_audio_out))
+			break;
+		ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
+		break;
 	}
 	if (ret == 0)
 		v4l_fill_fmtdesc(p);
@@ -1668,6 +1696,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
 		return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
 	case V4L2_BUF_TYPE_META_OUTPUT:
 		return ops->vidioc_g_fmt_meta_out(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		return ops->vidioc_g_fmt_audio_out(file, fh, arg);
 	}
 	return -EINVAL;
 }
@@ -1779,6 +1811,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		memset_after(p, 0, fmt.meta);
 		return ops->vidioc_s_fmt_meta_out(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (unlikely(!ops->vidioc_s_fmt_audio_cap))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (unlikely(!ops->vidioc_s_fmt_audio_out))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_s_fmt_audio_out(file, fh, arg);
 	}
 	return -EINVAL;
 }
@@ -1887,6 +1929,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		memset_after(p, 0, fmt.meta);
 		return ops->vidioc_try_fmt_meta_out(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (unlikely(!ops->vidioc_try_fmt_audio_cap))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (unlikely(!ops->vidioc_try_fmt_audio_out))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_try_fmt_audio_out(file, fh, arg);
 	}
 	return -EINVAL;
 }
diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
index e0a13505f88d..0924e6d1dab1 100644
--- a/include/media/v4l2-dev.h
+++ b/include/media/v4l2-dev.h
@@ -30,6 +30,7 @@
  * @VFL_TYPE_SUBDEV:	for V4L2 subdevices
  * @VFL_TYPE_SDR:	for Software Defined Radio tuners
  * @VFL_TYPE_TOUCH:	for touch sensors
+ * @VFL_TYPE_AUDIO:	for audio input/output devices
  * @VFL_TYPE_MAX:	number of VFL types, must always be last in the enum
  */
 enum vfl_devnode_type {
@@ -39,6 +40,7 @@ enum vfl_devnode_type {
 	VFL_TYPE_SUBDEV,
 	VFL_TYPE_SDR,
 	VFL_TYPE_TOUCH,
+	VFL_TYPE_AUDIO,
 	VFL_TYPE_MAX /* Shall be the last one */
 };
 
diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
index edb733f21604..f840cf740ce1 100644
--- a/include/media/v4l2-ioctl.h
+++ b/include/media/v4l2-ioctl.h
@@ -45,6 +45,12 @@ struct v4l2_fh;
  * @vidioc_enum_fmt_meta_out: pointer to the function that implements
  *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
  *	for metadata output
+ * @vidioc_enum_fmt_audio_cap: pointer to the function that implements
+ *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
+ *	for audio capture
+ * @vidioc_enum_fmt_audio_out: pointer to the function that implements
+ *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
+ *	for audio output
  * @vidioc_g_fmt_vid_cap: pointer to the function that implements
  *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
  *	in single plane mode
@@ -79,6 +85,10 @@ struct v4l2_fh;
  *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
  * @vidioc_g_fmt_meta_out: pointer to the function that implements
  *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
+ * @vidioc_g_fmt_audio_cap: pointer to the function that implements
+ *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
+ * @vidioc_g_fmt_audio_out: pointer to the function that implements
+ *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
  * @vidioc_s_fmt_vid_cap: pointer to the function that implements
  *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
  *	in single plane mode
@@ -113,6 +123,10 @@ struct v4l2_fh;
  *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
  * @vidioc_s_fmt_meta_out: pointer to the function that implements
  *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
+ * @vidioc_s_fmt_audio_cap: pointer to the function that implements
+ *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
+ * @vidioc_s_fmt_audio_out: pointer to the function that implements
+ *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
  * @vidioc_try_fmt_vid_cap: pointer to the function that implements
  *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
  *	in single plane mode
@@ -149,6 +163,10 @@ struct v4l2_fh;
  *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
  * @vidioc_try_fmt_meta_out: pointer to the function that implements
  *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output
+ * @vidioc_try_fmt_audio_cap: pointer to the function that implements
+ *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
+ * @vidioc_try_fmt_audio_out: pointer to the function that implements
+ *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
  * @vidioc_reqbufs: pointer to the function that implements
  *	:ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
  * @vidioc_querybuf: pointer to the function that implements
@@ -315,6 +333,10 @@ struct v4l2_ioctl_ops {
 					struct v4l2_fmtdesc *f);
 	int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh,
 					struct v4l2_fmtdesc *f);
+	int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
+					 struct v4l2_fmtdesc *f);
+	int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
+					 struct v4l2_fmtdesc *f);
 
 	/* VIDIOC_G_FMT handlers */
 	int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
@@ -345,6 +367,10 @@ struct v4l2_ioctl_ops {
 				     struct v4l2_format *f);
 	int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh,
 				     struct v4l2_format *f);
+	int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
+				      struct v4l2_format *f);
+	int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
+				      struct v4l2_format *f);
 
 	/* VIDIOC_S_FMT handlers */
 	int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
@@ -375,6 +401,10 @@ struct v4l2_ioctl_ops {
 				     struct v4l2_format *f);
 	int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh,
 				     struct v4l2_format *f);
+	int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
+				      struct v4l2_format *f);
+	int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
+				      struct v4l2_format *f);
 
 	/* VIDIOC_TRY_FMT handlers */
 	int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
@@ -405,6 +435,10 @@ struct v4l2_ioctl_ops {
 				       struct v4l2_format *f);
 	int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh,
 				       struct v4l2_format *f);
+	int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
+					struct v4l2_format *f);
+	int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
+					struct v4l2_format *f);
 
 	/* Buffer handlers */
 	int (*vidioc_reqbufs)(struct file *file, void *fh,
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index aee75eb9e686..a7af28f4c8c3 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -153,6 +153,8 @@ enum v4l2_buf_type {
 	V4L2_BUF_TYPE_SDR_OUTPUT           = 12,
 	V4L2_BUF_TYPE_META_CAPTURE         = 13,
 	V4L2_BUF_TYPE_META_OUTPUT	   = 14,
+	V4L2_BUF_TYPE_AUDIO_CAPTURE        = 15,
+	V4L2_BUF_TYPE_AUDIO_OUTPUT         = 16,
 	/* Deprecated, do not use */
 	V4L2_BUF_TYPE_PRIVATE              = 0x80,
 };
@@ -169,6 +171,7 @@ enum v4l2_buf_type {
 	 || (type) == V4L2_BUF_TYPE_VBI_OUTPUT			\
 	 || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT		\
 	 || (type) == V4L2_BUF_TYPE_SDR_OUTPUT			\
+	 || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT		\
 	 || (type) == V4L2_BUF_TYPE_META_OUTPUT)
 
 #define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
@@ -2404,6 +2407,20 @@ struct v4l2_meta_format {
 	__u32				buffersize;
 } __attribute__ ((packed));
 
+/**
+ * struct v4l2_audio_format - audio data format definition
+ * @rate:		sample rate
+ * @format:		sample format
+ * @channels:		channel numbers
+ * @buffersize:		maximum size in bytes required for data
+ */
+struct v4l2_audio_format {
+	__u32				rate;
+	__u32				format;
+	__u32				channels;
+	__u32				buffersize;
+} __attribute__ ((packed));
+
 /**
  * struct v4l2_format - stream data format
  * @type:	enum v4l2_buf_type; type of the data stream
@@ -2412,6 +2429,7 @@ struct v4l2_meta_format {
  * @win:	definition of an overlaid image
  * @vbi:	raw VBI capture or output parameters
  * @sliced:	sliced VBI capture or output parameters
+ * @audio:	definition of an audio format
  * @raw_data:	placeholder for future extensions and custom formats
  * @fmt:	union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
  *		and @raw_data
@@ -2426,6 +2444,7 @@ struct v4l2_format {
 		struct v4l2_sliced_vbi_format	sliced;  /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
 		struct v4l2_sdr_format		sdr;     /* V4L2_BUF_TYPE_SDR_CAPTURE */
 		struct v4l2_meta_format		meta;    /* V4L2_BUF_TYPE_META_CAPTURE */
+		struct v4l2_audio_format	audio;   /* V4L2_BUF_TYPE_AUDIO_CAPTURE */
 		__u8	raw_data[200];                   /* user-defined */
 	} fmt;
 };
-- 
2.34.1



* [PATCH 2/6] ASoC: fsl_asrc: define functions for memory to memory usage
  2023-06-29  1:37 [PATCH 0/6] Add audio support in v4l2 framework Shengjiu Wang
  2023-06-29  1:37 ` [PATCH 1/6] media: v4l2: Add audio capture and output support Shengjiu Wang
@ 2023-06-29  1:37 ` Shengjiu Wang
  2023-06-29  1:37 ` [PATCH 3/6] ASoC: fsl_easrc: " Shengjiu Wang
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-06-29  1:37 UTC (permalink / raw)
  To: tfiga, m.szyprowski, mchehab, linux-media, linux-kernel,
	shengjiu.wang, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	broonie, perex, tiwai, alsa-devel, linuxppc-dev

The ASRC can be used in the memory-to-memory case, so define
several functions for m2m usage (a rough call-order sketch follows
the list below):

m2m_start_part_one: first part of the start steps
m2m_start_part_two: second part of the start steps
m2m_stop_part_one: first part of the stop steps
m2m_stop_part_two: second part of the stop steps
m2m_check_format: check whether a format is supported
m2m_calc_out_len: calculate the output length from the input length
m2m_get_maxburst: get the DMA burst size
m2m_pair_suspend: suspend function of the pair
m2m_pair_resume: resume function of the pair
get_output_fifo_size: get the remaining data size in the output FIFO
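
A sketch of the call order a memory-to-memory user of these hooks is
expected to follow (loosely based on how the m2m driver later in this
series uses them; DMA setup, completion waiting and locking are only
indicated by comments, and the function name is illustrative):

#include "fsl_asrc_common.h"

/* Sketch only: one conversion pass through the m2m hooks. */
static int asrc_m2m_convert_once(struct fsl_asrc *asrc,
				 struct fsl_asrc_pair *pair)
{
	int ret;

	ret = asrc->request_pair(pair->channels, pair);
	if (ret)
		return ret;

	ret = asrc->m2m_start_part_one(pair);	/* configure and start the pair */
	if (ret)
		goto release;

	/* submit the input and output DMA transfers here */

	ret = asrc->m2m_start_part_two(pair);	/* update watermarks, raise DMA requests */

	/* wait for DMA completion, then drain what get_output_fifo_size() reports */

	if (asrc->m2m_stop_part_two)
		asrc->m2m_stop_part_two(pair);
	if (asrc->m2m_stop_part_one)
		asrc->m2m_stop_part_one(pair);
release:
	asrc->release_pair(pair);
	return ret;
}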

Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 sound/soc/fsl/fsl_asrc.c        | 138 ++++++++++++++++++++++++++++++++
 sound/soc/fsl/fsl_asrc.h        |   2 +
 sound/soc/fsl/fsl_asrc_common.h |  54 +++++++++++++
 3 files changed, 194 insertions(+)

diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
index adb8a59de2bd..30190ccb74e7 100644
--- a/sound/soc/fsl/fsl_asrc.c
+++ b/sound/soc/fsl/fsl_asrc.c
@@ -1063,6 +1063,135 @@ static int fsl_asrc_get_fifo_addr(u8 dir, enum asrc_pair_index index)
 	return REG_ASRDx(dir, index);
 }
 
+/* Get sample numbers in FIFO */
+static unsigned int fsl_asrc_get_output_fifo_size(struct fsl_asrc_pair *pair)
+{
+	struct fsl_asrc *asrc = pair->asrc;
+	enum asrc_pair_index index = pair->index;
+	u32 val;
+
+	regmap_read(asrc->regmap, REG_ASRFST(index), &val);
+
+	val &= ASRFSTi_OUTPUT_FIFO_MASK;
+
+	return val >> ASRFSTi_OUTPUT_FIFO_SHIFT;
+}
+
+static int fsl_asrc_m2m_start_part_one(struct fsl_asrc_pair *pair)
+{
+	struct fsl_asrc_pair_priv *pair_priv = pair->private;
+	struct fsl_asrc *asrc = pair->asrc;
+	struct device *dev = &asrc->pdev->dev;
+	struct asrc_config config;
+	int ret;
+
+	/* fill config */
+	config.pair = pair->index;
+	config.channel_num = pair->channels;
+	config.input_sample_rate = pair->rate[IN];
+	config.output_sample_rate = pair->rate[OUT];
+	config.input_format = pair->sample_format[IN];
+	config.output_format = pair->sample_format[OUT];
+	config.inclk = INCLK_NONE;
+	config.outclk = OUTCLK_ASRCK1_CLK;
+
+	pair_priv->config = &config;
+	ret = fsl_asrc_config_pair(pair, true);
+	if (ret) {
+		dev_err(dev, "failed to config pair: %d\n", ret);
+		return ret;
+	}
+
+	fsl_asrc_start_pair(pair);
+
+	return 0;
+}
+
+static int fsl_asrc_m2m_start_part_two(struct fsl_asrc_pair *pair)
+{
+	/*
+	 * Clear DMA request during the stall state of ASRC:
+	 * During STALL state, the remaining in input fifo would never be
+	 * smaller than the input threshold while the output fifo would not
+	 * be bigger than output one. Thus the DMA request would be cleared.
+	 */
+	fsl_asrc_set_watermarks(pair, ASRC_FIFO_THRESHOLD_MIN,
+				ASRC_FIFO_THRESHOLD_MAX);
+
+	/* Update the real input threshold to raise DMA request */
+	fsl_asrc_set_watermarks(pair, ASRC_M2M_INPUTFIFO_WML,
+				ASRC_M2M_OUTPUTFIFO_WML);
+
+	return 0;
+}
+
+static int fsl_asrc_m2m_stop_part_one(struct fsl_asrc_pair *pair)
+{
+	fsl_asrc_stop_pair(pair);
+
+	return 0;
+}
+
+static int fsl_asrc_m2m_check_format(u8 dir, u32 rate, u32 channels, u32 format)
+{
+	u64 support_format = FSL_ASRC_FORMATS;
+
+	if (channels < 1 || channels > 10)
+		return -EINVAL;
+
+	if (rate < 5512 || rate > 192000)
+		return -EINVAL;
+
+	if (dir == IN)
+		support_format |= SNDRV_PCM_FMTBIT_S8;
+
+	if (!(1 << format & support_format))
+		return -EINVAL;
+
+	return 0;
+}
+
+/* calculate capture data length according to output data length and sample rate */
+static int fsl_asrc_m2m_calc_out_len(struct fsl_asrc_pair *pair, int input_buffer_length)
+{
+	unsigned int in_width, out_width;
+	unsigned int channels = pair->channels;
+	unsigned int in_samples, out_samples;
+	unsigned int out_length;
+
+	in_width = snd_pcm_format_physical_width(pair->sample_format[IN]) / 8;
+	out_width = snd_pcm_format_physical_width(pair->sample_format[OUT]) / 8;
+
+	in_samples = input_buffer_length / in_width / channels;
+	out_samples = pair->rate[OUT] * in_samples / pair->rate[IN];
+	out_length = (out_samples - ASRC_OUTPUT_LAST_SAMPLE) * out_width * channels;
+
+	return out_length;
+}
+
+static int fsl_asrc_m2m_get_maxburst(u8 dir, struct fsl_asrc_pair *pair)
+{
+	struct fsl_asrc *asrc = pair->asrc;
+	struct fsl_asrc_priv *asrc_priv = asrc->private;
+	int wml = (dir == IN) ? ASRC_M2M_INPUTFIFO_WML : ASRC_M2M_OUTPUTFIFO_WML;
+
+	if (!asrc_priv->soc->use_edma)
+		return wml * pair->channels;
+	else
+		return 1;
+}
+
+static int fsl_asrc_m2m_pair_resume(struct fsl_asrc_pair *pair)
+{
+	struct fsl_asrc *asrc = pair->asrc;
+	int i;
+
+	for (i = 0; i < pair->channels * 4; i++)
+		regmap_write(asrc->regmap, REG_ASRDI(pair->index), 0);
+
+	return 0;
+}
+
 static int fsl_asrc_runtime_resume(struct device *dev);
 static int fsl_asrc_runtime_suspend(struct device *dev);
 
@@ -1147,6 +1276,15 @@ static int fsl_asrc_probe(struct platform_device *pdev)
 	asrc->get_fifo_addr = fsl_asrc_get_fifo_addr;
 	asrc->pair_priv_size = sizeof(struct fsl_asrc_pair_priv);
 
+	asrc->m2m_start_part_one = fsl_asrc_m2m_start_part_one;
+	asrc->m2m_start_part_two = fsl_asrc_m2m_start_part_two;
+	asrc->m2m_stop_part_one = fsl_asrc_m2m_stop_part_one;
+	asrc->get_output_fifo_size = fsl_asrc_get_output_fifo_size;
+	asrc->m2m_check_format = fsl_asrc_m2m_check_format;
+	asrc->m2m_calc_out_len = fsl_asrc_m2m_calc_out_len;
+	asrc->m2m_get_maxburst = fsl_asrc_m2m_get_maxburst;
+	asrc->m2m_pair_resume = fsl_asrc_m2m_pair_resume;
+
 	if (of_device_is_compatible(np, "fsl,imx35-asrc")) {
 		asrc_priv->clk_map[IN] = input_clk_map_imx35;
 		asrc_priv->clk_map[OUT] = output_clk_map_imx35;
diff --git a/sound/soc/fsl/fsl_asrc.h b/sound/soc/fsl/fsl_asrc.h
index 86d2422ad606..1c492eb237f5 100644
--- a/sound/soc/fsl/fsl_asrc.h
+++ b/sound/soc/fsl/fsl_asrc.h
@@ -12,6 +12,8 @@
 
 #include  "fsl_asrc_common.h"
 
+#define ASRC_M2M_INPUTFIFO_WML		0x4
+#define ASRC_M2M_OUTPUTFIFO_WML		0x2
 #define ASRC_DMA_BUFFER_NUM		2
 #define ASRC_INPUTFIFO_THRESHOLD	32
 #define ASRC_OUTPUTFIFO_THRESHOLD	32
diff --git a/sound/soc/fsl/fsl_asrc_common.h b/sound/soc/fsl/fsl_asrc_common.h
index 7e1c13ca37f1..9294f378f0cc 100644
--- a/sound/soc/fsl/fsl_asrc_common.h
+++ b/sound/soc/fsl/fsl_asrc_common.h
@@ -4,6 +4,10 @@
  *
  */
 
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-fh.h>
+
 #ifndef _FSL_ASRC_COMMON_H
 #define _FSL_ASRC_COMMON_H
 
@@ -34,6 +38,13 @@ enum asrc_pair_index {
  * @pos: hardware pointer position
  * @req_dma_chan: flag to release dev_to_dev chan
  * @private: pair private area
+ * @complete: dma task complete
+ * @fh: v4l2 file handler
+ * @ctrl_handler: v4l2 control handler
+ * @sample_format: format of m2m
+ * @rate: rate of m2m
+ * @buf_len: buffer length of m2m
+ * @req_pair: flag for request pair
  */
 struct fsl_asrc_pair {
 	struct fsl_asrc *asrc;
@@ -49,6 +60,15 @@ struct fsl_asrc_pair {
 	bool req_dma_chan;
 
 	void *private;
+
+	/* used for m2m */
+	struct completion complete[2];
+	struct v4l2_fh fh;
+	struct v4l2_ctrl_handler ctrl_handler;
+	snd_pcm_format_t sample_format[2];
+	unsigned int rate[2];
+	unsigned int buf_len[2];
+	bool req_pair;
 };
 
 /**
@@ -63,6 +83,10 @@ struct fsl_asrc_pair {
  * @ipg_clk: clock source to drive peripheral
  * @spba_clk: SPBA clock (optional, depending on SoC design)
  * @lock: spin lock for resource protection
+ * @v4l2_dev: v4l2 device structure
+ * @m2m_dev: pointer to v4l2_m2m_dev
+ * @dec_vdev: pointer to video_device
+ * @mlock: v4l2 ioctls serialization
  * @pair: pair pointers
  * @channel_avail: non-occupied channel numbers
  * @asrc_rate: default sample rate for ASoC Back-Ends
@@ -72,6 +96,17 @@ struct fsl_asrc_pair {
  * @request_pair: function pointer
  * @release_pair: function pointer
  * @get_fifo_addr: function pointer
+ * @m2m_start_part_one: function pointer
+ * @m2m_start_part_two: function pointer
+ * @m2m_stop_part_one: function pointer
+ * @m2m_stop_part_two: function pointer
+ * @m2m_check_format: function pointer
+ * @m2m_calc_out_len: function pointer
+ * @m2m_get_maxburst: function pointer
+ * @m2m_pair_suspend: function pointer
+ * @m2m_pair_resume: function pointer
+ * @m2m_set_ratio_mod: function pointer
+ * @get_output_fifo_size: function pointer
  * @pair_priv_size: size of pair private struct.
  * @private: private data structure
  */
@@ -86,6 +121,11 @@ struct fsl_asrc {
 	struct clk *spba_clk;
 	spinlock_t lock;      /* spin lock for resource protection */
 
+	struct v4l2_device v4l2_dev;
+	struct v4l2_m2m_dev *m2m_dev;
+	struct video_device *dec_vdev;
+	struct mutex mlock; /* v4l2 ioctls serialization */
+
 	struct fsl_asrc_pair *pair[PAIR_CTX_NUM];
 	unsigned int channel_avail;
 
@@ -97,6 +137,20 @@ struct fsl_asrc {
 	int (*request_pair)(int channels, struct fsl_asrc_pair *pair);
 	void (*release_pair)(struct fsl_asrc_pair *pair);
 	int (*get_fifo_addr)(u8 dir, enum asrc_pair_index index);
+
+	int (*m2m_start_part_one)(struct fsl_asrc_pair *pair);
+	int (*m2m_start_part_two)(struct fsl_asrc_pair *pair);
+	int (*m2m_stop_part_one)(struct fsl_asrc_pair *pair);
+	int (*m2m_stop_part_two)(struct fsl_asrc_pair *pair);
+
+	int (*m2m_check_format)(u8 dir, u32 rate, u32 channels, u32 format);
+	int (*m2m_calc_out_len)(struct fsl_asrc_pair *pair, int input_buffer_length);
+	int (*m2m_get_maxburst)(u8 dir, struct fsl_asrc_pair *pair);
+	int (*m2m_pair_suspend)(struct fsl_asrc_pair *pair);
+	int (*m2m_pair_resume)(struct fsl_asrc_pair *pair);
+	int (*m2m_set_ratio_mod)(struct fsl_asrc_pair *pair, int val);
+
+	unsigned int (*get_output_fifo_size)(struct fsl_asrc_pair *pair);
 	size_t pair_priv_size;
 
 	void *private;
-- 
2.34.1



* [PATCH 3/6] ASoC: fsl_easrc: define functions for memory to memory usage
  2023-06-29  1:37 [PATCH 0/6] Add audio support in v4l2 framework Shengjiu Wang
  2023-06-29  1:37 ` [PATCH 1/6] media: v4l2: Add audio capture and output support Shengjiu Wang
  2023-06-29  1:37 ` [PATCH 2/6] ASoC: fsl_asrc: define functions for memory to memory usage Shengjiu Wang
@ 2023-06-29  1:37 ` Shengjiu Wang
  2023-06-29 10:59     ` Fabio Estevam
  2023-06-29  1:37 ` [PATCH 4/6] ASoC: fsl_asrc: Add memory to memory driver Shengjiu Wang
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 41+ messages in thread
From: Shengjiu Wang @ 2023-06-29  1:37 UTC (permalink / raw)
  To: tfiga, m.szyprowski, mchehab, linux-media, linux-kernel,
	shengjiu.wang, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	broonie, perex, tiwai, alsa-devel, linuxppc-dev

The ASRC can be used in the memory-to-memory case, so define
several functions for m2m usage and export them as function
pointers.
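
As an illustration (not part of the patch itself): the output length
estimate in fsl_easrc_m2m_calc_out_len() below boils down to
out_samples ~= in_samples * out_rate / in_rate, with the in/out rate
ratio held in fixed point (39/38/37 fractional bits for 32/64/128
resampler taps) plus the user-supplied ratio_mod correction. A
standalone version of that arithmetic, with illustrative names:

#include <linux/types.h>
#include <linux/math64.h>

/* Sketch only: mirror the fixed-point sample-count estimate. */
static u32 easrc_estimate_out_samples(u32 in_samples, u32 in_rate,
				      u32 out_rate, u32 frac_bits,
				      u32 ratio_mod)
{
	u64 ratio, n;

	ratio = (u64)in_rate << frac_bits;	/* in_rate/out_rate in Q(frac_bits) */
	do_div(ratio, out_rate);
	ratio += ratio_mod;			/* user correction */

	n = (u64)in_samples << (frac_bits - 12);
	do_div(n, ratio >> 12);			/* >>12 keeps the divisor in 32 bits */

	return n;
}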

Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 sound/soc/fsl/fsl_easrc.c | 214 ++++++++++++++++++++++++++++++++++++++
 sound/soc/fsl/fsl_easrc.h |   6 ++
 2 files changed, 220 insertions(+)

diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
index 670cbdb361b6..b735b24badc2 100644
--- a/sound/soc/fsl/fsl_easrc.c
+++ b/sound/soc/fsl/fsl_easrc.c
@@ -1861,6 +1861,210 @@ static int fsl_easrc_get_fifo_addr(u8 dir, enum asrc_pair_index index)
 	return REG_EASRC_FIFO(dir, index);
 }
 
+/* Get sample numbers in FIFO */
+static unsigned int fsl_easrc_get_output_fifo_size(struct fsl_asrc_pair *pair)
+{
+	struct fsl_asrc *asrc = pair->asrc;
+	enum asrc_pair_index index = pair->index;
+	u32 val;
+
+	regmap_read(asrc->regmap, REG_EASRC_SFS(index), &val);
+	val &= EASRC_SFS_NSGO_MASK;
+
+	return val >> EASRC_SFS_NSGO_SHIFT;
+}
+
+static int fsl_easrc_m2m_start_part_one(struct fsl_asrc_pair *pair)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+	struct fsl_asrc *asrc = pair->asrc;
+	struct device *dev = &asrc->pdev->dev;
+	int ret;
+
+	ctx_priv->in_params.sample_rate = pair->rate[IN];
+	ctx_priv->in_params.sample_format = pair->sample_format[IN];
+	ctx_priv->out_params.sample_rate = pair->rate[OUT];
+	ctx_priv->out_params.sample_format = pair->sample_format[OUT];
+
+	ctx_priv->in_params.fifo_wtmk = FSL_EASRC_INPUTFIFO_WML;
+	ctx_priv->out_params.fifo_wtmk = FSL_EASRC_OUTPUTFIFO_WML;
+	/* Fill the right half of the re-sampler with zeros */
+	ctx_priv->rs_init_mode = 0x2;
+	/* Zero fill the right half of the prefilter */
+	ctx_priv->pf_init_mode = 0x2;
+
+	ret = fsl_easrc_set_ctx_format(pair,
+				       &ctx_priv->in_params.sample_format,
+				       &ctx_priv->out_params.sample_format);
+	if (ret) {
+		dev_err(dev, "failed to set context format: %d\n", ret);
+		return ret;
+	}
+
+	ret = fsl_easrc_config_context(asrc, pair->index);
+	if (ret) {
+		dev_err(dev, "failed to config context %d\n", ret);
+		return ret;
+	}
+
+	ctx_priv->in_params.iterations = 1;
+	ctx_priv->in_params.group_len = pair->channels;
+	ctx_priv->in_params.access_len = pair->channels;
+	ctx_priv->out_params.iterations = 1;
+	ctx_priv->out_params.group_len = pair->channels;
+	ctx_priv->out_params.access_len = pair->channels;
+
+	ret = fsl_easrc_set_ctx_organziation(pair);
+	if (ret) {
+		dev_err(dev, "failed to set fifo organization\n");
+		return ret;
+	}
+
+	/* The context start flag */
+	ctx_priv->first_convert = 1;
+	return 0;
+}
+
+static int fsl_easrc_m2m_start_part_two(struct fsl_asrc_pair *pair)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+	/* start context once */
+	if (ctx_priv->first_convert) {
+		fsl_easrc_start_context(pair);
+		ctx_priv->first_convert = 0;
+	}
+
+	return 0;
+}
+
+static int fsl_easrc_m2m_stop_part_two(struct fsl_asrc_pair *pair)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+	/* Stop pair/context */
+	if (!ctx_priv->first_convert) {
+		fsl_easrc_stop_context(pair);
+		ctx_priv->first_convert = 1;
+	}
+
+	return 0;
+}
+
+static int fsl_easrc_m2m_check_format(u8 dir, u32 rate, u32 channels, u32 format)
+{
+	u64 support_format = FSL_EASRC_FORMATS;
+
+	if (channels < 1 || channels > 32)
+		return -EINVAL;
+
+	if (rate < 8000 || rate > 768000)
+		return -EINVAL;
+
+	if (dir == OUT)
+		support_format |= SNDRV_PCM_FMTBIT_IEC958_SUBFRAME_LE;
+
+	if (!(1 << format & support_format))
+		return -EINVAL;
+
+	return 0;
+}
+
+/* calculate capture data length according to output data length and sample rate */
+static int fsl_easrc_m2m_calc_out_len(struct fsl_asrc_pair *pair, int input_buffer_length)
+{
+	struct fsl_asrc *easrc = pair->asrc;
+	struct fsl_easrc_priv *easrc_priv = easrc->private;
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+	unsigned int in_rate = ctx_priv->in_params.norm_rate;
+	unsigned int out_rate = ctx_priv->out_params.norm_rate;
+	unsigned int channels = pair->channels;
+	unsigned int in_samples, out_samples;
+	unsigned int in_width, out_width;
+	unsigned int out_length;
+	unsigned int frac_bits;
+	u64 val1, val2;
+
+	switch (easrc_priv->rs_num_taps) {
+	case EASRC_RS_32_TAPS:
+		/* integer bits = 5; */
+		frac_bits = 39;
+		break;
+	case EASRC_RS_64_TAPS:
+		/* integer bits = 6; */
+		frac_bits = 38;
+		break;
+	case EASRC_RS_128_TAPS:
+		/* integer bits = 7; */
+		frac_bits = 37;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	val1 = (u64)in_rate << frac_bits;
+	do_div(val1, out_rate);
+	val1 = val1 + ctx_priv->ratio_mod;
+
+	in_width = snd_pcm_format_physical_width(ctx_priv->in_params.sample_format) / 8;
+	out_width = snd_pcm_format_physical_width(ctx_priv->out_params.sample_format) / 8;
+
+	ctx_priv->in_filled_len += input_buffer_length;
+	if (ctx_priv->in_filled_len <= ctx_priv->in_filled_sample * in_width * channels) {
+		out_length = 0;
+	} else {
+		in_samples = ctx_priv->in_filled_len / (in_width * channels) -
+			     ctx_priv->in_filled_sample;
+
+		/* right shift 12 bit to make ratio in 32bit space */
+		val2 = (u64)in_samples << (frac_bits - 12);
+		val1 = val1 >> 12;
+		do_div(val2, val1);
+		out_samples = val2;
+
+		out_length = out_samples * out_width * channels;
+		ctx_priv->in_filled_len = ctx_priv->in_filled_sample * in_width * channels;
+	}
+
+	return out_length;
+}
+
+static int fsl_easrc_m2m_get_maxburst(u8 dir, struct fsl_asrc_pair *pair)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+
+	if (dir == IN)
+		return ctx_priv->in_params.fifo_wtmk * pair->channels;
+	else
+		return ctx_priv->out_params.fifo_wtmk * pair->channels;
+}
+
+static int fsl_easrc_m2m_pair_suspend(struct fsl_asrc_pair *pair)
+{
+	fsl_easrc_stop_context(pair);
+
+	return 0;
+}
+
+static int fsl_easrc_m2m_pair_resume(struct fsl_asrc_pair *pair)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+
+	ctx_priv->first_convert = 1;
+	ctx_priv->in_filled_len = 0;
+
+	return 0;
+}
+
+static int fsl_easrc_m2m_set_ratio_mod(struct fsl_asrc_pair *pair, int val)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+	struct fsl_asrc *easrc = pair->asrc;
+
+	ctx_priv->ratio_mod += val;
+	regmap_write(easrc->regmap, REG_EASRC_RUC(pair->index), EASRC_RSUC_RS_RM(val));
+
+	return 0;
+}
+
 static const struct of_device_id fsl_easrc_dt_ids[] = {
 	{ .compatible = "fsl,imx8mn-easrc",},
 	{}
@@ -1926,6 +2130,16 @@ static int fsl_easrc_probe(struct platform_device *pdev)
 	easrc->release_pair = fsl_easrc_release_context;
 	easrc->get_fifo_addr = fsl_easrc_get_fifo_addr;
 	easrc->pair_priv_size = sizeof(struct fsl_easrc_ctx_priv);
+	easrc->m2m_start_part_one = fsl_easrc_m2m_start_part_one;
+	easrc->m2m_start_part_two = fsl_easrc_m2m_start_part_two;
+	easrc->m2m_stop_part_two = fsl_easrc_m2m_stop_part_two;
+	easrc->get_output_fifo_size = fsl_easrc_get_output_fifo_size;
+	easrc->m2m_check_format = fsl_easrc_m2m_check_format;
+	easrc->m2m_calc_out_len = fsl_easrc_m2m_calc_out_len;
+	easrc->m2m_get_maxburst = fsl_easrc_m2m_get_maxburst;
+	easrc->m2m_pair_suspend = fsl_easrc_m2m_pair_suspend;
+	easrc->m2m_pair_resume = fsl_easrc_m2m_pair_resume;
+	easrc->m2m_set_ratio_mod = fsl_easrc_m2m_set_ratio_mod;
 
 	easrc_priv->rs_num_taps = EASRC_RS_32_TAPS;
 	easrc_priv->const_coeff = 0x3FF0000000000000;
diff --git a/sound/soc/fsl/fsl_easrc.h b/sound/soc/fsl/fsl_easrc.h
index 7c70dac52713..bee887c8b4f2 100644
--- a/sound/soc/fsl/fsl_easrc.h
+++ b/sound/soc/fsl/fsl_easrc.h
@@ -601,6 +601,9 @@ struct fsl_easrc_slot {
  * @out_missed_sample: sample missed in output
  * @st1_addexp: exponent added for stage1
  * @st2_addexp: exponent added for stage2
+ * @ratio_mod: update ratio
+ * @first_convert: start of conversion
+ * @in_filled_len: input filled length
  */
 struct fsl_easrc_ctx_priv {
 	struct fsl_easrc_io_params in_params;
@@ -618,6 +621,9 @@ struct fsl_easrc_ctx_priv {
 	int out_missed_sample;
 	int st1_addexp;
 	int st2_addexp;
+	int ratio_mod;
+	unsigned int first_convert;
+	unsigned int in_filled_len;
 };
 
 /**
-- 
2.34.1



* [PATCH 4/6] ASoC: fsl_asrc: Add memory to memory driver
  2023-06-29  1:37 [PATCH 0/6] Add audio support in v4l2 framework Shengjiu Wang
                   ` (2 preceding siblings ...)
  2023-06-29  1:37 ` [PATCH 3/6] ASoC: fsl_easrc: " Shengjiu Wang
@ 2023-06-29  1:37 ` Shengjiu Wang
  2023-06-29 11:38     ` Mark Brown
  2023-06-29  1:37 ` [PATCH 5/6] ASoC: fsl_asrc: enable memory to memory function Shengjiu Wang
  2023-06-29  1:37 ` [PATCH 6/6] ASoC: fsl_easrc: " Shengjiu Wang
  5 siblings, 1 reply; 41+ messages in thread
From: Shengjiu Wang @ 2023-06-29  1:37 UTC (permalink / raw)
  To: tfiga, m.szyprowski, mchehab, linux-media, linux-kernel,
	shengjiu.wang, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	broonie, perex, tiwai, alsa-devel, linuxppc-dev

Implement the ASRC memory-to-memory function using the v4l2
framework; user space can drive it through the v4l2 ioctl
interface.

The user queues the output (source) and capture (destination)
buffers to the driver, and the driver stores the converted data
in the capture buffer.

This feature can be shared by the ASRC and EASRC drivers.
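
For reference, a minimal sketch of the expected buffer flow from user
space (assumptions: fd is the /dev/audioX node already configured with
VIDIOC_S_FMT as in the cover letter example, MMAP I/O with one buffer
per queue; the mmap() of the buffers, copying of sample data and error
handling are omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch only: convert one source buffer into one destination buffer. */
static void convert_one_buffer(int fd, unsigned int bytes_in)
{
	struct v4l2_requestbuffers req;
	struct v4l2_buffer buf;
	enum v4l2_buf_type type;

	/* one buffer on each queue */
	memset(&req, 0, sizeof(req));
	req.count = 1;
	req.memory = V4L2_MEMORY_MMAP;
	req.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	ioctl(fd, VIDIOC_REQBUFS, &req);
	req.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	ioctl(fd, VIDIOC_REQBUFS, &req);

	/* queue the source data (OUTPUT) and an empty destination (CAPTURE) */
	memset(&buf, 0, sizeof(buf));
	buf.index = 0;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	buf.bytesused = bytes_in;
	ioctl(fd, VIDIOC_QBUF, &buf);
	buf.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	buf.bytesused = 0;
	ioctl(fd, VIDIOC_QBUF, &buf);

	type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	ioctl(fd, VIDIOC_STREAMON, &type);
	type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	ioctl(fd, VIDIOC_STREAMON, &type);

	/* dequeue the converted data; buf.bytesused holds its length */
	memset(&buf, 0, sizeof(buf));
	buf.memory = V4L2_MEMORY_MMAP;
	buf.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	ioctl(fd, VIDIOC_DQBUF, &buf);
}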

Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 sound/soc/fsl/Kconfig        |  13 +
 sound/soc/fsl/Makefile       |   2 +
 sound/soc/fsl/fsl_asrc_m2m.c | 878 +++++++++++++++++++++++++++++++++++
 sound/soc/fsl/fsl_asrc_m2m.h |  48 ++
 4 files changed, 941 insertions(+)
 create mode 100644 sound/soc/fsl/fsl_asrc_m2m.c
 create mode 100644 sound/soc/fsl/fsl_asrc_m2m.h

diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
index 725c530a3636..b61087c20a1d 100644
--- a/sound/soc/fsl/Kconfig
+++ b/sound/soc/fsl/Kconfig
@@ -14,6 +14,19 @@ config SND_SOC_FSL_ASRC
 	  This option is only useful for out-of-tree drivers since
 	  in-tree drivers select it automatically.
 
+config SND_SOC_FSL_ASRC_M2M
+	tristate "Asynchronous Sample Rate Converter (ASRC) M2M support"
+	depends on SND_SOC_FSL_ASRC
+	depends on V4L_MEM2MEM_DRIVERS
+	depends on MEDIA_SUPPORT
+	select VIDEOBUF2_DMA_CONTIG
+	select V4L2_MEM2MEM_DEV
+	help
+	  Say Y if you want to add ASRC M2M support for NXP CPUs.
+	  It is a completement for ASRC M2P and ASRC P2M features.
+	  This option is only useful for out-of-tree drivers since
+	  in-tree drivers select it automatically.
+
 config SND_SOC_FSL_SAI
 	tristate "Synchronous Audio Interface (SAI) module support"
 	select REGMAP_MMIO
diff --git a/sound/soc/fsl/Makefile b/sound/soc/fsl/Makefile
index 8db7e97d0bd5..02182fa4cf02 100644
--- a/sound/soc/fsl/Makefile
+++ b/sound/soc/fsl/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_SND_SOC_P1022_RDK) += snd-soc-p1022-rdk.o
 snd-soc-fsl-audmix-objs := fsl_audmix.o
 snd-soc-fsl-asoc-card-objs := fsl-asoc-card.o
 snd-soc-fsl-asrc-objs := fsl_asrc.o fsl_asrc_dma.o
+snd-soc-fsl-asrc-m2m-objs := fsl_asrc_m2m.o
 snd-soc-fsl-sai-objs := fsl_sai.o
 snd-soc-fsl-ssi-y := fsl_ssi.o
 snd-soc-fsl-ssi-$(CONFIG_DEBUG_FS) += fsl_ssi_dbg.o
@@ -33,6 +34,7 @@ snd-soc-fsl-qmc-audio-objs := fsl_qmc_audio.o
 obj-$(CONFIG_SND_SOC_FSL_AUDMIX) += snd-soc-fsl-audmix.o
 obj-$(CONFIG_SND_SOC_FSL_ASOC_CARD) += snd-soc-fsl-asoc-card.o
 obj-$(CONFIG_SND_SOC_FSL_ASRC) += snd-soc-fsl-asrc.o
+obj-$(CONFIG_SND_SOC_FSL_ASRC_M2M) += snd-soc-fsl-asrc-m2m.o
 obj-$(CONFIG_SND_SOC_FSL_SAI) += snd-soc-fsl-sai.o
 obj-$(CONFIG_SND_SOC_FSL_SSI) += snd-soc-fsl-ssi.o
 obj-$(CONFIG_SND_SOC_FSL_SPDIF) += snd-soc-fsl-spdif.o
diff --git a/sound/soc/fsl/fsl_asrc_m2m.c b/sound/soc/fsl/fsl_asrc_m2m.c
new file mode 100644
index 000000000000..02fe420793af
--- /dev/null
+++ b/sound/soc/fsl/fsl_asrc_m2m.c
@@ -0,0 +1,878 @@
+// SPDX-License-Identifier: GPL-2.0
+//
+// Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
+// Copyright (C) 2019-2023 NXP
+//
+// Freescale ASRC Memory to Memory (M2M) driver
+
+#include <linux/dma/imx-dma.h>
+#include <linux/pm_runtime.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/videobuf2-dma-contig.h>
+#include <sound/dmaengine_pcm.h>
+#include "fsl_asrc.h"
+#include "fsl_asrc_m2m.h"
+
+#define ASRC_M2M_BUFFER_SIZE (512 * 1024)
+#define ASRC_M2M_PERIOD_SIZE (48 * 1024)
+#define ASRC_M2M_SG_NUM (20)
+
+static inline struct fsl_asrc_pair *fsl_asrc_m2m_fh_to_ctx(struct v4l2_fh *fh)
+{
+	return container_of(fh, struct fsl_asrc_pair, fh);
+}
+
+/**
+ * fsl_asrc_read_last_fifo: read all the remaining data from FIFO
+ *	@pair: Structure pointer of fsl_asrc_pair
+ *	@dma_vaddr: virtual address of capture buffer
+ *	@length: payload length of capture buffer
+ */
+static void fsl_asrc_read_last_fifo(struct fsl_asrc_pair *pair, void *dma_vaddr, u32 *length)
+{
+	struct fsl_asrc *asrc = pair->asrc;
+	enum asrc_pair_index index = pair->index;
+	u32 i, reg, size, t_size = 0, width;
+	u32 *reg32 = NULL;
+	u16 *reg16 = NULL;
+	u8  *reg24 = NULL;
+
+	width = snd_pcm_format_physical_width(pair->sample_format[V4L_CAP]);
+	if (width == 32)
+		reg32 = dma_vaddr + *length;
+	else if (width == 16)
+		reg16 = dma_vaddr + *length;
+	else
+		reg24 = dma_vaddr + *length;
+retry:
+	size = asrc->get_output_fifo_size(pair);
+	if (size + *length > ASRC_M2M_BUFFER_SIZE)
+		goto end;
+
+	for (i = 0; i < size * pair->channels; i++) {
+		regmap_read(asrc->regmap, asrc->get_fifo_addr(OUT, index), &reg);
+		if (reg32) {
+			*(reg32) = reg;
+			reg32++;
+		} else if (reg16) {
+			*(reg16) = (u16)reg;
+			reg16++;
+		} else {
+			*reg24++ = (u8)reg;
+			*reg24++ = (u8)(reg >> 8);
+			*reg24++ = (u8)(reg >> 16);
+		}
+	}
+	t_size += size;
+
+	/* In case there is data left in FIFO */
+	if (size)
+		goto retry;
+end:
+	/* Update payload length */
+	if (reg32)
+		*length += t_size * pair->channels * 4;
+	else if (reg16)
+		*length += t_size * pair->channels * 2;
+	else
+		*length += t_size * pair->channels * 3;
+}
+
+static int fsl_asrc_m2m_start_streaming(struct vb2_queue *q, unsigned int count)
+{
+	struct fsl_asrc_pair *pair = vb2_get_drv_priv(q);
+	struct fsl_asrc *asrc = pair->asrc;
+	struct device *dev = &asrc->pdev->dev;
+	struct vb2_v4l2_buffer *buf;
+	bool request_flag = false;
+	int ret;
+
+	dev_dbg(dev, "Start streaming pair=%p, %d\n", pair, q->type);
+
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		dev_err(dev, "Failed to power up asrc\n");
+		goto err_pm_runtime;
+	}
+
+	/* Request asrc pair/context */
+	if (!pair->req_pair) {
+		/* flag for error handler of this function */
+		request_flag = true;
+
+		ret = asrc->request_pair(pair->channels, pair);
+		if (ret) {
+			dev_err(dev, "failed to request pair: %d\n", ret);
+			goto err_request_pair;
+		}
+
+		ret = asrc->m2m_start_part_one(pair);
+		if (ret) {
+			dev_err(dev, "failed to start pair part one: %d\n", ret);
+			goto err_start_part_one;
+		}
+
+		pair->req_pair = true;
+	}
+
+	/* Request dma channels */
+	if (V4L2_TYPE_IS_OUTPUT(q->type)) {
+		pair->dma_chan[V4L_OUT] = asrc->get_dma_channel(pair, IN);
+		if (!pair->dma_chan[V4L_OUT]) {
+			dev_err(dev, "[ctx%d] failed to get input DMA channel\n", pair->index);
+			ret = -EBUSY;
+			goto err_dma_channel;
+		}
+	} else {
+		pair->dma_chan[V4L_CAP] = asrc->get_dma_channel(pair, OUT);
+		if (!pair->dma_chan[V4L_CAP]) {
+			dev_err(dev, "[ctx%d] failed to get output DMA channel\n", pair->index);
+			ret = -EBUSY;
+			goto err_dma_channel;
+		}
+	}
+
+	v4l2_m2m_update_start_streaming_state(pair->fh.m2m_ctx, q);
+
+	return 0;
+
+err_dma_channel:
+	if (request_flag && asrc->m2m_stop_part_one)
+		asrc->m2m_stop_part_one(pair);
+err_start_part_one:
+	if (request_flag)
+		asrc->release_pair(pair);
+err_request_pair:
+	pm_runtime_put_sync(dev);
+err_pm_runtime:
+	/* Release buffers */
+	if (V4L2_TYPE_IS_OUTPUT(q->type)) {
+		while ((buf = v4l2_m2m_src_buf_remove(pair->fh.m2m_ctx)))
+			v4l2_m2m_buf_done(buf, VB2_BUF_STATE_QUEUED);
+	} else {
+		while ((buf = v4l2_m2m_dst_buf_remove(pair->fh.m2m_ctx)))
+			v4l2_m2m_buf_done(buf, VB2_BUF_STATE_QUEUED);
+	}
+	return ret;
+}
+
+static void fsl_asrc_m2m_stop_streaming(struct vb2_queue *q)
+{
+	struct fsl_asrc_pair *pair = vb2_get_drv_priv(q);
+	struct fsl_asrc *asrc = pair->asrc;
+	struct device *dev = &asrc->pdev->dev;
+
+	dev_dbg(dev, "Stop streaming pair=%p, %d\n", pair, q->type);
+
+	v4l2_m2m_update_stop_streaming_state(pair->fh.m2m_ctx, q);
+
+	/* Stop & release pair/context */
+	if (asrc->m2m_stop_part_two)
+		asrc->m2m_stop_part_two(pair);
+
+	if (pair->req_pair) {
+		if (asrc->m2m_stop_part_one)
+			asrc->m2m_stop_part_one(pair);
+		asrc->release_pair(pair);
+		pair->req_pair = false;
+	}
+
+	/* Release dma channel */
+	if (V4L2_TYPE_IS_OUTPUT(q->type)) {
+		if (pair->dma_chan[V4L_OUT])
+			dma_release_channel(pair->dma_chan[V4L_OUT]);
+	} else {
+		if (pair->dma_chan[V4L_CAP])
+			dma_release_channel(pair->dma_chan[V4L_CAP]);
+	}
+
+	pm_runtime_put_sync(dev);
+}
+
+static int fsl_asrc_m2m_queue_setup(struct vb2_queue *q,
+				    unsigned int *num_buffers, unsigned int *num_planes,
+				    unsigned int sizes[], struct device *alloc_devs[])
+{
+	struct fsl_asrc_pair *pair = vb2_get_drv_priv(q);
+
+	/* single buffer */
+	*num_planes = 1;
+
+	/*
+	 * The capture buffer size depends on output buffer size
+	 * and the convert ratio.
+	 *
+	 * Here just use a fix length for capture and output buffer.
+	 * User need to care about it.
+	 */
+
+	if (V4L2_TYPE_IS_OUTPUT(q->type))
+		sizes[0] = pair->buf_len[V4L_OUT];
+	else
+		sizes[0] = pair->buf_len[V4L_CAP];
+
+	return 0;
+}
+
+static void fsl_asrc_m2m_buf_queue(struct vb2_buffer *vb)
+{
+	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+	struct fsl_asrc_pair *pair = vb2_get_drv_priv(vb->vb2_queue);
+
+	/* queue buffer */
+	v4l2_m2m_buf_queue(pair->fh.m2m_ctx, vbuf);
+}
+
+static const struct vb2_ops fsl_asrc_m2m_qops = {
+	.wait_prepare		= vb2_ops_wait_prepare,
+	.wait_finish		= vb2_ops_wait_finish,
+	.start_streaming	= fsl_asrc_m2m_start_streaming,
+	.stop_streaming		= fsl_asrc_m2m_stop_streaming,
+	.queue_setup		= fsl_asrc_m2m_queue_setup,
+	.buf_queue		= fsl_asrc_m2m_buf_queue,
+};
+
+/* Init video buffer queue for src and dst. */
+static int fsl_asrc_m2m_queue_init(void *priv, struct vb2_queue *src_vq,
+				   struct vb2_queue *dst_vq)
+{
+	struct fsl_asrc_pair *pair = priv;
+	struct fsl_asrc *asrc = pair->asrc;
+	int ret;
+
+	src_vq->type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
+	src_vq->io_modes = VB2_MMAP | VB2_DMABUF;
+	src_vq->drv_priv = pair;
+	src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+	src_vq->ops = &fsl_asrc_m2m_qops;
+	src_vq->mem_ops = &vb2_dma_contig_memops;
+	src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+	src_vq->lock = &asrc->mlock;
+	src_vq->dev = &asrc->pdev->dev;
+	src_vq->min_buffers_needed = 1;
+
+	ret = vb2_queue_init(src_vq);
+	if (ret)
+		return ret;
+
+	dst_vq->type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
+	dst_vq->io_modes = VB2_MMAP | VB2_DMABUF;
+	dst_vq->drv_priv = pair;
+	dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+	dst_vq->ops = &fsl_asrc_m2m_qops;
+	dst_vq->mem_ops = &vb2_dma_contig_memops;
+	dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+	dst_vq->lock = &asrc->mlock;
+	dst_vq->dev = &asrc->pdev->dev;
+	dst_vq->min_buffers_needed = 1;
+
+	ret = vb2_queue_init(dst_vq);
+	return ret;
+}
+
+static int fsl_asrc_m2m_op_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct fsl_asrc_pair *pair =
+		container_of(ctrl->handler, struct fsl_asrc_pair, ctrl_handler);
+	struct fsl_asrc *asrc = pair->asrc;
+	int ret = 0;
+
+	switch (ctrl->id) {
+	case V4L2_CID_GAIN:
+		if (asrc->m2m_set_ratio_mod)
+			asrc->m2m_set_ratio_mod(pair, ctrl->val);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+static const struct v4l2_ctrl_ops fsl_asrc_m2m_ctrl_ops = {
+	.s_ctrl = fsl_asrc_m2m_op_s_ctrl,
+};
+
+/* system callback for open() */
+static int fsl_asrc_m2m_open(struct file *file)
+{
+	struct fsl_asrc *asrc = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+	struct fsl_asrc_pair *pair;
+	int ret = 0;
+
+	if (mutex_lock_interruptible(&asrc->mlock))
+		return -ERESTARTSYS;
+
+	pair = kzalloc(sizeof(*pair) + asrc->pair_priv_size, GFP_KERNEL);
+	if (!pair) {
+		ret = -ENOMEM;
+		goto err_alloc;
+	}
+
+	pair->private = (void *)pair + sizeof(struct fsl_asrc_pair);
+	pair->asrc = asrc;
+	pair->buf_len[V4L_OUT] = ASRC_M2M_BUFFER_SIZE;
+	pair->buf_len[V4L_CAP] = ASRC_M2M_BUFFER_SIZE;
+
+	init_completion(&pair->complete[V4L_OUT]);
+	init_completion(&pair->complete[V4L_CAP]);
+
+	v4l2_fh_init(&pair->fh, vdev);
+	v4l2_fh_add(&pair->fh);
+	file->private_data = &pair->fh;
+
+	/* m2m context init */
+	pair->fh.m2m_ctx = v4l2_m2m_ctx_init(asrc->m2m_dev, pair,
+					     fsl_asrc_m2m_queue_init);
+	if (IS_ERR(pair->fh.m2m_ctx)) {
+		ret = PTR_ERR(pair->fh.m2m_ctx);
+		goto err_ctx_init;
+	}
+
+	v4l2_ctrl_handler_init(&pair->ctrl_handler, 2);
+
+	/* use V4L2_CID_GAIN for ratio update control */
+	v4l2_ctrl_new_std(&pair->ctrl_handler, &fsl_asrc_m2m_ctrl_ops,
+			  V4L2_CID_GAIN,
+			  0xFFFFFFFF80000001, 0x7fffffff, 1, 0);
+
+	if (pair->ctrl_handler.error) {
+		ret = pair->ctrl_handler.error;
+		v4l2_ctrl_handler_free(&pair->ctrl_handler);
+		goto err_ctrl_handler;
+	}
+
+	pair->fh.ctrl_handler = &pair->ctrl_handler;
+
+	mutex_unlock(&asrc->mlock);
+
+	return 0;
+
+err_ctrl_handler:
+	v4l2_m2m_ctx_release(pair->fh.m2m_ctx);
+err_ctx_init:
+	v4l2_fh_del(&pair->fh);
+	v4l2_fh_exit(&pair->fh);
+	kfree(pair);
+err_alloc:
+	mutex_unlock(&asrc->mlock);
+	return ret;
+}
+
+static int fsl_asrc_m2m_release(struct file *file)
+{
+	struct fsl_asrc *asrc = video_drvdata(file);
+	struct fsl_asrc_pair *pair = fsl_asrc_m2m_fh_to_ctx(file->private_data);
+
+	mutex_lock(&asrc->mlock);
+	v4l2_ctrl_handler_free(&pair->ctrl_handler);
+	v4l2_m2m_ctx_release(pair->fh.m2m_ctx);
+	v4l2_fh_del(&pair->fh);
+	v4l2_fh_exit(&pair->fh);
+	kfree(pair);
+	mutex_unlock(&asrc->mlock);
+
+	return 0;
+}
+
+static const struct v4l2_file_operations fsl_asrc_m2m_fops = {
+	.owner          = THIS_MODULE,
+	.open           = fsl_asrc_m2m_open,
+	.release        = fsl_asrc_m2m_release,
+	.poll           = v4l2_m2m_fop_poll,
+	.unlocked_ioctl = video_ioctl2,
+	.mmap           = v4l2_m2m_fop_mmap,
+};
+
+static int fsl_asrc_m2m_querycap(struct file *file, void *priv,
+				 struct v4l2_capability *cap)
+{
+	strscpy(cap->driver, "asrc m2m", sizeof(cap->driver));
+	strscpy(cap->card, "asrc m2m", sizeof(cap->card));
+	cap->device_caps = V4L2_CAP_AUDIO | V4L2_CAP_STREAMING;
+	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+
+	return 0;
+}
+
+static int fsl_asrc_m2m_g_fmt_aud_cap(struct file *file, void *fh,
+				      struct v4l2_format *f)
+{
+	struct fsl_asrc_pair *pair = fsl_asrc_m2m_fh_to_ctx(fh);
+
+	f->fmt.audio.channels = pair->channels;
+	f->fmt.audio.rate = pair->rate[V4L_CAP];
+	f->fmt.audio.format = pair->sample_format[V4L_CAP];
+	f->fmt.audio.buffersize = pair->buf_len[V4L_CAP];
+
+	return 0;
+}
+
+static int fsl_asrc_m2m_g_fmt_aud_out(struct file *file, void *fh,
+				      struct v4l2_format *f)
+{
+	struct fsl_asrc_pair *pair = fsl_asrc_m2m_fh_to_ctx(fh);
+
+	f->fmt.audio.channels = pair->channels;
+	f->fmt.audio.rate = pair->rate[V4L_OUT];
+	f->fmt.audio.format = pair->sample_format[V4L_OUT];
+	f->fmt.audio.buffersize = pair->buf_len[V4L_OUT];
+
+	return 0;
+}
+
+/* output for asrc */
+static int fsl_asrc_m2m_s_fmt_aud_cap(struct file *file, void *fh,
+				      struct v4l2_format *f)
+{
+	struct fsl_asrc_pair *pair = fsl_asrc_m2m_fh_to_ctx(fh);
+	struct fsl_asrc *asrc = pair->asrc;
+	struct device *dev = &asrc->pdev->dev;
+	int ret;
+
+	ret = asrc->m2m_check_format(OUT, f->fmt.audio.rate,
+				     f->fmt.audio.channels,
+				     f->fmt.audio.format);
+	if (ret)
+		return -EINVAL;
+
+	if (pair->channels > 0 && pair->channels != f->fmt.audio.channels) {
+		dev_err(dev, "channels don't match for cap and out\n");
+		return -EINVAL;
+	}
+
+	pair->channels = f->fmt.audio.channels;
+	pair->rate[V4L_CAP] = f->fmt.audio.rate;
+	pair->sample_format[V4L_CAP] = f->fmt.audio.format;
+
+	/* Get buffer size from user */
+	if (f->fmt.audio.buffersize > pair->buf_len[V4L_CAP])
+		pair->buf_len[V4L_CAP] = f->fmt.audio.buffersize;
+
+	return 0;
+}
+
+/* input for asrc */
+static int fsl_asrc_m2m_s_fmt_aud_out(struct file *file, void *fh,
+				      struct v4l2_format *f)
+{
+	struct fsl_asrc_pair *pair = fsl_asrc_m2m_fh_to_ctx(fh);
+	struct fsl_asrc *asrc = pair->asrc;
+	struct device *dev = &asrc->pdev->dev;
+	int ret;
+
+	ret = asrc->m2m_check_format(IN, f->fmt.audio.rate,
+				     f->fmt.audio.channels,
+				     f->fmt.audio.format);
+	if (ret)
+		return -EINVAL;
+
+	if (pair->channels > 0 && pair->channels != f->fmt.audio.channels) {
+		dev_err(dev, "channels don't match for cap and out\n");
+		return -EINVAL;
+	}
+
+	pair->channels = f->fmt.audio.channels;
+	pair->rate[V4L_OUT] = f->fmt.audio.rate;
+	pair->sample_format[V4L_OUT] = f->fmt.audio.format;
+
+	/* Get buffer size from user */
+	if (f->fmt.audio.buffersize > pair->buf_len[V4L_OUT])
+		pair->buf_len[V4L_OUT] = f->fmt.audio.buffersize;
+
+	return 0;
+}
+
+static int fsl_asrc_m2m_try_fmt_audio_cap(struct file *file, void *fh,
+					  struct v4l2_format *f)
+{
+	struct fsl_asrc_pair *pair = fsl_asrc_m2m_fh_to_ctx(fh);
+	struct fsl_asrc *asrc = pair->asrc;
+	int ret;
+
+	ret = asrc->m2m_check_format(OUT, f->fmt.audio.rate,
+				     f->fmt.audio.channels,
+				     f->fmt.audio.format);
+	return ret;
+}
+
+static int fsl_asrc_m2m_try_fmt_audio_out(struct file *file, void *fh,
+					  struct v4l2_format *f)
+{
+	struct fsl_asrc_pair *pair = fsl_asrc_m2m_fh_to_ctx(fh);
+	struct fsl_asrc *asrc = pair->asrc;
+	int ret;
+
+	ret = asrc->m2m_check_format(IN, f->fmt.audio.rate,
+				     f->fmt.audio.channels,
+				     f->fmt.audio.format);
+	return ret;
+}
+
+static const struct v4l2_ioctl_ops fsl_asrc_m2m_ioctl_ops = {
+	.vidioc_querycap		= fsl_asrc_m2m_querycap,
+
+	.vidioc_g_fmt_audio_cap		= fsl_asrc_m2m_g_fmt_aud_cap,
+	.vidioc_g_fmt_audio_out		= fsl_asrc_m2m_g_fmt_aud_out,
+
+	.vidioc_s_fmt_audio_cap		= fsl_asrc_m2m_s_fmt_aud_cap,
+	.vidioc_s_fmt_audio_out		= fsl_asrc_m2m_s_fmt_aud_out,
+
+	.vidioc_try_fmt_audio_cap	= fsl_asrc_m2m_try_fmt_audio_cap,
+	.vidioc_try_fmt_audio_out	= fsl_asrc_m2m_try_fmt_audio_out,
+
+	.vidioc_qbuf			= v4l2_m2m_ioctl_qbuf,
+	.vidioc_dqbuf			= v4l2_m2m_ioctl_dqbuf,
+
+	.vidioc_create_bufs		= v4l2_m2m_ioctl_create_bufs,
+	.vidioc_prepare_buf		= v4l2_m2m_ioctl_prepare_buf,
+	.vidioc_reqbufs			= v4l2_m2m_ioctl_reqbufs,
+	.vidioc_querybuf		= v4l2_m2m_ioctl_querybuf,
+	.vidioc_streamon		= v4l2_m2m_ioctl_streamon,
+	.vidioc_streamoff		= v4l2_m2m_ioctl_streamoff,
+};
+
+/* dma complete callback */
+static void fsl_asrc_input_dma_callback(void *data)
+{
+	struct fsl_asrc_pair *pair = (struct fsl_asrc_pair *)data;
+
+	complete(&pair->complete[V4L_OUT]);
+}
+
+/* dma complete callback */
+static void fsl_asrc_output_dma_callback(void *data)
+{
+	struct fsl_asrc_pair *pair = (struct fsl_asrc_pair *)data;
+
+	complete(&pair->complete[V4L_CAP]);
+}
+
+/* config dma channel */
+static int fsl_asrc_dmaconfig(struct fsl_asrc_pair *pair,
+			      struct dma_chan *chan,
+			      u32 dma_addr, dma_addr_t buf_addr, u32 buf_len,
+			      int dir, int width)
+{
+	struct fsl_asrc *asrc = pair->asrc;
+	struct device *dev = &asrc->pdev->dev;
+	struct dma_slave_config slave_config;
+	struct scatterlist sg[ASRC_M2M_SG_NUM];
+	enum dma_slave_buswidth buswidth;
+	unsigned int sg_len, max_period_size;
+	int ret, i;
+
+	switch (width) {
+	case 8:
+		buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE;
+		break;
+	case 16:
+		buswidth = DMA_SLAVE_BUSWIDTH_2_BYTES;
+		break;
+	case 24:
+		buswidth = DMA_SLAVE_BUSWIDTH_3_BYTES;
+		break;
+	case 32:
+		buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES;
+		break;
+	default:
+		dev_err(dev, "invalid word width\n");
+		return -EINVAL;
+	}
+
+	memset(&slave_config, 0, sizeof(slave_config));
+	if (dir == V4L_OUT) {
+		slave_config.direction = DMA_MEM_TO_DEV;
+		slave_config.dst_addr = dma_addr;
+		slave_config.dst_addr_width = buswidth;
+		slave_config.dst_maxburst = asrc->m2m_get_maxburst(IN, pair);
+	} else {
+		slave_config.direction = DMA_DEV_TO_MEM;
+		slave_config.src_addr = dma_addr;
+		slave_config.src_addr_width = buswidth;
+		slave_config.src_maxburst = asrc->m2m_get_maxburst(OUT, pair);
+	}
+
+	ret = dmaengine_slave_config(chan, &slave_config);
+	if (ret) {
+		dev_err(dev, "failed to config dmaengine for %s task: %d\n",
+			DIR_STR(dir), ret);
+		return -EINVAL;
+	}
+
+	max_period_size = rounddown(ASRC_M2M_PERIOD_SIZE, width * pair->channels / 8);
+	/* scatter gather mode */
+	sg_len = buf_len / max_period_size;
+	if (buf_len % max_period_size)
+		sg_len += 1;
+
+	sg_init_table(sg, sg_len);
+	for (i = 0; i < (sg_len - 1); i++) {
+		sg_dma_address(&sg[i]) = buf_addr + i * max_period_size;
+		sg_dma_len(&sg[i]) = max_period_size;
+	}
+	sg_dma_address(&sg[i]) = buf_addr + i * max_period_size;
+	sg_dma_len(&sg[i]) = buf_len - i * max_period_size;
+
+	pair->desc[dir] = dmaengine_prep_slave_sg(chan, sg, sg_len,
+						  slave_config.direction,
+						  DMA_PREP_INTERRUPT);
+	if (!pair->desc[dir]) {
+		dev_err(dev, "failed to prepare dmaengine for %s task\n", DIR_STR(dir));
+		return -EINVAL;
+	}
+
+	pair->desc[dir]->callback = ASRC_xPUT_DMA_CALLBACK(dir);
+	pair->desc[dir]->callback_param = pair;
+
+	return 0;
+}
+
+/* main function of converter */
+static void fsl_asrc_m2m_device_run(void *priv)
+{
+	struct fsl_asrc_pair *pair = priv;
+	struct fsl_asrc *asrc = pair->asrc;
+	struct device *dev = &asrc->pdev->dev;
+	enum asrc_pair_index index = pair->index;
+	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+	unsigned int out_buf_len;
+	unsigned int cap_dma_len;
+	unsigned int width;
+	u32 fifo_addr;
+	int ret;
+
+	src_buf = v4l2_m2m_next_src_buf(pair->fh.m2m_ctx);
+	dst_buf = v4l2_m2m_next_dst_buf(pair->fh.m2m_ctx);
+
+	width = snd_pcm_format_physical_width(pair->sample_format[V4L_OUT]);
+	fifo_addr = asrc->paddr + asrc->get_fifo_addr(IN, index);
+	out_buf_len = vb2_get_plane_payload(&src_buf->vb2_buf, 0);
+	if (out_buf_len < width * pair->channels / 8 ||
+	    out_buf_len > ASRC_M2M_BUFFER_SIZE ||
+	    out_buf_len % (width * pair->channels / 8)) {
+		dev_err(dev, "invalid out buffer size: [%d]\n", out_buf_len);
+		goto end;
+	}
+
+	/* dma config for output dma channel */
+	ret = fsl_asrc_dmaconfig(pair,
+				 pair->dma_chan[V4L_OUT],
+				 fifo_addr,
+				 vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0),
+				 out_buf_len, V4L_OUT, width);
+	if (ret) {
+		dev_err(dev, "out dma config error\n");
+		goto end;
+	}
+
+	width = snd_pcm_format_physical_width(pair->sample_format[V4L_CAP]);
+	fifo_addr = asrc->paddr + asrc->get_fifo_addr(OUT, index);
+	cap_dma_len = asrc->m2m_calc_out_len(pair, out_buf_len);
+	if (cap_dma_len > 0 && cap_dma_len <= ASRC_M2M_BUFFER_SIZE) {
+		/* dma config for capture dma channel */
+		ret = fsl_asrc_dmaconfig(pair,
+					 pair->dma_chan[V4L_CAP],
+					 fifo_addr,
+					 vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0),
+					 cap_dma_len, V4L_CAP, width);
+		if (ret) {
+			dev_err(dev, "cap dma config error\n");
+			goto end;
+		}
+	} else if (cap_dma_len > ASRC_M2M_BUFFER_SIZE) {
+		dev_err(dev, "cap buffer size error\n");
+		goto end;
+	}
+
+	reinit_completion(&pair->complete[V4L_OUT]);
+	reinit_completion(&pair->complete[V4L_CAP]);
+
+	/* Submit DMA request */
+	dmaengine_submit(pair->desc[V4L_OUT]);
+	dma_async_issue_pending(pair->desc[V4L_OUT]->chan);
+	if (cap_dma_len > 0) {
+		dmaengine_submit(pair->desc[V4L_CAP]);
+		dma_async_issue_pending(pair->desc[V4L_CAP]->chan);
+	}
+
+	asrc->m2m_start_part_two(pair);
+
+	if (!wait_for_completion_interruptible_timeout(&pair->complete[V4L_OUT], 10 * HZ)) {
+		dev_err(dev, "out DMA task timeout\n");
+		goto end;
+	}
+
+	if (cap_dma_len > 0) {
+		if (!wait_for_completion_interruptible_timeout(&pair->complete[V4L_CAP], 10 * HZ)) {
+			dev_err(dev, "cap DMA task timeout\n");
+			goto end;
+		}
+	}
+
+	/* read the last words from FIFO */
+	fsl_asrc_read_last_fifo(pair, vb2_plane_vaddr(&dst_buf->vb2_buf, 0), &cap_dma_len);
+	/* update payload length for capture */
+	vb2_set_plane_payload(&dst_buf->vb2_buf, 0, cap_dma_len);
+
+end:
+	src_buf = v4l2_m2m_src_buf_remove(pair->fh.m2m_ctx);
+	dst_buf = v4l2_m2m_dst_buf_remove(pair->fh.m2m_ctx);
+
+	v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_DONE);
+	v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_DONE);
+
+	v4l2_m2m_job_finish(asrc->m2m_dev, pair->fh.m2m_ctx);
+}
+
+static int fsl_asrc_m2m_job_ready(void *priv)
+{
+	struct fsl_asrc_pair *pair = priv;
+
+	if (v4l2_m2m_num_src_bufs_ready(pair->fh.m2m_ctx) > 0 &&
+	    v4l2_m2m_num_dst_bufs_ready(pair->fh.m2m_ctx) > 0) {
+		return 1;
+	}
+
+	return 0;
+}
+
+static const struct v4l2_m2m_ops fsl_asrc_m2m_ops = {
+	.job_ready = fsl_asrc_m2m_job_ready,
+	.device_run = fsl_asrc_m2m_device_run,
+};
+
+int fsl_asrc_m2m_probe(struct fsl_asrc *asrc)
+{
+	struct device *dev = &asrc->pdev->dev;
+	int ret;
+
+	ret = v4l2_device_register(dev, &asrc->v4l2_dev);
+	if (ret) {
+		dev_err(dev, "failed to register v4l2 device\n");
+		goto err_register;
+	}
+
+	asrc->m2m_dev = v4l2_m2m_init(&fsl_asrc_m2m_ops);
+	if (IS_ERR(asrc->m2m_dev)) {
+		dev_err(dev, "failed to init v4l2 m2m device\n");
+		ret = PTR_ERR(asrc->m2m_dev);
+		goto err_m2m;
+	}
+
+	asrc->dec_vdev = video_device_alloc();
+	if (!asrc->dec_vdev) {
+		dev_err(dev, "failed to allocate video device\n");
+		ret = -ENOMEM;
+		goto err_vdev_alloc;
+	}
+
+	mutex_init(&asrc->mlock);
+
+	asrc->dec_vdev->fops = &fsl_asrc_m2m_fops;
+	asrc->dec_vdev->ioctl_ops = &fsl_asrc_m2m_ioctl_ops;
+	asrc->dec_vdev->minor = -1;
+	asrc->dec_vdev->release = video_device_release;
+	asrc->dec_vdev->lock = &asrc->mlock; /* lock for ioctl serialization */
+	asrc->dec_vdev->v4l2_dev = &asrc->v4l2_dev;
+	asrc->dec_vdev->vfl_dir = VFL_DIR_M2M;
+	asrc->dec_vdev->device_caps = V4L2_CAP_AUDIO | V4L2_CAP_STREAMING;
+
+	ret = video_register_device(asrc->dec_vdev, VFL_TYPE_AUDIO, -1);
+	if (ret) {
+		dev_err(dev, "failed to register video device\n");
+		goto err_vdev_register;
+	}
+
+	video_set_drvdata(asrc->dec_vdev, asrc);
+
+	return 0;
+
+err_vdev_register:
+	video_device_release(asrc->dec_vdev);
+err_vdev_alloc:
+	v4l2_m2m_release(asrc->m2m_dev);
+err_m2m:
+	v4l2_device_unregister(&asrc->v4l2_dev);
+err_register:
+	return ret;
+}
+EXPORT_SYMBOL_GPL(fsl_asrc_m2m_probe);
+
+int fsl_asrc_m2m_remove(struct platform_device *pdev)
+{
+	struct fsl_asrc *asrc = dev_get_drvdata(&pdev->dev);
+
+	video_unregister_device(asrc->dec_vdev);
+	video_device_release(asrc->dec_vdev);
+	v4l2_m2m_release(asrc->m2m_dev);
+	v4l2_device_unregister(&asrc->v4l2_dev);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(fsl_asrc_m2m_remove);
+
+/* suspend callback for m2m */
+int fsl_asrc_m2m_suspend(struct fsl_asrc *asrc)
+{
+	struct fsl_asrc_pair *pair;
+	unsigned long lock_flags;
+	int i;
+
+	for (i = 0; i < PAIR_CTX_NUM; i++) {
+		spin_lock_irqsave(&asrc->lock, lock_flags);
+		pair = asrc->pair[i];
+		if (!pair || !pair->fh.vdev) {
+			spin_unlock_irqrestore(&asrc->lock, lock_flags);
+			continue;
+		}
+		if (!completion_done(&pair->complete[V4L_OUT])) {
+			if (pair->dma_chan[V4L_OUT])
+				dmaengine_terminate_all(pair->dma_chan[V4L_OUT]);
+			fsl_asrc_input_dma_callback((void *)pair);
+		}
+		if (!completion_done(&pair->complete[V4L_CAP])) {
+			if (pair->dma_chan[V4L_CAP])
+				dmaengine_terminate_all(pair->dma_chan[V4L_CAP]);
+			fsl_asrc_output_dma_callback((void *)pair);
+		}
+
+		if (asrc->m2m_pair_suspend)
+			asrc->m2m_pair_suspend(pair);
+
+		spin_unlock_irqrestore(&asrc->lock, lock_flags);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(fsl_asrc_m2m_suspend);
+
+int fsl_asrc_m2m_resume(struct fsl_asrc *asrc)
+{
+	struct fsl_asrc_pair *pair;
+	unsigned long lock_flags;
+	int i;
+
+	for (i = 0; i < PAIR_CTX_NUM; i++) {
+		spin_lock_irqsave(&asrc->lock, lock_flags);
+		pair = asrc->pair[i];
+		if (!pair || !pair->fh.vdev) {
+			spin_unlock_irqrestore(&asrc->lock, lock_flags);
+			continue;
+		}
+		if (asrc->m2m_pair_resume)
+			asrc->m2m_pair_resume(pair);
+
+		spin_unlock_irqrestore(&asrc->lock, lock_flags);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(fsl_asrc_m2m_resume);
+
+MODULE_DESCRIPTION("Freescale ASRC M2M driver");
+MODULE_LICENSE("GPL");
diff --git a/sound/soc/fsl/fsl_asrc_m2m.h b/sound/soc/fsl/fsl_asrc_m2m.h
new file mode 100644
index 000000000000..a5b91ed4d3b0
--- /dev/null
+++ b/sound/soc/fsl/fsl_asrc_m2m.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019-2023 NXP
+ */
+
+#ifndef _FSL_ASRC_M2M_H
+#define _FSL_ASRC_M2M_H
+
+#include <linux/dma/imx-dma.h>
+#include "fsl_asrc_common.h"
+
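+/*
+ * Mapping between the V4L2 queues and the ASRC directions: the V4L2
+ * OUTPUT queue feeds the ASRC input (IN) and the V4L2 CAPTURE queue
+ * drains the ASRC output (OUT).
+ */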
+#define V4L_CAP OUT
+#define V4L_OUT IN
+
+#define ASRC_xPUT_DMA_CALLBACK(dir) \
+	(((dir) == V4L_OUT) ? fsl_asrc_input_dma_callback \
+	 : fsl_asrc_output_dma_callback)
+
+#define DIR_STR(dir) ((dir) == V4L_OUT ? "out" : "cap")
+
+#if IS_ENABLED(CONFIG_SND_SOC_FSL_ASRC_M2M)
+int fsl_asrc_m2m_probe(struct fsl_asrc *asrc);
+int fsl_asrc_m2m_remove(struct platform_device *pdev);
+int fsl_asrc_m2m_suspend(struct fsl_asrc *asrc);
+int fsl_asrc_m2m_resume(struct fsl_asrc *asrc);
+#else
+static inline int fsl_asrc_m2m_probe(struct fsl_asrc *asrc)
+{
+	return 0;
+}
+
+static inline int fsl_asrc_m2m_remove(struct platform_device *pdev)
+{
+	return 0;
+}
+
+static inline int fsl_asrc_m2m_suspend(struct fsl_asrc *asrc)
+{
+	return 0;
+}
+
+static inline int fsl_asrc_m2m_resume(struct fsl_asrc *asrc)
+{
+	return 0;
+}
+#endif
+
+#endif /* _FSL_ASRC_M2M_H */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 5/6] ASoC: fsl_asrc: enable memory to memory function
  2023-06-29  1:37 [PATCH 0/6] Add audio support in v4l2 framework Shengjiu Wang
                   ` (3 preceding siblings ...)
  2023-06-29  1:37 ` [PATCH 4/6] ASoC: fsl_asrc: Add memory to memory driver Shengjiu Wang
@ 2023-06-29  1:37 ` Shengjiu Wang
  2023-06-29  1:37 ` [PATCH 6/6] ASoC: fsl_easrc: " Shengjiu Wang
  5 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-06-29  1:37 UTC (permalink / raw)
  To: tfiga, m.szyprowski, mchehab, linux-media, linux-kernel,
	shengjiu.wang, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	broonie, perex, tiwai, alsa-devel, linuxppc-dev

Integrate the memory to memory feature into the ASRC driver:
call m2m probe(), remove(), suspend() and resume()
in the corresponding driver callbacks.

Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 sound/soc/fsl/fsl_asrc.c | 37 +++++++++++++++++++++++++++++++++++--
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
index 30190ccb74e7..bd5f134e3473 100644
--- a/sound/soc/fsl/fsl_asrc.c
+++ b/sound/soc/fsl/fsl_asrc.c
@@ -17,6 +17,7 @@
 #include <sound/pcm_params.h>
 
 #include "fsl_asrc.h"
+#include "fsl_asrc_m2m.h"
 
 #define IDEAL_RATIO_DECIMAL_DEPTH 26
 #define DIVIDER_NUM  64
@@ -1380,6 +1381,13 @@ static int fsl_asrc_probe(struct platform_device *pdev)
 		goto err_pm_get_sync;
 	}
 
+	/* probe m2m feature */
+	ret = fsl_asrc_m2m_probe(asrc);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to init m2m device %d\n", ret);
+		goto err_pm_get_sync;
+	}
+
 	return 0;
 
 err_pm_get_sync:
@@ -1392,6 +1400,9 @@ static int fsl_asrc_probe(struct platform_device *pdev)
 
 static void fsl_asrc_remove(struct platform_device *pdev)
 {
+	/* remove m2m feature */
+	fsl_asrc_m2m_remove(pdev);
+
 	pm_runtime_disable(&pdev->dev);
 	if (!pm_runtime_status_suspended(&pdev->dev))
 		fsl_asrc_runtime_suspend(&pdev->dev);
@@ -1493,10 +1504,32 @@ static int fsl_asrc_runtime_suspend(struct device *dev)
 	return 0;
 }
 
+static int __maybe_unused fsl_asrc_suspend(struct device *dev)
+{
+	struct fsl_asrc *asrc = dev_get_drvdata(dev);
+
+	/* suspend asrc m2m */
+	fsl_asrc_m2m_suspend(asrc);
+
+	return pm_runtime_force_suspend(dev);
+}
+
+static int __maybe_unused fsl_asrc_resume(struct device *dev)
+{
+	struct fsl_asrc *asrc = dev_get_drvdata(dev);
+	int ret;
+
+	ret = pm_runtime_force_resume(dev);
+
+	/* resume asrc m2m */
+	fsl_asrc_m2m_resume(asrc);
+
+	return ret;
+}
+
 static const struct dev_pm_ops fsl_asrc_pm = {
 	SET_RUNTIME_PM_OPS(fsl_asrc_runtime_suspend, fsl_asrc_runtime_resume, NULL)
-	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
-				pm_runtime_force_resume)
+	SET_SYSTEM_SLEEP_PM_OPS(fsl_asrc_suspend, fsl_asrc_resume)
 };
 
 static const struct fsl_asrc_soc_data fsl_asrc_imx35_data = {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH 6/6] ASoC: fsl_easrc: enable memory to memory function
  2023-06-29  1:37 [PATCH 0/6] Add audio support in v4l2 framework Shengjiu Wang
                   ` (4 preceding siblings ...)
  2023-06-29  1:37 ` [PATCH 5/6] ASoC: fsl_asrc: enable memory to memory function Shengjiu Wang
@ 2023-06-29  1:37 ` Shengjiu Wang
  5 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-06-29  1:37 UTC (permalink / raw)
  To: tfiga, m.szyprowski, mchehab, linux-media, linux-kernel,
	shengjiu.wang, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	broonie, perex, tiwai, alsa-devel, linuxppc-dev

Integrate the memory to memory feature into the EASRC driver:
call m2m probe(), remove(), suspend() and resume()
in the corresponding driver callbacks.

Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 sound/soc/fsl/fsl_easrc.c | 41 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)

diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
index b735b24badc2..bc5404627032 100644
--- a/sound/soc/fsl/fsl_easrc.c
+++ b/sound/soc/fsl/fsl_easrc.c
@@ -29,6 +29,7 @@
 #include <sound/core.h>
 
 #include "fsl_easrc.h"
+#include "fsl_asrc_m2m.h"
 #include "imx-pcm.h"
 
 #define FSL_EASRC_FORMATS       (SNDRV_PCM_FMTBIT_S16_LE | \
@@ -2190,11 +2191,21 @@ static int fsl_easrc_probe(struct platform_device *pdev)
 		return ret;
 	}
 
+	/* probe the m2m feature */
+	ret = fsl_asrc_m2m_probe(easrc);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to init m2m device %d\n", ret);
+		return ret;
+	}
+
 	return 0;
 }
 
 static void fsl_easrc_remove(struct platform_device *pdev)
 {
+	/* remove the m2m feature */
+	fsl_asrc_m2m_remove(pdev);
+
 	pm_runtime_disable(&pdev->dev);
 }
 
@@ -2295,12 +2306,38 @@ static __maybe_unused int fsl_easrc_runtime_resume(struct device *dev)
 	return ret;
 }
 
+static int __maybe_unused fsl_easrc_suspend(struct device *dev)
+{
+	struct fsl_asrc *easrc = dev_get_drvdata(dev);
+	int ret;
+
+	/* suspend m2m function first */
+	fsl_asrc_m2m_suspend(easrc);
+
+	ret = pm_runtime_force_suspend(dev);
+
+	return ret;
+}
+
+static int __maybe_unused fsl_easrc_resume(struct device *dev)
+{
+	struct fsl_asrc *easrc = dev_get_drvdata(dev);
+	int ret;
+
+	ret = pm_runtime_force_resume(dev);
+
+	/* resume m2m function */
+	fsl_asrc_m2m_resume(easrc);
+
+	return ret;
+}
+
 static const struct dev_pm_ops fsl_easrc_pm_ops = {
 	SET_RUNTIME_PM_OPS(fsl_easrc_runtime_suspend,
 			   fsl_easrc_runtime_resume,
 			   NULL)
-	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
-				pm_runtime_force_resume)
+	SET_SYSTEM_SLEEP_PM_OPS(fsl_easrc_suspend,
+				fsl_easrc_resume)
 };
 
 static struct platform_driver fsl_easrc_driver = {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH 3/6] ASoC: fsl_easrc: define functions for memory to memory usage
  2023-06-29  1:37 ` [PATCH 3/6] ASoC: fsl_easrc: " Shengjiu Wang
@ 2023-06-29 10:59     ` Fabio Estevam
  0 siblings, 0 replies; 41+ messages in thread
From: Fabio Estevam @ 2023-06-29 10:59 UTC (permalink / raw)
  To: Shengjiu Wang
  Cc: tfiga, m.szyprowski, mchehab, linux-media, linux-kernel,
	shengjiu.wang, Xiubo.Lee, nicoleotsuka, lgirdwood, broonie,
	perex, tiwai, alsa-devel, linuxppc-dev

Hi Shengjiu,

On Wed, Jun 28, 2023 at 11:10 PM Shengjiu Wang <shengjiu.wang@nxp.com> wrote:
>
> ASRC can be used on memory to memory case, define several
> functions for m2m usage and export them as function pointer.
>
> Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>

Could you please explain what is the benefit of using M2M in the EASRC driver?

A few weeks ago, an imx8mn user reported that the EASRC with the
mainline kernel introduces huge delays.

Does M2M help with this aspect?

Thanks

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 4/6] ASoC: fsl_asrc: Add memory to memory driver
  2023-06-29  1:37 ` [PATCH 4/6] ASoC: fsl_asrc: Add memory to memory driver Shengjiu Wang
@ 2023-06-29 11:38     ` Mark Brown
  0 siblings, 0 replies; 41+ messages in thread
From: Mark Brown @ 2023-06-29 11:38 UTC (permalink / raw)
  To: Shengjiu Wang
  Cc: tfiga, m.szyprowski, mchehab, linux-media, linux-kernel,
	shengjiu.wang, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	perex, tiwai, alsa-devel, linuxppc-dev

[-- Attachment #1: Type: text/plain, Size: 937 bytes --]

On Thu, Jun 29, 2023 at 09:37:51AM +0800, Shengjiu Wang wrote:
> Implement the ASRC memory to memory function using
> the v4l2 framework, user can use this function with
> v4l2 ioctl interface.
> 
> User send the output and capture buffer to driver and
> driver store the converted data to the capture buffer.
> 
> This feature can be shared by ASRC and EASRC drivers
> 
> Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
> ---
>  sound/soc/fsl/Kconfig        |  13 +
>  sound/soc/fsl/Makefile       |   2 +
>  sound/soc/fsl/fsl_asrc_m2m.c | 878 +++++++++++++++++++++++++++++++++++
>  sound/soc/fsl/fsl_asrc_m2m.h |  48 ++

This feels like the bit where we interface v4l to ASoC should be a
separate library, there shouldn't be anything device specific about
getting an audio stream into a block of memory.  I'm thinking something
like the way we handle dmaengine here.

I've not dug into the code yet though.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 3/6] ASoC: fsl_easrc: define functions for memory to memory usage
  2023-06-29 10:59     ` Fabio Estevam
@ 2023-06-30  3:23       ` Shengjiu Wang
  -1 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-06-30  3:23 UTC (permalink / raw)
  To: Fabio Estevam
  Cc: Shengjiu Wang, tfiga, m.szyprowski, mchehab, linux-media,
	linux-kernel, Xiubo.Lee, nicoleotsuka, lgirdwood, broonie, perex,
	tiwai, alsa-devel, linuxppc-dev

On Thu, Jun 29, 2023 at 7:00 PM Fabio Estevam <festevam@gmail.com> wrote:

> Hi Shengjiu,
>
> On Wed, Jun 28, 2023 at 11:10 PM Shengjiu Wang <shengjiu.wang@nxp.com>
> wrote:
> >
> > ASRC can be used on memory to memory case, define several
> > functions for m2m usage and export them as function pointer.
> >
> > Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
>
> Could you please explain what is the benefit of using M2M in the EASRC
> driver?
>
Users may want to get the ASRC output in the user space, then do mixing
with other streams before sending to DAC. So this patch-set is to use
the v4l2 API for this usage, because there is no such API in ASoC.
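
For reference, the user space flow I have in mind is roughly the sketch
below (not part of this series; error handling dropped). It assumes the
device node is /dev/audio0 and that fmt.audio.format carries an ALSA
SNDRV_PCM_FORMAT_* value, since the driver passes it to
snd_pcm_format_physical_width():

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>
#include <sound/asound.h>

/* request one MMAP buffer on the given queue and map it */
static void *setup_queue(int fd, int type, unsigned int *len)
{
	struct v4l2_requestbuffers req = { .count = 1, .type = type,
					   .memory = V4L2_MEMORY_MMAP };
	struct v4l2_buffer buf = { .index = 0, .type = type,
				   .memory = V4L2_MEMORY_MMAP };

	ioctl(fd, VIDIOC_REQBUFS, &req);
	ioctl(fd, VIDIOC_QUERYBUF, &buf);
	*len = buf.length;
	return mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, buf.m.offset);
}

int main(void)
{
	struct v4l2_format out = { .type = V4L2_BUF_TYPE_AUDIO_OUTPUT };
	struct v4l2_format cap = { .type = V4L2_BUF_TYPE_AUDIO_CAPTURE };
	int otype = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	int ctype = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	struct v4l2_buffer obuf = { .type = otype, .memory = V4L2_MEMORY_MMAP };
	struct v4l2_buffer cbuf = { .type = ctype, .memory = V4L2_MEMORY_MMAP };
	unsigned int olen, clen;
	int fd = open("/dev/audio0", O_RDWR);
	void *src, *dst;

	/* OUTPUT queue carries the source PCM: 48kHz stereo S16_LE */
	out.fmt.audio.rate = 48000;
	out.fmt.audio.channels = 2;
	out.fmt.audio.format = SNDRV_PCM_FORMAT_S16_LE;
	ioctl(fd, VIDIOC_S_FMT, &out);

	/* CAPTURE queue returns the converted PCM at 44.1kHz */
	cap.fmt.audio.rate = 44100;
	cap.fmt.audio.channels = 2;
	cap.fmt.audio.format = SNDRV_PCM_FORMAT_S16_LE;
	ioctl(fd, VIDIOC_S_FMT, &cap);

	src = setup_queue(fd, otype, &olen);
	dst = setup_queue(fd, ctype, &clen);

	/* ... fill src with input samples here ... */
	obuf.bytesused = olen;
	ioctl(fd, VIDIOC_QBUF, &obuf);
	ioctl(fd, VIDIOC_QBUF, &cbuf);

	ioctl(fd, VIDIOC_STREAMON, &otype);
	ioctl(fd, VIDIOC_STREAMON, &ctype);

	/* when the job completes, dst holds cbuf.bytesused converted bytes */
	ioctl(fd, VIDIOC_DQBUF, &cbuf);
	ioctl(fd, VIDIOC_DQBUF, &obuf);

	munmap(src, olen);
	munmap(dst, clen);
	close(fd);
	return 0;
}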


> A few weeks ago, an imx8mn user reported that the EASRC with the
> mainline kernel introduces huge delays.
>
> Does M2M help with this aspect?
>
No, M2M can't help with the delay issue. The delay may be caused
by the buffer size or by the prefilled data needed by EASRC.

Best regards
wang shengjiu

>
> Thanks
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 4/6] ASoC: fsl_asrc: Add memory to memory driver
  2023-06-29 11:38     ` Mark Brown
@ 2023-06-30  3:37       ` Shengjiu Wang
  -1 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-06-30  3:37 UTC (permalink / raw)
  To: Mark Brown
  Cc: Shengjiu Wang, tfiga, m.szyprowski, mchehab, linux-media,
	linux-kernel, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	perex, tiwai, alsa-devel, linuxppc-dev

On Thu, Jun 29, 2023 at 7:39 PM Mark Brown <broonie@kernel.org> wrote:

> On Thu, Jun 29, 2023 at 09:37:51AM +0800, Shengjiu Wang wrote:
> > Implement the ASRC memory to memory function using
> > the v4l2 framework, user can use this function with
> > v4l2 ioctl interface.
> >
> > User send the output and capture buffer to driver and
> > driver store the converted data to the capture buffer.
> >
> > This feature can be shared by ASRC and EASRC drivers
> >
> > Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
> > ---
> >  sound/soc/fsl/Kconfig        |  13 +
> >  sound/soc/fsl/Makefile       |   2 +
> >  sound/soc/fsl/fsl_asrc_m2m.c | 878 +++++++++++++++++++++++++++++++++++
> >  sound/soc/fsl/fsl_asrc_m2m.h |  48 ++
>
> This feels like the bit where we interface v4l to ASoC should be a
> separate library, there shouldn't be anything device specific about
> getting an audio stream into a block of memory.  I'm thinking something
> like the way we handle dmaengine here.
>
> I've not dug into the code yet though.
>

Users may want to get the ASRC output in the user space, then
do mixing with other streams before sending to ALSA.

As there is no such API in ASoC, the best interface I found is
V4L2, but I need a small modification of the V4L2 API to extend
it for audio usage.

Could you please suggest more about the "separate library"?
Should I place this "sound/soc/fsl/fsl_asrc_m2m.c" in another folder?

best regards
wang shengjiu

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-06-29  1:37 ` [PATCH 1/6] media: v4l2: Add audio capture and output support Shengjiu Wang
@ 2023-06-30 10:05     ` Sakari Ailus
  0 siblings, 0 replies; 41+ messages in thread
From: Sakari Ailus @ 2023-06-30 10:05 UTC (permalink / raw)
  To: Shengjiu Wang
  Cc: tfiga, m.szyprowski, mchehab, linux-media, linux-kernel,
	shengjiu.wang, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	broonie, perex, tiwai, alsa-devel, linuxppc-dev, hverkuil,
	Jacopo Mondi

Hi Shengjiu,

On Thu, Jun 29, 2023 at 09:37:48AM +0800, Shengjiu Wang wrote:
> Audio signal processing has the requirement for memory to
> memory similar as Video.
> 
> This patch is to add this support in v4l2 framework, defined
> new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
> V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
> for audio case usage.

Why are you proposing to add this to V4L2 framework instead of doing this
within ALSA?

Also cc Hans and Jacopo.

> 
> The created audio device is named "/dev/audioX".
> 
> Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
> ---
>  .../media/common/videobuf2/videobuf2-v4l2.c   |  4 ++
>  drivers/media/v4l2-core/v4l2-dev.c            | 17 ++++++
>  drivers/media/v4l2-core/v4l2-ioctl.c          | 52 +++++++++++++++++++
>  include/media/v4l2-dev.h                      |  2 +
>  include/media/v4l2-ioctl.h                    | 34 ++++++++++++
>  include/uapi/linux/videodev2.h                | 19 +++++++
>  6 files changed, 128 insertions(+)
> 
> diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> index c7a54d82a55e..12f2be2773a2 100644
> --- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
> +++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> @@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
>  	case V4L2_BUF_TYPE_META_OUTPUT:
>  		requested_sizes[0] = f->fmt.meta.buffersize;
>  		break;
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		requested_sizes[0] = f->fmt.audio.buffersize;
> +		break;
>  	default:
>  		return -EINVAL;
>  	}
> diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
> index f81279492682..67484f4c6eaf 100644
> --- a/drivers/media/v4l2-core/v4l2-dev.c
> +++ b/drivers/media/v4l2-core/v4l2-dev.c
> @@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
>  	bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
>  	bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
>  		       (vdev->device_caps & meta_caps);
> +	bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
>  	bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
>  	bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
>  	bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
> @@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct video_device *vdev)
>  		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out);
>  		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out);
>  	}
> +	if (is_audio && is_rx) {
> +		/* audio capture specific ioctls */
> +		SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap);
> +		SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
> +		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
> +		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap);
> +	} else if (is_audio && is_tx) {
> +		/* audio output specific ioctls */
> +		SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out);
> +		SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
> +		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
> +		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out);
> +	}
>  	if (is_vbi) {
>  		/* vbi specific ioctls */
>  		if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
> @@ -927,6 +941,9 @@ int __video_register_device(struct video_device *vdev,
>  	case VFL_TYPE_TOUCH:
>  		name_base = "v4l-touch";
>  		break;
> +	case VFL_TYPE_AUDIO:
> +		name_base = "audio";
> +		break;
>  	default:
>  		pr_err("%s called with unknown type: %d\n",
>  		       __func__, type);
> diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> index a858acea6547..26bc4b0d8ef0 100644
> --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> @@ -188,6 +188,8 @@ const char *v4l2_type_names[] = {
>  	[V4L2_BUF_TYPE_SDR_OUTPUT]         = "sdr-out",
>  	[V4L2_BUF_TYPE_META_CAPTURE]       = "meta-cap",
>  	[V4L2_BUF_TYPE_META_OUTPUT]	   = "meta-out",
> +	[V4L2_BUF_TYPE_AUDIO_CAPTURE]      = "audio-cap",
> +	[V4L2_BUF_TYPE_AUDIO_OUTPUT]	   = "audio-out",
>  };
>  EXPORT_SYMBOL(v4l2_type_names);
>  
> @@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only)
>  	const struct v4l2_sliced_vbi_format *sliced;
>  	const struct v4l2_window *win;
>  	const struct v4l2_meta_format *meta;
> +	const struct v4l2_audio_format *audio;
>  	u32 pixelformat;
>  	u32 planes;
>  	unsigned i;
> @@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool write_only)
>  		pr_cont(", dataformat=%p4cc, buffersize=%u\n",
>  			&pixelformat, meta->buffersize);
>  		break;
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		audio = &p->fmt.audio;
> +		pr_cont(", rate=%u, format=%u, channels=%u, buffersize=%u\n",
> +			audio->rate, audio->format, audio->channels, audio->buffersize);
> +		break;
>  	}
>  }
>  
> @@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
>  	bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
>  	bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO &&
>  		       (vfd->device_caps & meta_caps);
> +	bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
>  	bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
>  	bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
>  
> @@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
>  		if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out)
>  			return 0;
>  		break;
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +		if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
> +			return 0;
> +		break;
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
> +			return 0;
> +		break;
>  	default:
>  		break;
>  	}
> @@ -1592,6 +1610,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
>  			break;
>  		ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg);
>  		break;
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +		if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
> +			break;
> +		ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
> +		break;
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		if (unlikely(!ops->vidioc_enum_fmt_audio_out))
> +			break;
> +		ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
> +		break;
>  	}
>  	if (ret == 0)
>  		v4l_fill_fmtdesc(p);
> @@ -1668,6 +1696,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
>  		return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
>  	case V4L2_BUF_TYPE_META_OUTPUT:
>  		return ops->vidioc_g_fmt_meta_out(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +		return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		return ops->vidioc_g_fmt_audio_out(file, fh, arg);
>  	}
>  	return -EINVAL;
>  }
> @@ -1779,6 +1811,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
>  			break;
>  		memset_after(p, 0, fmt.meta);
>  		return ops->vidioc_s_fmt_meta_out(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +		if (unlikely(!ops->vidioc_s_fmt_audio_cap))
> +			break;
> +		memset_after(p, 0, fmt.audio);
> +		return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		if (unlikely(!ops->vidioc_s_fmt_audio_out))
> +			break;
> +		memset_after(p, 0, fmt.audio);
> +		return ops->vidioc_s_fmt_audio_out(file, fh, arg);
>  	}
>  	return -EINVAL;
>  }
> @@ -1887,6 +1929,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
>  			break;
>  		memset_after(p, 0, fmt.meta);
>  		return ops->vidioc_try_fmt_meta_out(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +		if (unlikely(!ops->vidioc_try_fmt_audio_cap))
> +			break;
> +		memset_after(p, 0, fmt.audio);
> +		return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		if (unlikely(!ops->vidioc_try_fmt_audio_out))
> +			break;
> +		memset_after(p, 0, fmt.audio);
> +		return ops->vidioc_try_fmt_audio_out(file, fh, arg);
>  	}
>  	return -EINVAL;
>  }
> diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
> index e0a13505f88d..0924e6d1dab1 100644
> --- a/include/media/v4l2-dev.h
> +++ b/include/media/v4l2-dev.h
> @@ -30,6 +30,7 @@
>   * @VFL_TYPE_SUBDEV:	for V4L2 subdevices
>   * @VFL_TYPE_SDR:	for Software Defined Radio tuners
>   * @VFL_TYPE_TOUCH:	for touch sensors
> + * @VFL_TYPE_AUDIO:	for audio input/output devices
>   * @VFL_TYPE_MAX:	number of VFL types, must always be last in the enum
>   */
>  enum vfl_devnode_type {
> @@ -39,6 +40,7 @@ enum vfl_devnode_type {
>  	VFL_TYPE_SUBDEV,
>  	VFL_TYPE_SDR,
>  	VFL_TYPE_TOUCH,
> +	VFL_TYPE_AUDIO,
>  	VFL_TYPE_MAX /* Shall be the last one */
>  };
>  
> diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
> index edb733f21604..f840cf740ce1 100644
> --- a/include/media/v4l2-ioctl.h
> +++ b/include/media/v4l2-ioctl.h
> @@ -45,6 +45,12 @@ struct v4l2_fh;
>   * @vidioc_enum_fmt_meta_out: pointer to the function that implements
>   *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
>   *	for metadata output
> + * @vidioc_enum_fmt_audio_cap: pointer to the function that implements
> + *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> + *	for audio capture
> + * @vidioc_enum_fmt_audio_out: pointer to the function that implements
> + *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> + *	for audio output
>   * @vidioc_g_fmt_vid_cap: pointer to the function that implements
>   *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
>   *	in single plane mode
> @@ -79,6 +85,10 @@ struct v4l2_fh;
>   *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
>   * @vidioc_g_fmt_meta_out: pointer to the function that implements
>   *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> + * @vidioc_g_fmt_audio_cap: pointer to the function that implements
> + *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> + * @vidioc_g_fmt_audio_out: pointer to the function that implements
> + *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
>   * @vidioc_s_fmt_vid_cap: pointer to the function that implements
>   *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
>   *	in single plane mode
> @@ -113,6 +123,10 @@ struct v4l2_fh;
>   *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
>   * @vidioc_s_fmt_meta_out: pointer to the function that implements
>   *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> + * @vidioc_s_fmt_audio_cap: pointer to the function that implements
> + *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> + * @vidioc_s_fmt_audio_out: pointer to the function that implements
> + *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
>   * @vidioc_try_fmt_vid_cap: pointer to the function that implements
>   *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
>   *	in single plane mode
> @@ -149,6 +163,10 @@ struct v4l2_fh;
>   *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
>   * @vidioc_try_fmt_meta_out: pointer to the function that implements
>   *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> + * @vidioc_try_fmt_audio_cap: pointer to the function that implements
> + *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> + * @vidioc_try_fmt_audio_out: pointer to the function that implements
> + *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
>   * @vidioc_reqbufs: pointer to the function that implements
>   *	:ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
>   * @vidioc_querybuf: pointer to the function that implements
> @@ -315,6 +333,10 @@ struct v4l2_ioctl_ops {
>  					struct v4l2_fmtdesc *f);
>  	int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh,
>  					struct v4l2_fmtdesc *f);
> +	int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
> +					 struct v4l2_fmtdesc *f);
> +	int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
> +					 struct v4l2_fmtdesc *f);
>  
>  	/* VIDIOC_G_FMT handlers */
>  	int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
> @@ -345,6 +367,10 @@ struct v4l2_ioctl_ops {
>  				     struct v4l2_format *f);
>  	int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh,
>  				     struct v4l2_format *f);
> +	int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
> +				      struct v4l2_format *f);
> +	int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
> +				      struct v4l2_format *f);
>  
>  	/* VIDIOC_S_FMT handlers */
>  	int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
> @@ -375,6 +401,10 @@ struct v4l2_ioctl_ops {
>  				     struct v4l2_format *f);
>  	int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh,
>  				     struct v4l2_format *f);
> +	int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
> +				      struct v4l2_format *f);
> +	int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
> +				      struct v4l2_format *f);
>  
>  	/* VIDIOC_TRY_FMT handlers */
>  	int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
> @@ -405,6 +435,10 @@ struct v4l2_ioctl_ops {
>  				       struct v4l2_format *f);
>  	int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh,
>  				       struct v4l2_format *f);
> +	int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
> +					struct v4l2_format *f);
> +	int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
> +					struct v4l2_format *f);
>  
>  	/* Buffer handlers */
>  	int (*vidioc_reqbufs)(struct file *file, void *fh,
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index aee75eb9e686..a7af28f4c8c3 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -153,6 +153,8 @@ enum v4l2_buf_type {
>  	V4L2_BUF_TYPE_SDR_OUTPUT           = 12,
>  	V4L2_BUF_TYPE_META_CAPTURE         = 13,
>  	V4L2_BUF_TYPE_META_OUTPUT	   = 14,
> +	V4L2_BUF_TYPE_AUDIO_CAPTURE        = 15,
> +	V4L2_BUF_TYPE_AUDIO_OUTPUT         = 16,
>  	/* Deprecated, do not use */
>  	V4L2_BUF_TYPE_PRIVATE              = 0x80,
>  };
> @@ -169,6 +171,7 @@ enum v4l2_buf_type {
>  	 || (type) == V4L2_BUF_TYPE_VBI_OUTPUT			\
>  	 || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT		\
>  	 || (type) == V4L2_BUF_TYPE_SDR_OUTPUT			\
> +	 || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT		\
>  	 || (type) == V4L2_BUF_TYPE_META_OUTPUT)
>  
>  #define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
> @@ -2404,6 +2407,20 @@ struct v4l2_meta_format {
>  	__u32				buffersize;
>  } __attribute__ ((packed));
>  
> +/**
> + * struct v4l2_audio_format - audio data format definition
> + * @rate:		sample rate
> + * @format:		sample format
> + * @channels:		channel numbers
> + * @buffersize:		maximum size in bytes required for data
> + */
> +struct v4l2_audio_format {
> +	__u32				rate;
> +	__u32				format;
> +	__u32				channels;
> +	__u32				buffersize;
> +} __attribute__ ((packed));
> +
>  /**
>   * struct v4l2_format - stream data format
>   * @type:	enum v4l2_buf_type; type of the data stream
> @@ -2412,6 +2429,7 @@ struct v4l2_meta_format {
>   * @win:	definition of an overlaid image
>   * @vbi:	raw VBI capture or output parameters
>   * @sliced:	sliced VBI capture or output parameters
> + * @audio:	definition of an audio format
>   * @raw_data:	placeholder for future extensions and custom formats
>   * @fmt:	union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
>   *		and @raw_data
> @@ -2426,6 +2444,7 @@ struct v4l2_format {
>  		struct v4l2_sliced_vbi_format	sliced;  /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
>  		struct v4l2_sdr_format		sdr;     /* V4L2_BUF_TYPE_SDR_CAPTURE */
>  		struct v4l2_meta_format		meta;    /* V4L2_BUF_TYPE_META_CAPTURE */
> +		struct v4l2_audio_format	audio;   /* V4L2_BUF_TYPE_AUDIO_CAPTURE */
>  		__u8	raw_data[200];                   /* user-defined */
>  	} fmt;
>  };
> -- 
> 2.34.1
> 

-- 
Sakari Ailus

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
@ 2023-06-30 10:05     ` Sakari Ailus
  0 siblings, 0 replies; 41+ messages in thread
From: Sakari Ailus @ 2023-06-30 10:05 UTC (permalink / raw)
  To: Shengjiu Wang
  Cc: hverkuil, alsa-devel, lgirdwood, linux-media, Xiubo.Lee,
	festevam, tiwai, Jacopo Mondi, linux-kernel, tfiga, nicoleotsuka,
	linuxppc-dev, broonie, perex, mchehab, shengjiu.wang,
	m.szyprowski

Hi Shengjiu,

On Thu, Jun 29, 2023 at 09:37:48AM +0800, Shengjiu Wang wrote:
> Audio signal processing has the requirement for memory to
> memory similar as Video.
> 
> This patch is to add this support in v4l2 framework, defined
> new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
> V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
> for audio case usage.

Why are you proposing to add this to V4L2 framework instead of doing this
within ALSA?

Also cc Hans and Jacopo.

> 
> The created audio device is named "/dev/audioX".
> 
> Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
> ---
>  .../media/common/videobuf2/videobuf2-v4l2.c   |  4 ++
>  drivers/media/v4l2-core/v4l2-dev.c            | 17 ++++++
>  drivers/media/v4l2-core/v4l2-ioctl.c          | 52 +++++++++++++++++++
>  include/media/v4l2-dev.h                      |  2 +
>  include/media/v4l2-ioctl.h                    | 34 ++++++++++++
>  include/uapi/linux/videodev2.h                | 19 +++++++
>  6 files changed, 128 insertions(+)
> 
> diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> index c7a54d82a55e..12f2be2773a2 100644
> --- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
> +++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> @@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
>  	case V4L2_BUF_TYPE_META_OUTPUT:
>  		requested_sizes[0] = f->fmt.meta.buffersize;
>  		break;
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		requested_sizes[0] = f->fmt.audio.buffersize;
> +		break;
>  	default:
>  		return -EINVAL;
>  	}
> diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
> index f81279492682..67484f4c6eaf 100644
> --- a/drivers/media/v4l2-core/v4l2-dev.c
> +++ b/drivers/media/v4l2-core/v4l2-dev.c
> @@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
>  	bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
>  	bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
>  		       (vdev->device_caps & meta_caps);
> +	bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
>  	bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
>  	bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
>  	bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
> @@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct video_device *vdev)
>  		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out);
>  		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out);
>  	}
> +	if (is_audio && is_rx) {
> +		/* audio capture specific ioctls */
> +		SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap);
> +		SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
> +		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
> +		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap);
> +	} else if (is_audio && is_tx) {
> +		/* audio output specific ioctls */
> +		SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out);
> +		SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
> +		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
> +		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out);
> +	}
>  	if (is_vbi) {
>  		/* vbi specific ioctls */
>  		if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
> @@ -927,6 +941,9 @@ int __video_register_device(struct video_device *vdev,
>  	case VFL_TYPE_TOUCH:
>  		name_base = "v4l-touch";
>  		break;
> +	case VFL_TYPE_AUDIO:
> +		name_base = "audio";
> +		break;
>  	default:
>  		pr_err("%s called with unknown type: %d\n",
>  		       __func__, type);
> diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> index a858acea6547..26bc4b0d8ef0 100644
> --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> @@ -188,6 +188,8 @@ const char *v4l2_type_names[] = {
>  	[V4L2_BUF_TYPE_SDR_OUTPUT]         = "sdr-out",
>  	[V4L2_BUF_TYPE_META_CAPTURE]       = "meta-cap",
>  	[V4L2_BUF_TYPE_META_OUTPUT]	   = "meta-out",
> +	[V4L2_BUF_TYPE_AUDIO_CAPTURE]      = "audio-cap",
> +	[V4L2_BUF_TYPE_AUDIO_OUTPUT]	   = "audio-out",
>  };
>  EXPORT_SYMBOL(v4l2_type_names);
>  
> @@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only)
>  	const struct v4l2_sliced_vbi_format *sliced;
>  	const struct v4l2_window *win;
>  	const struct v4l2_meta_format *meta;
> +	const struct v4l2_audio_format *audio;
>  	u32 pixelformat;
>  	u32 planes;
>  	unsigned i;
> @@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool write_only)
>  		pr_cont(", dataformat=%p4cc, buffersize=%u\n",
>  			&pixelformat, meta->buffersize);
>  		break;
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		audio = &p->fmt.audio;
> +		pr_cont(", rate=%u, format=%u, channels=%u, buffersize=%u\n",
> +			audio->rate, audio->format, audio->channels, audio->buffersize);
> +		break;
>  	}
>  }
>  
> @@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
>  	bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
>  	bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO &&
>  		       (vfd->device_caps & meta_caps);
> +	bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
>  	bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
>  	bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
>  
> @@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
>  		if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out)
>  			return 0;
>  		break;
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +		if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
> +			return 0;
> +		break;
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
> +			return 0;
> +		break;
>  	default:
>  		break;
>  	}
> @@ -1592,6 +1610,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
>  			break;
>  		ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg);
>  		break;
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +		if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
> +			break;
> +		ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
> +		break;
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		if (unlikely(!ops->vidioc_enum_fmt_audio_out))
> +			break;
> +		ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
> +		break;
>  	}
>  	if (ret == 0)
>  		v4l_fill_fmtdesc(p);
> @@ -1668,6 +1696,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
>  		return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
>  	case V4L2_BUF_TYPE_META_OUTPUT:
>  		return ops->vidioc_g_fmt_meta_out(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +		return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		return ops->vidioc_g_fmt_audio_out(file, fh, arg);
>  	}
>  	return -EINVAL;
>  }
> @@ -1779,6 +1811,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
>  			break;
>  		memset_after(p, 0, fmt.meta);
>  		return ops->vidioc_s_fmt_meta_out(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +		if (unlikely(!ops->vidioc_s_fmt_audio_cap))
> +			break;
> +		memset_after(p, 0, fmt.audio);
> +		return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		if (unlikely(!ops->vidioc_s_fmt_audio_out))
> +			break;
> +		memset_after(p, 0, fmt.audio);
> +		return ops->vidioc_s_fmt_audio_out(file, fh, arg);
>  	}
>  	return -EINVAL;
>  }
> @@ -1887,6 +1929,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
>  			break;
>  		memset_after(p, 0, fmt.meta);
>  		return ops->vidioc_try_fmt_meta_out(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> +		if (unlikely(!ops->vidioc_try_fmt_audio_cap))
> +			break;
> +		memset_after(p, 0, fmt.audio);
> +		return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
> +	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> +		if (unlikely(!ops->vidioc_try_fmt_audio_out))
> +			break;
> +		memset_after(p, 0, fmt.audio);
> +		return ops->vidioc_try_fmt_audio_out(file, fh, arg);
>  	}
>  	return -EINVAL;
>  }
> diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
> index e0a13505f88d..0924e6d1dab1 100644
> --- a/include/media/v4l2-dev.h
> +++ b/include/media/v4l2-dev.h
> @@ -30,6 +30,7 @@
>   * @VFL_TYPE_SUBDEV:	for V4L2 subdevices
>   * @VFL_TYPE_SDR:	for Software Defined Radio tuners
>   * @VFL_TYPE_TOUCH:	for touch sensors
> + * @VFL_TYPE_AUDIO:	for audio input/output devices
>   * @VFL_TYPE_MAX:	number of VFL types, must always be last in the enum
>   */
>  enum vfl_devnode_type {
> @@ -39,6 +40,7 @@ enum vfl_devnode_type {
>  	VFL_TYPE_SUBDEV,
>  	VFL_TYPE_SDR,
>  	VFL_TYPE_TOUCH,
> +	VFL_TYPE_AUDIO,
>  	VFL_TYPE_MAX /* Shall be the last one */
>  };
>  
> diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
> index edb733f21604..f840cf740ce1 100644
> --- a/include/media/v4l2-ioctl.h
> +++ b/include/media/v4l2-ioctl.h
> @@ -45,6 +45,12 @@ struct v4l2_fh;
>   * @vidioc_enum_fmt_meta_out: pointer to the function that implements
>   *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
>   *	for metadata output
> + * @vidioc_enum_fmt_audio_cap: pointer to the function that implements
> + *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> + *	for audio capture
> + * @vidioc_enum_fmt_audio_out: pointer to the function that implements
> + *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> + *	for audio output
>   * @vidioc_g_fmt_vid_cap: pointer to the function that implements
>   *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
>   *	in single plane mode
> @@ -79,6 +85,10 @@ struct v4l2_fh;
>   *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
>   * @vidioc_g_fmt_meta_out: pointer to the function that implements
>   *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> + * @vidioc_g_fmt_audio_cap: pointer to the function that implements
> + *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> + * @vidioc_g_fmt_audio_out: pointer to the function that implements
> + *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
>   * @vidioc_s_fmt_vid_cap: pointer to the function that implements
>   *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
>   *	in single plane mode
> @@ -113,6 +123,10 @@ struct v4l2_fh;
>   *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
>   * @vidioc_s_fmt_meta_out: pointer to the function that implements
>   *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> + * @vidioc_s_fmt_audio_cap: pointer to the function that implements
> + *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> + * @vidioc_s_fmt_audio_out: pointer to the function that implements
> + *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
>   * @vidioc_try_fmt_vid_cap: pointer to the function that implements
>   *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
>   *	in single plane mode
> @@ -149,6 +163,10 @@ struct v4l2_fh;
>   *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
>   * @vidioc_try_fmt_meta_out: pointer to the function that implements
>   *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> + * @vidioc_try_fmt_audio_cap: pointer to the function that implements
> + *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> + * @vidioc_try_fmt_audio_out: pointer to the function that implements
> + *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
>   * @vidioc_reqbufs: pointer to the function that implements
>   *	:ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
>   * @vidioc_querybuf: pointer to the function that implements
> @@ -315,6 +333,10 @@ struct v4l2_ioctl_ops {
>  					struct v4l2_fmtdesc *f);
>  	int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh,
>  					struct v4l2_fmtdesc *f);
> +	int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
> +					 struct v4l2_fmtdesc *f);
> +	int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
> +					 struct v4l2_fmtdesc *f);
>  
>  	/* VIDIOC_G_FMT handlers */
>  	int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
> @@ -345,6 +367,10 @@ struct v4l2_ioctl_ops {
>  				     struct v4l2_format *f);
>  	int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh,
>  				     struct v4l2_format *f);
> +	int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
> +				      struct v4l2_format *f);
> +	int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
> +				      struct v4l2_format *f);
>  
>  	/* VIDIOC_S_FMT handlers */
>  	int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
> @@ -375,6 +401,10 @@ struct v4l2_ioctl_ops {
>  				     struct v4l2_format *f);
>  	int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh,
>  				     struct v4l2_format *f);
> +	int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
> +				      struct v4l2_format *f);
> +	int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
> +				      struct v4l2_format *f);
>  
>  	/* VIDIOC_TRY_FMT handlers */
>  	int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
> @@ -405,6 +435,10 @@ struct v4l2_ioctl_ops {
>  				       struct v4l2_format *f);
>  	int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh,
>  				       struct v4l2_format *f);
> +	int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
> +					struct v4l2_format *f);
> +	int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
> +					struct v4l2_format *f);
>  
>  	/* Buffer handlers */
>  	int (*vidioc_reqbufs)(struct file *file, void *fh,
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index aee75eb9e686..a7af28f4c8c3 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -153,6 +153,8 @@ enum v4l2_buf_type {
>  	V4L2_BUF_TYPE_SDR_OUTPUT           = 12,
>  	V4L2_BUF_TYPE_META_CAPTURE         = 13,
>  	V4L2_BUF_TYPE_META_OUTPUT	   = 14,
> +	V4L2_BUF_TYPE_AUDIO_CAPTURE        = 15,
> +	V4L2_BUF_TYPE_AUDIO_OUTPUT         = 16,
>  	/* Deprecated, do not use */
>  	V4L2_BUF_TYPE_PRIVATE              = 0x80,
>  };
> @@ -169,6 +171,7 @@ enum v4l2_buf_type {
>  	 || (type) == V4L2_BUF_TYPE_VBI_OUTPUT			\
>  	 || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT		\
>  	 || (type) == V4L2_BUF_TYPE_SDR_OUTPUT			\
> +	 || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT		\
>  	 || (type) == V4L2_BUF_TYPE_META_OUTPUT)
>  
>  #define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
> @@ -2404,6 +2407,20 @@ struct v4l2_meta_format {
>  	__u32				buffersize;
>  } __attribute__ ((packed));
>  
> +/**
> + * struct v4l2_audio_format - audio data format definition
> + * @rate:		sample rate
> + * @format:		sample format
> + * @channels:		channel numbers
> + * @buffersize:		maximum size in bytes required for data
> + */
> +struct v4l2_audio_format {
> +	__u32				rate;
> +	__u32				format;
> +	__u32				channels;
> +	__u32				buffersize;
> +} __attribute__ ((packed));
> +
>  /**
>   * struct v4l2_format - stream data format
>   * @type:	enum v4l2_buf_type; type of the data stream
> @@ -2412,6 +2429,7 @@ struct v4l2_meta_format {
>   * @win:	definition of an overlaid image
>   * @vbi:	raw VBI capture or output parameters
>   * @sliced:	sliced VBI capture or output parameters
> + * @audio:	definition of an audio format
>   * @raw_data:	placeholder for future extensions and custom formats
>   * @fmt:	union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
>   *		and @raw_data
> @@ -2426,6 +2444,7 @@ struct v4l2_format {
>  		struct v4l2_sliced_vbi_format	sliced;  /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
>  		struct v4l2_sdr_format		sdr;     /* V4L2_BUF_TYPE_SDR_CAPTURE */
>  		struct v4l2_meta_format		meta;    /* V4L2_BUF_TYPE_META_CAPTURE */
> +		struct v4l2_audio_format	audio;   /* V4L2_BUF_TYPE_AUDIO_CAPTURE */
>  		__u8	raw_data[200];                   /* user-defined */
>  	} fmt;
>  };
> -- 
> 2.34.1
> 

-- 
Sakari Ailus

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 4/6] ASoC: fsl_asrc: Add memory to memory driver
  2023-06-30  3:37       ` Shengjiu Wang
@ 2023-06-30 11:22         ` Mark Brown
  -1 siblings, 0 replies; 41+ messages in thread
From: Mark Brown @ 2023-06-30 11:22 UTC (permalink / raw)
  To: Shengjiu Wang
  Cc: Shengjiu Wang, tfiga, m.szyprowski, mchehab, linux-media,
	linux-kernel, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	perex, tiwai, alsa-devel, linuxppc-dev

[-- Attachment #1: Type: text/plain, Size: 1375 bytes --]

On Fri, Jun 30, 2023 at 11:37:29AM +0800, Shengjiu Wang wrote:
> On Thu, Jun 29, 2023 at 7:39 PM Mark Brown <broonie@kernel.org> wrote:
> > On Thu, Jun 29, 2023 at 09:37:51AM +0800, Shengjiu Wang wrote:

> > > Implement the ASRC memory to memory function using
> > > the v4l2 framework; user space can use this function
> > > through the v4l2 ioctl interface.

> > This feels like the bit where we interface v4l to ASoC should be a
> > separate library; there shouldn't be anything device-specific about
> > getting an audio stream into a block of memory.  I'm thinking of
> > something like the way we handle dmaengine here.

> > I've not dug into the code yet though.

> Users may want to get the ASRC output in user space and then
> do mixing with other streams before sending it to ALSA.

> As there is no such API in ASoC, the best interface I found is
> V4L2, but I need to make a small modification to the V4L2 API to
> extend it for audio usage.

> Could you please suggest more about the "separate library"?
> Should I place this "sound/soc/fsl/fsl_asrc_m2m.c" in another folder?

The concept of connecting an audio stream from v4l directly to something
in ASoC isn't specific to this driver or even to the i.MX platform; the
code that deals with that part of things should be split out so that it
works the same way for any other drivers that do this.
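
Roughly the sort of thing I have in mind, purely as an illustration and
with made up names (nothing like this exists today), is a small shared
layer with an ops structure that the device driver fills in:

    /* hypothetical shared glue, e.g. sound/soc/soc-v4l2-m2m.c */
    struct snd_v4l2_m2m_ops {
            /* device specific bits the ASRC/EASRC driver provides */
            int (*prepare)(void *priv,
                           const struct v4l2_audio_format *in,
                           const struct v4l2_audio_format *out);
            int (*queue_job)(void *priv, struct vb2_buffer *src,
                             struct vb2_buffer *dst);
    };

    int snd_v4l2_m2m_register(struct device *dev,
                              const struct snd_v4l2_m2m_ops *ops,
                              void *priv);

Then fsl_asrc_m2m.c would only implement the callbacks, and the generic
code would own videobuf2, the ioctl plumbing and the device node, the
same as for any other user.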

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-06-30 10:05     ` Sakari Ailus
@ 2023-07-03  9:54       ` Shengjiu Wang
  -1 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-07-03  9:54 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Shengjiu Wang, tfiga, m.szyprowski, mchehab, linux-media,
	linux-kernel, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	broonie, perex, tiwai, alsa-devel, linuxppc-dev, hverkuil,
	Jacopo Mondi

Hi Sakari

On Fri, Jun 30, 2023 at 6:05 PM Sakari Ailus <sakari.ailus@iki.fi> wrote:

> Hi Shengjiu,
>
> On Thu, Jun 29, 2023 at 09:37:48AM +0800, Shengjiu Wang wrote:
> > Audio signal processing has the requirement for memory to
> > memory similar as Video.
> >
> > This patch is to add this support in v4l2 framework, defined
> > new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
> > V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
> > for audio case usage.
>
> Why are you proposing to add this to V4L2 framework instead of doing this
> within ALSA?
>
> Also cc Hans and Jacopo.


There is no such memory-to-memory interface defined in ALSA; it seems
ALSA is not designed for M2M cases.

V4L2 is designed for video, radio, image, SDR, metadata and so on, so I
think audio can naturally be added to its support scope.
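
For example, with this series a user space resampler would just follow
the normal V4L2 ioctl sequence on the m2m node.  This is only a rough
sketch to show the idea, error handling is dropped, and the sample
format encoding is still an open point (here I simply assume the ALSA
format codes would be reused):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>
    #include <sound/asound.h>   /* only if ALSA format codes are reused */

    int main(void)
    {
            int fd = open("/dev/audio0", O_RDWR);
            struct v4l2_format fmt;

            /* input side of the converter: 44.1kHz stereo */
            memset(&fmt, 0, sizeof(fmt));
            fmt.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
            fmt.fmt.audio.rate     = 44100;
            fmt.fmt.audio.channels = 2;
            fmt.fmt.audio.format   = SNDRV_PCM_FORMAT_S16_LE; /* assumption */
            ioctl(fd, VIDIOC_S_FMT, &fmt);

            /* output side of the converter: same data resampled to 48kHz */
            fmt.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
            fmt.fmt.audio.rate = 48000;
            ioctl(fd, VIDIOC_S_FMT, &fmt);

            /*
             * then the usual VIDIOC_REQBUFS / VIDIOC_QBUF /
             * VIDIOC_STREAMON sequence on both queues, exactly like a
             * video mem2mem device.
             */
            return 0;
    }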

Thanks.

Best regards
Shengjiu Wang

>


> >
> > The created audio device is named "/dev/audioX".
> >
> > Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
> > ---
> >  .../media/common/videobuf2/videobuf2-v4l2.c   |  4 ++
> >  drivers/media/v4l2-core/v4l2-dev.c            | 17 ++++++
> >  drivers/media/v4l2-core/v4l2-ioctl.c          | 52 +++++++++++++++++++
> >  include/media/v4l2-dev.h                      |  2 +
> >  include/media/v4l2-ioctl.h                    | 34 ++++++++++++
> >  include/uapi/linux/videodev2.h                | 19 +++++++
> >  6 files changed, 128 insertions(+)
> >
> > diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c
> b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> > index c7a54d82a55e..12f2be2773a2 100644
> > --- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
> > +++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> > @@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct
> v4l2_create_buffers *create)
> >       case V4L2_BUF_TYPE_META_OUTPUT:
> >               requested_sizes[0] = f->fmt.meta.buffersize;
> >               break;
> > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > +             requested_sizes[0] = f->fmt.audio.buffersize;
> > +             break;
> >       default:
> >               return -EINVAL;
> >       }
> > diff --git a/drivers/media/v4l2-core/v4l2-dev.c
> b/drivers/media/v4l2-core/v4l2-dev.c
> > index f81279492682..67484f4c6eaf 100644
> > --- a/drivers/media/v4l2-core/v4l2-dev.c
> > +++ b/drivers/media/v4l2-core/v4l2-dev.c
> > @@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct
> video_device *vdev)
> >       bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
> >       bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
> >                      (vdev->device_caps & meta_caps);
> > +     bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
> >       bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
> >       bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
> >       bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
> > @@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct
> video_device *vdev)
> >               SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out);
> >               SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT,
> vidioc_try_fmt_meta_out);
> >       }
> > +     if (is_audio && is_rx) {
> > +             /* audio capture specific ioctls */
> > +             SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT,
> vidioc_enum_fmt_audio_cap);
> > +             SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
> > +             SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
> > +             SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT,
> vidioc_try_fmt_audio_cap);
> > +     } else if (is_audio && is_tx) {
> > +             /* audio output specific ioctls */
> > +             SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT,
> vidioc_enum_fmt_audio_out);
> > +             SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
> > +             SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
> > +             SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT,
> vidioc_try_fmt_audio_out);
> > +     }
> >       if (is_vbi) {
> >               /* vbi specific ioctls */
> >               if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
> > @@ -927,6 +941,9 @@ int __video_register_device(struct video_device
> *vdev,
> >       case VFL_TYPE_TOUCH:
> >               name_base = "v4l-touch";
> >               break;
> > +     case VFL_TYPE_AUDIO:
> > +             name_base = "audio";
> > +             break;
> >       default:
> >               pr_err("%s called with unknown type: %d\n",
> >                      __func__, type);
> > diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c
> b/drivers/media/v4l2-core/v4l2-ioctl.c
> > index a858acea6547..26bc4b0d8ef0 100644
> > --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> > +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> > @@ -188,6 +188,8 @@ const char *v4l2_type_names[] = {
> >       [V4L2_BUF_TYPE_SDR_OUTPUT]         = "sdr-out",
> >       [V4L2_BUF_TYPE_META_CAPTURE]       = "meta-cap",
> >       [V4L2_BUF_TYPE_META_OUTPUT]        = "meta-out",
> > +     [V4L2_BUF_TYPE_AUDIO_CAPTURE]      = "audio-cap",
> > +     [V4L2_BUF_TYPE_AUDIO_OUTPUT]       = "audio-out",
> >  };
> >  EXPORT_SYMBOL(v4l2_type_names);
> >
> > @@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool
> write_only)
> >       const struct v4l2_sliced_vbi_format *sliced;
> >       const struct v4l2_window *win;
> >       const struct v4l2_meta_format *meta;
> > +     const struct v4l2_audio_format *audio;
> >       u32 pixelformat;
> >       u32 planes;
> >       unsigned i;
> > @@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool
> write_only)
> >               pr_cont(", dataformat=%p4cc, buffersize=%u\n",
> >                       &pixelformat, meta->buffersize);
> >               break;
> > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > +             audio = &p->fmt.audio;
> > +             pr_cont(", rate=%u, format=%u, channels=%u,
> buffersize=%u\n",
> > +                     audio->rate, audio->format, audio->channels,
> audio->buffersize);
> > +             break;
> >       }
> >  }
> >
> > @@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum
> v4l2_buf_type type)
> >       bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
> >       bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO &&
> >                      (vfd->device_caps & meta_caps);
> > +     bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
> >       bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
> >       bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
> >
> > @@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum
> v4l2_buf_type type)
> >               if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out)
> >                       return 0;
> >               break;
> > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > +             if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
> > +                     return 0;
> > +             break;
> > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > +             if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
> > +                     return 0;
> > +             break;
> >       default:
> >               break;
> >       }
> > @@ -1592,6 +1610,16 @@ static int v4l_enum_fmt(const struct
> v4l2_ioctl_ops *ops,
> >                       break;
> >               ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg);
> >               break;
> > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > +             if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
> > +                     break;
> > +             ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
> > +             break;
> > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > +             if (unlikely(!ops->vidioc_enum_fmt_audio_out))
> > +                     break;
> > +             ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
> > +             break;
> >       }
> >       if (ret == 0)
> >               v4l_fill_fmtdesc(p);
> > @@ -1668,6 +1696,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops
> *ops,
> >               return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
> >       case V4L2_BUF_TYPE_META_OUTPUT:
> >               return ops->vidioc_g_fmt_meta_out(file, fh, arg);
> > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > +             return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
> > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > +             return ops->vidioc_g_fmt_audio_out(file, fh, arg);
> >       }
> >       return -EINVAL;
> >  }
> > @@ -1779,6 +1811,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops
> *ops,
> >                       break;
> >               memset_after(p, 0, fmt.meta);
> >               return ops->vidioc_s_fmt_meta_out(file, fh, arg);
> > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > +             if (unlikely(!ops->vidioc_s_fmt_audio_cap))
> > +                     break;
> > +             memset_after(p, 0, fmt.audio);
> > +             return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
> > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > +             if (unlikely(!ops->vidioc_s_fmt_audio_out))
> > +                     break;
> > +             memset_after(p, 0, fmt.audio);
> > +             return ops->vidioc_s_fmt_audio_out(file, fh, arg);
> >       }
> >       return -EINVAL;
> >  }
> > @@ -1887,6 +1929,16 @@ static int v4l_try_fmt(const struct
> v4l2_ioctl_ops *ops,
> >                       break;
> >               memset_after(p, 0, fmt.meta);
> >               return ops->vidioc_try_fmt_meta_out(file, fh, arg);
> > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > +             if (unlikely(!ops->vidioc_try_fmt_audio_cap))
> > +                     break;
> > +             memset_after(p, 0, fmt.audio);
> > +             return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
> > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > +             if (unlikely(!ops->vidioc_try_fmt_audio_out))
> > +                     break;
> > +             memset_after(p, 0, fmt.audio);
> > +             return ops->vidioc_try_fmt_audio_out(file, fh, arg);
> >       }
> >       return -EINVAL;
> >  }
> > diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
> > index e0a13505f88d..0924e6d1dab1 100644
> > --- a/include/media/v4l2-dev.h
> > +++ b/include/media/v4l2-dev.h
> > @@ -30,6 +30,7 @@
> >   * @VFL_TYPE_SUBDEV: for V4L2 subdevices
> >   * @VFL_TYPE_SDR:    for Software Defined Radio tuners
> >   * @VFL_TYPE_TOUCH:  for touch sensors
> > + * @VFL_TYPE_AUDIO:  for audio input/output devices
> >   * @VFL_TYPE_MAX:    number of VFL types, must always be last in the
> enum
> >   */
> >  enum vfl_devnode_type {
> > @@ -39,6 +40,7 @@ enum vfl_devnode_type {
> >       VFL_TYPE_SUBDEV,
> >       VFL_TYPE_SDR,
> >       VFL_TYPE_TOUCH,
> > +     VFL_TYPE_AUDIO,
> >       VFL_TYPE_MAX /* Shall be the last one */
> >  };
> >
> > diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
> > index edb733f21604..f840cf740ce1 100644
> > --- a/include/media/v4l2-ioctl.h
> > +++ b/include/media/v4l2-ioctl.h
> > @@ -45,6 +45,12 @@ struct v4l2_fh;
> >   * @vidioc_enum_fmt_meta_out: pointer to the function that implements
> >   *   :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> >   *   for metadata output
> > + * @vidioc_enum_fmt_audio_cap: pointer to the function that implements
> > + *   :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> > + *   for audio capture
> > + * @vidioc_enum_fmt_audio_out: pointer to the function that implements
> > + *   :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> > + *   for audio output
> >   * @vidioc_g_fmt_vid_cap: pointer to the function that implements
> >   *   :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
> >   *   in single plane mode
> > @@ -79,6 +85,10 @@ struct v4l2_fh;
> >   *   :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
> >   * @vidioc_g_fmt_meta_out: pointer to the function that implements
> >   *   :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> > + * @vidioc_g_fmt_audio_cap: pointer to the function that implements
> > + *   :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> > + * @vidioc_g_fmt_audio_out: pointer to the function that implements
> > + *   :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
> >   * @vidioc_s_fmt_vid_cap: pointer to the function that implements
> >   *   :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
> >   *   in single plane mode
> > @@ -113,6 +123,10 @@ struct v4l2_fh;
> >   *   :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
> >   * @vidioc_s_fmt_meta_out: pointer to the function that implements
> >   *   :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> > + * @vidioc_s_fmt_audio_cap: pointer to the function that implements
> > + *   :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> > + * @vidioc_s_fmt_audio_out: pointer to the function that implements
> > + *   :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
> >   * @vidioc_try_fmt_vid_cap: pointer to the function that implements
> >   *   :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
> >   *   in single plane mode
> > @@ -149,6 +163,10 @@ struct v4l2_fh;
> >   *   :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata
> capture
> >   * @vidioc_try_fmt_meta_out: pointer to the function that implements
> >   *   :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata
> output
> > + * @vidioc_try_fmt_audio_cap: pointer to the function that implements
> > + *   :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> > + * @vidioc_try_fmt_audio_out: pointer to the function that implements
> > + *   :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
> >   * @vidioc_reqbufs: pointer to the function that implements
> >   *   :ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
> >   * @vidioc_querybuf: pointer to the function that implements
> > @@ -315,6 +333,10 @@ struct v4l2_ioctl_ops {
> >                                       struct v4l2_fmtdesc *f);
> >       int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh,
> >                                       struct v4l2_fmtdesc *f);
> > +     int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
> > +                                      struct v4l2_fmtdesc *f);
> > +     int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
> > +                                      struct v4l2_fmtdesc *f);
> >
> >       /* VIDIOC_G_FMT handlers */
> >       int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
> > @@ -345,6 +367,10 @@ struct v4l2_ioctl_ops {
> >                                    struct v4l2_format *f);
> >       int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh,
> >                                    struct v4l2_format *f);
> > +     int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
> > +                                   struct v4l2_format *f);
> > +     int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
> > +                                   struct v4l2_format *f);
> >
> >       /* VIDIOC_S_FMT handlers */
> >       int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
> > @@ -375,6 +401,10 @@ struct v4l2_ioctl_ops {
> >                                    struct v4l2_format *f);
> >       int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh,
> >                                    struct v4l2_format *f);
> > +     int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
> > +                                   struct v4l2_format *f);
> > +     int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
> > +                                   struct v4l2_format *f);
> >
> >       /* VIDIOC_TRY_FMT handlers */
> >       int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
> > @@ -405,6 +435,10 @@ struct v4l2_ioctl_ops {
> >                                      struct v4l2_format *f);
> >       int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh,
> >                                      struct v4l2_format *f);
> > +     int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
> > +                                     struct v4l2_format *f);
> > +     int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
> > +                                     struct v4l2_format *f);
> >
> >       /* Buffer handlers */
> >       int (*vidioc_reqbufs)(struct file *file, void *fh,
> > diff --git a/include/uapi/linux/videodev2.h
> b/include/uapi/linux/videodev2.h
> > index aee75eb9e686..a7af28f4c8c3 100644
> > --- a/include/uapi/linux/videodev2.h
> > +++ b/include/uapi/linux/videodev2.h
> > @@ -153,6 +153,8 @@ enum v4l2_buf_type {
> >       V4L2_BUF_TYPE_SDR_OUTPUT           = 12,
> >       V4L2_BUF_TYPE_META_CAPTURE         = 13,
> >       V4L2_BUF_TYPE_META_OUTPUT          = 14,
> > +     V4L2_BUF_TYPE_AUDIO_CAPTURE        = 15,
> > +     V4L2_BUF_TYPE_AUDIO_OUTPUT         = 16,
> >       /* Deprecated, do not use */
> >       V4L2_BUF_TYPE_PRIVATE              = 0x80,
> >  };
> > @@ -169,6 +171,7 @@ enum v4l2_buf_type {
> >        || (type) == V4L2_BUF_TYPE_VBI_OUTPUT                  \
> >        || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT           \
> >        || (type) == V4L2_BUF_TYPE_SDR_OUTPUT                  \
> > +      || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT                \
> >        || (type) == V4L2_BUF_TYPE_META_OUTPUT)
> >
> >  #define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
> > @@ -2404,6 +2407,20 @@ struct v4l2_meta_format {
> >       __u32                           buffersize;
> >  } __attribute__ ((packed));
> >
> > +/**
> > + * struct v4l2_audio_format - audio data format definition
> > + * @rate:            sample rate
> > + * @format:          sample format
> > + * @channels:                channel numbers
> > + * @buffersize:              maximum size in bytes required for data
> > + */
> > +struct v4l2_audio_format {
> > +     __u32                           rate;
> > +     __u32                           format;
> > +     __u32                           channels;
> > +     __u32                           buffersize;
> > +} __attribute__ ((packed));
> > +
> >  /**
> >   * struct v4l2_format - stream data format
> >   * @type:    enum v4l2_buf_type; type of the data stream
> > @@ -2412,6 +2429,7 @@ struct v4l2_meta_format {
> >   * @win:     definition of an overlaid image
> >   * @vbi:     raw VBI capture or output parameters
> >   * @sliced:  sliced VBI capture or output parameters
> > + * @audio:   definition of an audio format
> >   * @raw_data:        placeholder for future extensions and custom
> formats
> >   * @fmt:     union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
> >   *           and @raw_data
> > @@ -2426,6 +2444,7 @@ struct v4l2_format {
> >               struct v4l2_sliced_vbi_format   sliced;  /*
> V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
> >               struct v4l2_sdr_format          sdr;     /*
> V4L2_BUF_TYPE_SDR_CAPTURE */
> >               struct v4l2_meta_format         meta;    /*
> V4L2_BUF_TYPE_META_CAPTURE */
> > +             struct v4l2_audio_format        audio;   /*
> V4L2_BUF_TYPE_AUDIO_CAPTURE */
> >               __u8    raw_data[200];                   /* user-defined */
> >       } fmt;
> >  };
> > --
> > 2.34.1
> >
>
> --
> Sakari Ailus
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-07-03  9:54       ` Shengjiu Wang
@ 2023-07-03 10:04         ` Hans Verkuil
  -1 siblings, 0 replies; 41+ messages in thread
From: Hans Verkuil @ 2023-07-03 10:04 UTC (permalink / raw)
  To: Shengjiu Wang, Sakari Ailus
  Cc: Shengjiu Wang, tfiga, m.szyprowski, mchehab, linux-media,
	linux-kernel, Xiubo.Lee, festevam, nicoleotsuka, lgirdwood,
	broonie, perex, tiwai, alsa-devel, linuxppc-dev, Jacopo Mondi

On 03/07/2023 11:54, Shengjiu Wang wrote:
> Hi Sakari
> 
> On Fri, Jun 30, 2023 at 6:05 PM Sakari Ailus <sakari.ailus@iki.fi> wrote:
> 
>     Hi Shengjiu,
> 
>     On Thu, Jun 29, 2023 at 09:37:48AM +0800, Shengjiu Wang wrote:
>     > Audio signal processing has the requirement for memory to
>     > memory similar as Video.
>     >
>     > This patch is to add this support in v4l2 framework, defined
>     > new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
>     > V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
>     > for audio case usage.
> 
>     Why are you proposing to add this to V4L2 framework instead of doing this
>     within ALSA?
> 
>     Also cc Hans and Jacopo.
> 
> 
> There is no such memory-to-memory interface defined in ALSA; it seems
> ALSA is not designed for M2M cases.
> 
> V4L2 is designed for video, radio, image, SDR, metadata and so on, so I
> think audio can naturally be added to its support scope.

While I do not have an objection as such to supporting this as part of V4L2,
I do want to know whether the ALSA maintainers think it is OK as well before
I spend time on this.

In principle the V4L2 mem2mem framework doesn't really care what type of data
is processed; it is just a matter of adding audio types (or reusing them from
ALSA, which is presumably the intention here).
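
Something along these lines on the driver side, where the asrc_* handlers
are just placeholder names and all of the buffer handling comes straight
from the existing mem2mem helpers:

    /* needs <media/v4l2-ioctl.h> and <media/v4l2-mem2mem.h> */
    static const struct v4l2_ioctl_ops asrc_m2m_ioctl_ops = {
            /* the new audio format handlers added by this series */
            .vidioc_enum_fmt_audio_cap = asrc_enum_fmt_audio_cap,
            .vidioc_g_fmt_audio_cap    = asrc_g_fmt_audio_cap,
            .vidioc_s_fmt_audio_cap    = asrc_s_fmt_audio_cap,
            .vidioc_try_fmt_audio_cap  = asrc_try_fmt_audio_cap,
            .vidioc_enum_fmt_audio_out = asrc_enum_fmt_audio_out,
            .vidioc_g_fmt_audio_out    = asrc_g_fmt_audio_out,
            .vidioc_s_fmt_audio_out    = asrc_s_fmt_audio_out,
            .vidioc_try_fmt_audio_out  = asrc_try_fmt_audio_out,

            /* generic, data type agnostic mem2mem plumbing */
            .vidioc_reqbufs            = v4l2_m2m_ioctl_reqbufs,
            .vidioc_querybuf           = v4l2_m2m_ioctl_querybuf,
            .vidioc_qbuf               = v4l2_m2m_ioctl_qbuf,
            .vidioc_dqbuf              = v4l2_m2m_ioctl_dqbuf,
            .vidioc_streamon           = v4l2_m2m_ioctl_streamon,
            .vidioc_streamoff          = v4l2_m2m_ioctl_streamoff,
    };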

Regards,

	Hans

> 
> Thanks.
>  
> Best regards
> Shengjiu Wang
> 
>      
> 
> 
>     >
>     > The created audio device is named "/dev/audioX".
>     >
>     > Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
>     > ---
>     >  .../media/common/videobuf2/videobuf2-v4l2.c   |  4 ++
>     >  drivers/media/v4l2-core/v4l2-dev.c            | 17 ++++++
>     >  drivers/media/v4l2-core/v4l2-ioctl.c          | 52 +++++++++++++++++++
>     >  include/media/v4l2-dev.h                      |  2 +
>     >  include/media/v4l2-ioctl.h                    | 34 ++++++++++++
>     >  include/uapi/linux/videodev2.h                | 19 +++++++
>     >  6 files changed, 128 insertions(+)
>     >
>     > diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
>     > index c7a54d82a55e..12f2be2773a2 100644
>     > --- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
>     > +++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
>     > @@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
>     >       case V4L2_BUF_TYPE_META_OUTPUT:
>     >               requested_sizes[0] = f->fmt.meta.buffersize;
>     >               break;
>     > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
>     > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
>     > +             requested_sizes[0] = f->fmt.audio.buffersize;
>     > +             break;
>     >       default:
>     >               return -EINVAL;
>     >       }
>     > diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
>     > index f81279492682..67484f4c6eaf 100644
>     > --- a/drivers/media/v4l2-core/v4l2-dev.c
>     > +++ b/drivers/media/v4l2-core/v4l2-dev.c
>     > @@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
>     >       bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
>     >       bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
>     >                      (vdev->device_caps & meta_caps);
>     > +     bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
>     >       bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
>     >       bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
>     >       bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
>     > @@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct video_device *vdev)
>     >               SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out);
>     >               SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out);
>     >       }
>     > +     if (is_audio && is_rx) {
>     > +             /* audio capture specific ioctls */
>     > +             SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap);
>     > +             SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
>     > +             SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
>     > +             SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap);
>     > +     } else if (is_audio && is_tx) {
>     > +             /* audio output specific ioctls */
>     > +             SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out);
>     > +             SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
>     > +             SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
>     > +             SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out);
>     > +     }
>     >       if (is_vbi) {
>     >               /* vbi specific ioctls */
>     >               if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
>     > @@ -927,6 +941,9 @@ int __video_register_device(struct video_device *vdev,
>     >       case VFL_TYPE_TOUCH:
>     >               name_base = "v4l-touch";
>     >               break;
>     > +     case VFL_TYPE_AUDIO:
>     > +             name_base = "audio";
>     > +             break;
>     >       default:
>     >               pr_err("%s called with unknown type: %d\n",
>     >                      __func__, type);
>     > diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
>     > index a858acea6547..26bc4b0d8ef0 100644
>     > --- a/drivers/media/v4l2-core/v4l2-ioctl.c
>     > +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
>     > @@ -188,6 +188,8 @@ const char *v4l2_type_names[] = {
>     >       [V4L2_BUF_TYPE_SDR_OUTPUT]         = "sdr-out",
>     >       [V4L2_BUF_TYPE_META_CAPTURE]       = "meta-cap",
>     >       [V4L2_BUF_TYPE_META_OUTPUT]        = "meta-out",
>     > +     [V4L2_BUF_TYPE_AUDIO_CAPTURE]      = "audio-cap",
>     > +     [V4L2_BUF_TYPE_AUDIO_OUTPUT]       = "audio-out",
>     >  };
>     >  EXPORT_SYMBOL(v4l2_type_names);
>     > 
>     > @@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only)
>     >       const struct v4l2_sliced_vbi_format *sliced;
>     >       const struct v4l2_window *win;
>     >       const struct v4l2_meta_format *meta;
>     > +     const struct v4l2_audio_format *audio;
>     >       u32 pixelformat;
>     >       u32 planes;
>     >       unsigned i;
>     > @@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool write_only)
>     >               pr_cont(", dataformat=%p4cc, buffersize=%u\n",
>     >                       &pixelformat, meta->buffersize);
>     >               break;
>     > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
>     > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
>     > +             audio = &p->fmt.audio;
>     > +             pr_cont(", rate=%u, format=%u, channels=%u, buffersize=%u\n",
>     > +                     audio->rate, audio->format, audio->channels, audio->buffersize);
>     > +             break;
>     >       }
>     >  }
>     > 
>     > @@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
>     >       bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
>     >       bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO &&
>     >                      (vfd->device_caps & meta_caps);
>     > +     bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
>     >       bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
>     >       bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
>     > 
>     > @@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
>     >               if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out)
>     >                       return 0;
>     >               break;
>     > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
>     > +             if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
>     > +                     return 0;
>     > +             break;
>     > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
>     > +             if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
>     > +                     return 0;
>     > +             break;
>     >       default:
>     >               break;
>     >       }
>     > @@ -1592,6 +1610,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
>     >                       break;
>     >               ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg);
>     >               break;
>     > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
>     > +             if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
>     > +                     break;
>     > +             ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
>     > +             break;
>     > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
>     > +             if (unlikely(!ops->vidioc_enum_fmt_audio_out))
>     > +                     break;
>     > +             ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
>     > +             break;
>     >       }
>     >       if (ret == 0)
>     >               v4l_fill_fmtdesc(p);
>     > @@ -1668,6 +1696,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
>     >               return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
>     >       case V4L2_BUF_TYPE_META_OUTPUT:
>     >               return ops->vidioc_g_fmt_meta_out(file, fh, arg);
>     > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
>     > +             return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
>     > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
>     > +             return ops->vidioc_g_fmt_audio_out(file, fh, arg);
>     >       }
>     >       return -EINVAL;
>     >  }
>     > @@ -1779,6 +1811,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
>     >                       break;
>     >               memset_after(p, 0, fmt.meta);
>     >               return ops->vidioc_s_fmt_meta_out(file, fh, arg);
>     > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
>     > +             if (unlikely(!ops->vidioc_s_fmt_audio_cap))
>     > +                     break;
>     > +             memset_after(p, 0, fmt.audio);
>     > +             return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
>     > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
>     > +             if (unlikely(!ops->vidioc_s_fmt_audio_out))
>     > +                     break;
>     > +             memset_after(p, 0, fmt.audio);
>     > +             return ops->vidioc_s_fmt_audio_out(file, fh, arg);
>     >       }
>     >       return -EINVAL;
>     >  }
>     > @@ -1887,6 +1929,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
>     >                       break;
>     >               memset_after(p, 0, fmt.meta);
>     >               return ops->vidioc_try_fmt_meta_out(file, fh, arg);
>     > +     case V4L2_BUF_TYPE_AUDIO_CAPTURE:
>     > +             if (unlikely(!ops->vidioc_try_fmt_audio_cap))
>     > +                     break;
>     > +             memset_after(p, 0, fmt.audio);
>     > +             return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
>     > +     case V4L2_BUF_TYPE_AUDIO_OUTPUT:
>     > +             if (unlikely(!ops->vidioc_try_fmt_audio_out))
>     > +                     break;
>     > +             memset_after(p, 0, fmt.audio);
>     > +             return ops->vidioc_try_fmt_audio_out(file, fh, arg);
>     >       }
>     >       return -EINVAL;
>     >  }
>     > diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
>     > index e0a13505f88d..0924e6d1dab1 100644
>     > --- a/include/media/v4l2-dev.h
>     > +++ b/include/media/v4l2-dev.h
>     > @@ -30,6 +30,7 @@
>     >   * @VFL_TYPE_SUBDEV: for V4L2 subdevices
>     >   * @VFL_TYPE_SDR:    for Software Defined Radio tuners
>     >   * @VFL_TYPE_TOUCH:  for touch sensors
>     > + * @VFL_TYPE_AUDIO:  for audio input/output devices
>     >   * @VFL_TYPE_MAX:    number of VFL types, must always be last in the enum
>     >   */
>     >  enum vfl_devnode_type {
>     > @@ -39,6 +40,7 @@ enum vfl_devnode_type {
>     >       VFL_TYPE_SUBDEV,
>     >       VFL_TYPE_SDR,
>     >       VFL_TYPE_TOUCH,
>     > +     VFL_TYPE_AUDIO,
>     >       VFL_TYPE_MAX /* Shall be the last one */
>     >  };
>     > 
>     > diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
>     > index edb733f21604..f840cf740ce1 100644
>     > --- a/include/media/v4l2-ioctl.h
>     > +++ b/include/media/v4l2-ioctl.h
>     > @@ -45,6 +45,12 @@ struct v4l2_fh;
>     >   * @vidioc_enum_fmt_meta_out: pointer to the function that implements
>     >   *   :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
>     >   *   for metadata output
>     > + * @vidioc_enum_fmt_audio_cap: pointer to the function that implements
>     > + *   :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
>     > + *   for audio capture
>     > + * @vidioc_enum_fmt_audio_out: pointer to the function that implements
>     > + *   :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
>     > + *   for audio output
>     >   * @vidioc_g_fmt_vid_cap: pointer to the function that implements
>     >   *   :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
>     >   *   in single plane mode
>     > @@ -79,6 +85,10 @@ struct v4l2_fh;
>     >   *   :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
>     >   * @vidioc_g_fmt_meta_out: pointer to the function that implements
>     >   *   :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
>     > + * @vidioc_g_fmt_audio_cap: pointer to the function that implements
>     > + *   :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
>     > + * @vidioc_g_fmt_audio_out: pointer to the function that implements
>     > + *   :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
>     >   * @vidioc_s_fmt_vid_cap: pointer to the function that implements
>     >   *   :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
>     >   *   in single plane mode
>     > @@ -113,6 +123,10 @@ struct v4l2_fh;
>     >   *   :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
>     >   * @vidioc_s_fmt_meta_out: pointer to the function that implements
>     >   *   :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
>     > + * @vidioc_s_fmt_audio_cap: pointer to the function that implements
>     > + *   :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
>     > + * @vidioc_s_fmt_audio_out: pointer to the function that implements
>     > + *   :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
>     >   * @vidioc_try_fmt_vid_cap: pointer to the function that implements
>     >   *   :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
>     >   *   in single plane mode
>     > @@ -149,6 +163,10 @@ struct v4l2_fh;
>     >   *   :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
>     >   * @vidioc_try_fmt_meta_out: pointer to the function that implements
>     >   *   :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output
>     > + * @vidioc_try_fmt_audio_cap: pointer to the function that implements
>     > + *   :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
>     > + * @vidioc_try_fmt_audio_out: pointer to the function that implements
>     > + *   :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
>     >   * @vidioc_reqbufs: pointer to the function that implements
>     >   *   :ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
>     >   * @vidioc_querybuf: pointer to the function that implements
>     > @@ -315,6 +333,10 @@ struct v4l2_ioctl_ops {
>     >                                       struct v4l2_fmtdesc *f);
>     >       int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh,
>     >                                       struct v4l2_fmtdesc *f);
>     > +     int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
>     > +                                      struct v4l2_fmtdesc *f);
>     > +     int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
>     > +                                      struct v4l2_fmtdesc *f);
>     > 
>     >       /* VIDIOC_G_FMT handlers */
>     >       int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
>     > @@ -345,6 +367,10 @@ struct v4l2_ioctl_ops {
>     >                                    struct v4l2_format *f);
>     >       int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh,
>     >                                    struct v4l2_format *f);
>     > +     int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
>     > +                                   struct v4l2_format *f);
>     > +     int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
>     > +                                   struct v4l2_format *f);
>     > 
>     >       /* VIDIOC_S_FMT handlers */
>     >       int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
>     > @@ -375,6 +401,10 @@ struct v4l2_ioctl_ops {
>     >                                    struct v4l2_format *f);
>     >       int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh,
>     >                                    struct v4l2_format *f);
>     > +     int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
>     > +                                   struct v4l2_format *f);
>     > +     int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
>     > +                                   struct v4l2_format *f);
>     > 
>     >       /* VIDIOC_TRY_FMT handlers */
>     >       int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
>     > @@ -405,6 +435,10 @@ struct v4l2_ioctl_ops {
>     >                                      struct v4l2_format *f);
>     >       int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh,
>     >                                      struct v4l2_format *f);
>     > +     int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
>     > +                                     struct v4l2_format *f);
>     > +     int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
>     > +                                     struct v4l2_format *f);
>     > 
>     >       /* Buffer handlers */
>     >       int (*vidioc_reqbufs)(struct file *file, void *fh,
>     > diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
>     > index aee75eb9e686..a7af28f4c8c3 100644
>     > --- a/include/uapi/linux/videodev2.h
>     > +++ b/include/uapi/linux/videodev2.h
>     > @@ -153,6 +153,8 @@ enum v4l2_buf_type {
>     >       V4L2_BUF_TYPE_SDR_OUTPUT           = 12,
>     >       V4L2_BUF_TYPE_META_CAPTURE         = 13,
>     >       V4L2_BUF_TYPE_META_OUTPUT          = 14,
>     > +     V4L2_BUF_TYPE_AUDIO_CAPTURE        = 15,
>     > +     V4L2_BUF_TYPE_AUDIO_OUTPUT         = 16,
>     >       /* Deprecated, do not use */
>     >       V4L2_BUF_TYPE_PRIVATE              = 0x80,
>     >  };
>     > @@ -169,6 +171,7 @@ enum v4l2_buf_type {
>     >        || (type) == V4L2_BUF_TYPE_VBI_OUTPUT                  \
>     >        || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT           \
>     >        || (type) == V4L2_BUF_TYPE_SDR_OUTPUT                  \
>     > +      || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT                \
>     >        || (type) == V4L2_BUF_TYPE_META_OUTPUT)
>     > 
>     >  #define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
>     > @@ -2404,6 +2407,20 @@ struct v4l2_meta_format {
>     >       __u32                           buffersize;
>     >  } __attribute__ ((packed));
>     > 
>     > +/**
>     > + * struct v4l2_audio_format - audio data format definition
>     > + * @rate:            sample rate
>     > + * @format:          sample format
>     > + * @channels:                channel numbers
>     > + * @buffersize:              maximum size in bytes required for data
>     > + */
>     > +struct v4l2_audio_format {
>     > +     __u32                           rate;
>     > +     __u32                           format;
>     > +     __u32                           channels;
>     > +     __u32                           buffersize;
>     > +} __attribute__ ((packed));
>     > +
>     >  /**
>     >   * struct v4l2_format - stream data format
>     >   * @type:    enum v4l2_buf_type; type of the data stream
>     > @@ -2412,6 +2429,7 @@ struct v4l2_meta_format {
>     >   * @win:     definition of an overlaid image
>     >   * @vbi:     raw VBI capture or output parameters
>     >   * @sliced:  sliced VBI capture or output parameters
>     > + * @audio:   definition of an audio format
>     >   * @raw_data:        placeholder for future extensions and custom formats
>     >   * @fmt:     union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
>     >   *           and @raw_data
>     > @@ -2426,6 +2444,7 @@ struct v4l2_format {
>     >               struct v4l2_sliced_vbi_format   sliced;  /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
>     >               struct v4l2_sdr_format          sdr;     /* V4L2_BUF_TYPE_SDR_CAPTURE */
>     >               struct v4l2_meta_format         meta;    /* V4L2_BUF_TYPE_META_CAPTURE */
>     > +             struct v4l2_audio_format        audio;   /* V4L2_BUF_TYPE_AUDIO_CAPTURE */
>     >               __u8    raw_data[200];                   /* user-defined */
>     >       } fmt;
>     >  };
>     > --
>     > 2.34.1
>     >
> 
>     -- 
>     Sakari Ailus
> 


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
@ 2023-07-03 10:04         ` Hans Verkuil
  0 siblings, 0 replies; 41+ messages in thread
From: Hans Verkuil @ 2023-07-03 10:04 UTC (permalink / raw)
  To: Shengjiu Wang, Sakari Ailus
  Cc: alsa-devel, mchehab, Jacopo Mondi, Xiubo.Lee, lgirdwood,
	Shengjiu Wang, tiwai, linux-kernel, tfiga, nicoleotsuka,
	linuxppc-dev, broonie, perex, linux-media, festevam,
	m.szyprowski

On 03/07/2023 11:54, Shengjiu Wang wrote:
> Hi Sakari
> 
> On Fri, Jun 30, 2023 at 6:05 PM Sakari Ailus <sakari.ailus@iki.fi <mailto:sakari.ailus@iki.fi>> wrote:
> 
>     Hi Shengjiu,
> 
>     On Thu, Jun 29, 2023 at 09:37:48AM +0800, Shengjiu Wang wrote:
>     > Audio signal processing has the requirement for memory to
>     > memory similar as Video.
>     >
>     > This patch is to add this support in v4l2 framework, defined
>     > new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
>     > V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
>     > for audio case usage.
> 
>     Why are you proposing to add this to V4L2 framework instead of doing this
>     within ALSA?
> 
>     Also cc Hans and Jacopo.
> 
> 
> There is no such memory to memory interface defined in ALSA.  Seems
> ALSA is not designed for M2M cases.
> 
> V4L2 is designed for video, radio, image, sdr, meta...,   so I think audio can be
> naturally added to the support scope.  

While I do not have an objection as such to supporting this as part of V4L2, I do
want to know whether the ALSA maintainers think it is OK as well before I spend
time on this.

In principle the V4L2 mem2mem framework doesn't really care what type of data
is processed; it is just a matter of adding audio types (or reusing them from ALSA,
which is presumably the intention here).
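
(As a rough sketch of what that would mean on the driver side: the my_* handlers
below are hypothetical placeholders, only the vidioc_*_audio_* hooks come from
this series, and the buffer ioctls are the stock v4l2-mem2mem helpers.)

#include <media/v4l2-ioctl.h>
#include <media/v4l2-mem2mem.h>

/* Hypothetical mem2mem audio driver: only the format handling is
 * audio specific, buffer handling is the generic mem2mem code. */
static const struct v4l2_ioctl_ops my_audio_m2m_ioctl_ops = {
	.vidioc_querycap		= my_querycap,

	/* new audio format callbacks added by this patch */
	.vidioc_enum_fmt_audio_cap	= my_enum_fmt_audio,
	.vidioc_enum_fmt_audio_out	= my_enum_fmt_audio,
	.vidioc_g_fmt_audio_cap		= my_g_fmt_audio_cap,
	.vidioc_g_fmt_audio_out		= my_g_fmt_audio_out,
	.vidioc_s_fmt_audio_cap		= my_s_fmt_audio_cap,
	.vidioc_s_fmt_audio_out		= my_s_fmt_audio_out,
	.vidioc_try_fmt_audio_cap	= my_try_fmt_audio_cap,
	.vidioc_try_fmt_audio_out	= my_try_fmt_audio_out,

	/* unchanged: generic mem2mem buffer handling */
	.vidioc_reqbufs			= v4l2_m2m_ioctl_reqbufs,
	.vidioc_querybuf		= v4l2_m2m_ioctl_querybuf,
	.vidioc_qbuf			= v4l2_m2m_ioctl_qbuf,
	.vidioc_dqbuf			= v4l2_m2m_ioctl_dqbuf,
	.vidioc_streamon		= v4l2_m2m_ioctl_streamon,
	.vidioc_streamoff		= v4l2_m2m_ioctl_streamoff,
};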

Regards,

	Hans

> 
> Thanks.
>  
> Best regards
> Shengjiu Wang
> 


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-07-03  9:54       ` Shengjiu Wang
@ 2023-07-03 12:07         ` Takashi Iwai
  -1 siblings, 0 replies; 41+ messages in thread
From: Takashi Iwai @ 2023-07-03 12:07 UTC (permalink / raw)
  To: Shengjiu Wang
  Cc: Sakari Ailus, Shengjiu Wang, tfiga, m.szyprowski, mchehab,
	linux-media, linux-kernel, Xiubo.Lee, festevam, nicoleotsuka,
	lgirdwood, broonie, perex, tiwai, alsa-devel, linuxppc-dev,
	hverkuil, Jacopo Mondi

On Mon, 03 Jul 2023 11:54:22 +0200,
Shengjiu Wang wrote:
> 
> 
> Hi Sakari
> 
> On Fri, Jun 30, 2023 at 6:05 PM Sakari Ailus <sakari.ailus@iki.fi> wrote:
> 
>     Hi Shengjiu,
>    
>     On Thu, Jun 29, 2023 at 09:37:48AM +0800, Shengjiu Wang wrote:
>     > Audio signal processing has the requirement for memory to
>     > memory similar as Video.
>     >
>     > This patch is to add this support in v4l2 framework, defined
>     > new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
>     > V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
>     > for audio case usage.
>    
>     Why are you proposing to add this to V4L2 framework instead of doing this
>     within ALSA?
>    
>     Also cc Hans and Jacopo.
> 
> There is no such memory to memory interface defined in ALSA.  Seems
> ALSA is not designed for M2M cases.

There is no restriction to implement memory-to-memory capture in ALSA
framework.  It'd be a matter of the setup of PCM capture source, and
you can create a corresponding kcontrol element to switch the mode or
assign a dedicated PCM substream, for example.  It's just that there
was little demand for that.
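
(As a minimal sketch of such a kcontrol: the control name, the my_chip type and
the m2m_mode flag are made up for illustration; snd_kcontrol_new,
snd_ctl_boolean_mono_info, snd_ctl_new1 and snd_ctl_add are the usual ALSA core
API.)

#include <sound/core.h>
#include <sound/control.h>

struct my_chip {
	struct snd_card *card;
	bool m2m_mode;		/* capture from memory instead of the line input */
};

static int my_m2m_mode_get(struct snd_kcontrol *kcontrol,
			   struct snd_ctl_elem_value *ucontrol)
{
	struct my_chip *chip = snd_kcontrol_chip(kcontrol);

	ucontrol->value.integer.value[0] = chip->m2m_mode;
	return 0;
}

static int my_m2m_mode_put(struct snd_kcontrol *kcontrol,
			   struct snd_ctl_elem_value *ucontrol)
{
	struct my_chip *chip = snd_kcontrol_chip(kcontrol);
	bool val = !!ucontrol->value.integer.value[0];

	if (chip->m2m_mode == val)
		return 0;
	chip->m2m_mode = val;	/* a real driver would reroute the capture source here */
	return 1;		/* value changed */
}

static const struct snd_kcontrol_new my_m2m_mode_ctl = {
	.iface	= SNDRV_CTL_ELEM_IFACE_MIXER,
	.name	= "Memory Capture Mode Switch",
	.info	= snd_ctl_boolean_mono_info,
	.get	= my_m2m_mode_get,
	.put	= my_m2m_mode_put,
};

/* in the probe path:
 *	err = snd_ctl_add(chip->card, snd_ctl_new1(&my_m2m_mode_ctl, chip));
 */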

I'm not much against adding the audio capture feature to V4L2,
though, if it really makes sense.  But creating a crafted /dev/audio*
doesn't look like a great idea to me, at least.


thanks,

Takashi

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-07-03 12:07         ` Takashi Iwai
@ 2023-07-03 12:53           ` Mark Brown
  -1 siblings, 0 replies; 41+ messages in thread
From: Mark Brown @ 2023-07-03 12:53 UTC (permalink / raw)
  To: Takashi Iwai
  Cc: Shengjiu Wang, Sakari Ailus, Shengjiu Wang, tfiga, m.szyprowski,
	mchehab, linux-media, linux-kernel, Xiubo.Lee, festevam,
	nicoleotsuka, lgirdwood, perex, tiwai, alsa-devel, linuxppc-dev,
	hverkuil, Jacopo Mondi

On Mon, Jul 03, 2023 at 02:07:10PM +0200, Takashi Iwai wrote:
> Shengjiu Wang wrote:

> > There is no such memory to memory interface defined in ALSA.  Seems
> > ALSA is not designed for M2M cases.

> There is no restriction to implement memory-to-memory capture in ALSA
> framework.  It'd be a matter of the setup of PCM capture source, and
> you can create a corresponding kcontrol element to switch the mode or
> assign a dedicated PCM substream, for example.  It's just that there
> was little demand for that.

Yeah, it's not a terrible idea.  We might use it more if we ever get
better support for DSP audio, routing between the DSP and external
devices if driven from the CPU would be a memory to memory thing.

> I'm not much against adding the audio capture feature to V4L2,
> though, if it really makes sense.  But creating a crafted /dev/audio*
> doesn't look like a great idea to me, at least.

I've still not looked at the code at all.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-07-03 12:53           ` Mark Brown
@ 2023-07-03 13:12             ` Hans Verkuil
  -1 siblings, 0 replies; 41+ messages in thread
From: Hans Verkuil @ 2023-07-03 13:12 UTC (permalink / raw)
  To: Mark Brown, Takashi Iwai
  Cc: Shengjiu Wang, Sakari Ailus, Shengjiu Wang, tfiga, m.szyprowski,
	mchehab, linux-media, linux-kernel, Xiubo.Lee, festevam,
	nicoleotsuka, lgirdwood, perex, tiwai, alsa-devel, linuxppc-dev,
	Jacopo Mondi

On 03/07/2023 14:53, Mark Brown wrote:
> On Mon, Jul 03, 2023 at 02:07:10PM +0200, Takashi Iwai wrote:
>> Shengjiu Wang wrote:
> 
>>> There is no such memory to memory interface defined in ALSA.  Seems
>>> ALSA is not designed for M2M cases.
> 
>> There is no restriction to implement memory-to-memory capture in ALSA
>> framework.  It'd be a matter of the setup of PCM capture source, and
>> you can create a corresponding kcontrol element to switch the mode or
>> assign a dedicated PCM substream, for example.  It's just that there
>> was little demand for that.
> 
> Yeah, it's not a terrible idea.  We might use it more if we ever get
> better support for DSP audio, routing between the DSP and external
> devices if driven from the CPU would be a memory to memory thing.
> 
>> I'm not much against adding the audio capture feature to V4L2,
>> though, if it really makes sense.  But creating a crafted /dev/audio*
>> doesn't look like a great idea to me, at least.
> 
> I've still not looked at the code at all.

My main concern is that these cross-subsystem drivers are a pain to
maintain. So there have to be good reasons to do this.

Also it is kind of weird to have to use the V4L2 API in userspace to
deal with a specific audio conversion. Quite unexpected.
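
(For illustration, that userspace flow would look roughly like the sketch below;
it assumes the uAPI additions from this series are present in
<linux/videodev2.h>, and the device node and the numeric sample-format code are
placeholders.)

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_format out = { .type = V4L2_BUF_TYPE_AUDIO_OUTPUT };
	struct v4l2_format cap = { .type = V4L2_BUF_TYPE_AUDIO_CAPTURE };
	int fd = open("/dev/audio0", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* OUTPUT queue: the source PCM the application feeds in */
	out.fmt.audio.rate = 44100;
	out.fmt.audio.channels = 2;
	out.fmt.audio.format = 2;	/* placeholder sample-format code */
	if (ioctl(fd, VIDIOC_S_FMT, &out) < 0) {
		perror("VIDIOC_S_FMT (output)");
		return 1;
	}

	/* CAPTURE queue: the converted PCM read back from the device */
	cap.fmt.audio.rate = 48000;
	cap.fmt.audio.channels = 2;
	cap.fmt.audio.format = 2;
	if (ioctl(fd, VIDIOC_S_FMT, &cap) < 0) {
		perror("VIDIOC_S_FMT (capture)");
		return 1;
	}

	printf("configured %u Hz -> %u Hz\n",
	       out.fmt.audio.rate, cap.fmt.audio.rate);
	/* VIDIOC_REQBUFS/QBUF/DQBUF/STREAMON would follow, as on any m2m device */
	return 0;
}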

But in the end, that's a decision I can't make.

So I wait for that feedback. Note that if the decision is made that this
can use V4L2, then there is quite a lot more that needs to be done:
documentation, new compliance tests, etc. It's adding a new API, and that
comes with additional work...

Regards,

	Hans

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-07-03 13:12             ` Hans Verkuil
@ 2023-07-03 13:25               ` Takashi Iwai
  -1 siblings, 0 replies; 41+ messages in thread
From: Takashi Iwai @ 2023-07-03 13:25 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Mark Brown, Shengjiu Wang, Sakari Ailus, Shengjiu Wang, tfiga,
	m.szyprowski, mchehab, linux-media, linux-kernel, Xiubo.Lee,
	festevam, nicoleotsuka, lgirdwood, perex, tiwai, alsa-devel,
	linuxppc-dev, Jacopo Mondi

On Mon, 03 Jul 2023 15:12:55 +0200,
Hans Verkuil wrote:
> 
> On 03/07/2023 14:53, Mark Brown wrote:
> > On Mon, Jul 03, 2023 at 02:07:10PM +0200, Takashi Iwai wrote:
> >> Shengjiu Wang wrote:
> > 
> >>> There is no such memory to memory interface defined in ALSA.  Seems
> >>> ALSA is not designed for M2M cases.
> > 
> >> There is no restriction to implement memory-to-memory capture in ALSA
> >> framework.  It'd be a matter of the setup of PCM capture source, and
> >> you can create a corresponding kcontrol element to switch the mode or
> >> assign a dedicated PCM substream, for example.  It's just that there
> >> was little demand for that.
> > 
> > Yeah, it's not a terrible idea.  We might use it more if we ever get
> > better support for DSP audio, routing between the DSP and external
> > devices if driven from the CPU would be a memory to memory thing.
> > 
> >> I'm not much against adding the audio capture feature to V4L2,
> >> though, if it really makes sense.  But creating a crafted /dev/audio*
> >> doesn't look like a great idea to me, at least.
> > 
> > I've still not looked at the code at all.
> 
> My main concern is that these cross-subsystem drivers are a pain to
> maintain. So there have to be good reasons to do this.
> 
> Also it is kind of weird to have to use the V4L2 API in userspace to
> deal with a specific audio conversion. Quite unexpected.
> 
> But in the end, that's a decision I can't make.
> 
> So I wait for that feedback. Note that if the decision is made that this
> can use V4L2, then there is quite a lot more that needs to be done:
> documentation, new compliance tests, etc. It's adding a new API, and that
> comes with additional work...

All agreed.  Especially in this case, it doesn't have to be in V4L2
API, as it seems.

(Though, the support of audio on V4L2 might be useful if it's closely
tied with a stream.  But that's another story.)


thanks,

Takashi

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-07-03 13:12             ` Hans Verkuil
@ 2023-07-03 17:58               ` Mark Brown
  -1 siblings, 0 replies; 41+ messages in thread
From: Mark Brown @ 2023-07-03 17:58 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Takashi Iwai, Shengjiu Wang, Sakari Ailus, Shengjiu Wang, tfiga,
	m.szyprowski, mchehab, linux-media, linux-kernel, Xiubo.Lee,
	festevam, nicoleotsuka, lgirdwood, perex, tiwai, alsa-devel,
	linuxppc-dev, Jacopo Mondi

On Mon, Jul 03, 2023 at 03:12:55PM +0200, Hans Verkuil wrote:

> My main concern is that these cross-subsystem drivers are a pain to
> maintain. So there have to be good reasons to do this.

> Also it is kind of weird to have to use the V4L2 API in userspace to
> deal with a specific audio conversion. Quite unexpected.

> But in the end, that's a decision I can't make.

> So I wait for that feedback. Note that if the decision is made that this
> can use V4L2, then there is quite a lot more that needs to be done:
> documentation, new compliance tests, etc. It's adding a new API, and that
> comes with additional work...

Absolutely, I agree with all of this - my impression was that the target
here would be bypass of audio streams to/from a v4l2 device, without
bouncing through an application layer.  If it's purely for audio usage
with no other tie to v4l2 then involving v4l2 does just seem like
complication.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-07-03 13:25               ` Takashi Iwai
@ 2023-07-04  3:55                 ` Shengjiu Wang
  -1 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-07-04  3:55 UTC (permalink / raw)
  To: Takashi Iwai
  Cc: Hans Verkuil, Mark Brown, Sakari Ailus, Shengjiu Wang, tfiga,
	m.szyprowski, mchehab, linux-media, linux-kernel, Xiubo.Lee,
	festevam, nicoleotsuka, lgirdwood, perex, tiwai, alsa-devel,
	linuxppc-dev, Jacopo Mondi

On Mon, Jul 3, 2023 at 9:25 PM Takashi Iwai <tiwai@suse.de> wrote:

> On Mon, 03 Jul 2023 15:12:55 +0200,
> Hans Verkuil wrote:
> >
> > On 03/07/2023 14:53, Mark Brown wrote:
> > > On Mon, Jul 03, 2023 at 02:07:10PM +0200, Takashi Iwai wrote:
> > >> Shengjiu Wang wrote:
> > >
> > >>> There is no such memory to memory interface defined in ALSA.  Seems
> > >>> ALSA is not designed for M2M cases.
> > >
> > >> There is no restriction to implement memory-to-memory capture in ALSA
> > >> framework.  It'd be a matter of the setup of PCM capture source, and
> > >> you can create a corresponding kcontrol element to switch the mode or
> > >> assign a dedicated PCM substream, for example.  It's just that there
> > >> was little demand for that.
> > >
> > > Yeah, it's not a terrible idea.  We might use it more if we ever get
> > > better support for DSP audio, routing between the DSP and external
> > > devices if driven from the CPU would be a memory to memory thing.
> > >
> > >> I'm not much against adding the audio capture feature to V4L2,
> > >> though, if it really makes sense.  But creating a crafted /dev/audio*
> > >> doesn't look like a great idea to me, at least.
> > >
> > > I've still not looked at the code at all.
> >
> > My main concern is that these cross-subsystem drivers are a pain to
> > maintain. So there have to be good reasons to do this.
> >
> > Also it is kind of weird to have to use the V4L2 API in userspace to
> > deal with a specific audio conversion. Quite unexpected.
> >
> > But in the end, that's a decision I can't make.
> >
> > So I wait for that feedback. Note that if the decision is made that this
> > can use V4L2, then there is quite a lot more that needs to be done:
> > documentation, new compliance tests, etc. It's adding a new API, and that
> > comes with additional work...
>
> All agreed.  Especially in this case, it doesn't have to be in V4L2
> API, as it seems.
>
> (Though, the support of audio on V4L2 might be useful if it's closely
> tied with a stream.  But that's another story.)
>

Audio is a stream, and for this m2m audio case V4L2 is the best choice
I have found.

I know this means an API change for V4L2, but V4L2 is a good framework
for memory to memory, so I think the change is worth doing.

If we implement this M2M case in ALSA, we may need to create a sound
card and open it twice, once for playback and once for capture. That is
complicated to do, and I am not sure whether there are other issues
besides these; I can't find an example in the ALSA framework for this case.

best regards
wang shengjiu

>
>
> thanks,
>
> Takashi
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
@ 2023-07-04  3:55                 ` Shengjiu Wang
  0 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-07-04  3:55 UTC (permalink / raw)
  To: Takashi Iwai
  Cc: nicoleotsuka, alsa-devel, lgirdwood, Jacopo Mondi, Xiubo.Lee,
	linux-kernel, Shengjiu Wang, tiwai, linux-media, tfiga,
	Hans Verkuil, linuxppc-dev, Mark Brown, Sakari Ailus, perex,
	mchehab, festevam, m.szyprowski

[-- Attachment #1: Type: text/plain, Size: 2655 bytes --]

On Mon, Jul 3, 2023 at 9:25 PM Takashi Iwai <tiwai@suse.de> wrote:

> On Mon, 03 Jul 2023 15:12:55 +0200,
> Hans Verkuil wrote:
> >
> > On 03/07/2023 14:53, Mark Brown wrote:
> > > On Mon, Jul 03, 2023 at 02:07:10PM +0200, Takashi Iwai wrote:
> > >> Shengjiu Wang wrote:
> > >
> > >>> There is no such memory to memory interface defined in ALSA.  Seems
> > >>> ALSA is not designed for M2M cases.
> > >
> > >> There is no restriction to implement memory-to-memory capture in ALSA
> > >> framework.  It'd be a matter of the setup of PCM capture source, and
> > >> you can create a corresponding kcontrol element to switch the mode or
> > >> assign a dedicated PCM substream, for example.  It's just that there
> > >> was little demand for that.
> > >
> > > Yeah, it's not a terrible idea.  We might use it more if we ever get
> > > better support for DSP audio, routing between the DSP and external
> > > devices if driven from the CPU would be a memory to memory thing.
> > >
> > >> I'm not much against adding the audio capture feature to V4L2,
> > >> though, if it really makes sense.  But creating a crafted /dev/audio*
> > >> doesn't look like a great idea to me, at least.
> > >
> > > I've still not looked at the code at all.
> >
> > My main concern is that these cross-subsystem drivers are a pain to
> > maintain. So there have to be good reasons to do this.
> >
> > Also it is kind of weird to have to use the V4L2 API in userspace to
> > deal with a specific audio conversion. Quite unexpected.
> >
> > But in the end, that's a decision I can't make.
> >
> > So I wait for that feedback. Note that if the decision is made that this
> > can use V4L2, then there is quite a lot more that needs to be done:
> > documentation, new compliance tests, etc. It's adding a new API, and that
> > comes with additional work...
>
> All agreed.  Especially in this case, it doesn't have to be in V4L2
> API, as it seems.
>
> (Though, the support of audio on V4L2 might be useful if it's closely
> tied with the a stream.  But that's another story.)
>

Audio is a stream, and for this M2M audio case V4L2 is the best fit
I have found.

I know this means an API change for V4L2, but V4L2 is a good framework
for memory-to-memory processing, so I think the change is worth making.

If we implement this M2M case in ALSA, we may need to create a sound
card and open it twice, once for playback and once for capture, which
is complicated, and I am not sure what other issues that might bring.
I could not find an example of such a case in the ALSA framework.

best regards
wang shengjiu

>
>
> thanks,
>
> Takashi
>

[-- Attachment #2: Type: text/html, Size: 3611 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-07-03 17:58               ` Mark Brown
@ 2023-07-04  4:03                 ` Shengjiu Wang
  -1 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-07-04  4:03 UTC (permalink / raw)
  To: Mark Brown
  Cc: Hans Verkuil, Takashi Iwai, Sakari Ailus, Shengjiu Wang, tfiga,
	m.szyprowski, mchehab, linux-media, linux-kernel, Xiubo.Lee,
	festevam, nicoleotsuka, lgirdwood, perex, tiwai, alsa-devel,
	linuxppc-dev, Jacopo Mondi

On Tue, Jul 4, 2023 at 1:59 AM Mark Brown <broonie@kernel.org> wrote:

> On Mon, Jul 03, 2023 at 03:12:55PM +0200, Hans Verkuil wrote:
>
> > My main concern is that these cross-subsystem drivers are a pain to
> > maintain. So there have to be good reasons to do this.
>
> > Also it is kind of weird to have to use the V4L2 API in userspace to
> > deal with a specific audio conversion. Quite unexpected.
>
> > But in the end, that's a decision I can't make.
>
> > So I wait for that feedback. Note that if the decision is made that this
> > can use V4L2, then there is quite a lot more that needs to be done:
> > documentation, new compliance tests, etc. It's adding a new API, and that
> > comes with additional work...
>
> Absolutely, I agree with all of this - my impression was that the target
> here would be bypass of audio streams to/from a v4l2 device, without
> bouncing through an application layer.  If it's purely for audio usage
> with no other tie to v4l2 then involving v4l2 does just seem like
> complication.
>

This audio use case does go through the v4l2 application layer. In user
space I need to call the v4l2 ioctls below to implement the feature (a
minimal sketch of the sequence follows the list):
VIDIOC_QUERYCAP
VIDIOC_TRY_FMT
VIDIOC_S_FMT
VIDIOC_REQBUFS
VIDIOC_QUERYBUF
VIDIOC_STREAMON
VIDIOC_QBUF
VIDIOC_DQBUF
VIDIOC_STREAMOFF
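
For reference, here is a rough userspace sketch of that sequence against
the proposed audio M2M device. It uses the new buffer types from this
series, so it only builds on a tree with these patches applied; the
device node name, buffer count and the audio format fields are
placeholders rather than confirmed API, and error handling is trimmed.

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

static int setup_queue(int fd, int type)
{
	struct v4l2_format fmt = { .type = type };
	struct v4l2_requestbuffers req = {
		.type = type, .memory = V4L2_MEMORY_MMAP, .count = 4,
	};

	/* the audio fields of fmt.fmt (rate, sample format, channels)
	 * would be filled in per the proposed v4l2_audio_format */
	if (ioctl(fd, VIDIOC_TRY_FMT, &fmt) || ioctl(fd, VIDIOC_S_FMT, &fmt))
		return -1;
	return ioctl(fd, VIDIOC_REQBUFS, &req);
}

int main(void)
{
	struct v4l2_capability cap;
	struct v4l2_buffer buf;
	int out = V4L2_BUF_TYPE_AUDIO_OUTPUT;     /* source samples in */
	int capt = V4L2_BUF_TYPE_AUDIO_CAPTURE;   /* converted samples out */
	int fd = open("/dev/audio0", O_RDWR);

	ioctl(fd, VIDIOC_QUERYCAP, &cap);
	setup_queue(fd, out);
	setup_queue(fd, capt);

	/* VIDIOC_QUERYBUF and mmap() of each buffer omitted for brevity */

	ioctl(fd, VIDIOC_STREAMON, &out);
	ioctl(fd, VIDIOC_STREAMON, &capt);

	/* one conversion step: queue a filled source buffer and an empty
	 * destination buffer, then dequeue both when the hardware is done */
	memset(&buf, 0, sizeof(buf));
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = 0;
	buf.type = out;
	ioctl(fd, VIDIOC_QBUF, &buf);
	buf.type = capt;
	ioctl(fd, VIDIOC_QBUF, &buf);
	ioctl(fd, VIDIOC_DQBUF, &buf);            /* converted data */
	buf.type = out;
	ioctl(fd, VIDIOC_DQBUF, &buf);            /* source buffer back */

	ioctl(fd, VIDIOC_STREAMOFF, &out);
	ioctl(fd, VIDIOC_STREAMOFF, &capt);
	close(fd);
	return 0;
}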

As for why the driver was put in ALSA: we previously implemented the
ASRC M2P (memory to peripheral) support in ALSA, so I think it is
better to add the M2M driver in ALSA as well. The hardware IP is the
same and the compatible string is the same.

Best regards
Wang Shengjiu

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
@ 2023-07-04  4:03                 ` Shengjiu Wang
  0 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-07-04  4:03 UTC (permalink / raw)
  To: Mark Brown
  Cc: nicoleotsuka, alsa-devel, lgirdwood, Jacopo Mondi, Xiubo.Lee,
	Takashi Iwai, linux-kernel, Shengjiu Wang, tiwai, linux-media,
	tfiga, Hans Verkuil, linuxppc-dev, Sakari Ailus, perex, mchehab,
	festevam, m.szyprowski

[-- Attachment #1: Type: text/plain, Size: 1602 bytes --]

On Tue, Jul 4, 2023 at 1:59 AM Mark Brown <broonie@kernel.org> wrote:

> On Mon, Jul 03, 2023 at 03:12:55PM +0200, Hans Verkuil wrote:
>
> > My main concern is that these cross-subsystem drivers are a pain to
> > maintain. So there have to be good reasons to do this.
>
> > Also it is kind of weird to have to use the V4L2 API in userspace to
> > deal with a specific audio conversion. Quite unexpected.
>
> > But in the end, that's a decision I can't make.
>
> > So I wait for that feedback. Note that if the decision is made that this
> > can use V4L2, then there is quite a lot more that needs to be done:
> > documentation, new compliance tests, etc. It's adding a new API, and that
> > comes with additional work...
>
> Absolutely, I agree with all of this - my impression was that the target
> here would be bypass of audio streams to/from a v4l2 device, without
> bouncing through an application layer.  If it's purely for audio usage
> with no other tie to v4l2 then involving v4l2 does just seem like
> complication.
>

This audio use case does go through the v4l2 application layer. In user
space I need to call the v4l2 ioctls below to implement the feature:
VIDIOC_QUERYCAP
VIDIOC_TRY_FMT
VIDIOC_S_FMT
VIDIOC_REQBUFS
VIDIOC_QUERYBUF
VIDIOC_STREAMON
VIDIOC_QBUF
VIDIOC_DQBUF
VIDIOC_STREAMOFF

As for why the driver was put in ALSA: we previously implemented the
ASRC M2P (memory to peripheral) support in ALSA, so I think it is
better to add the M2M driver in ALSA as well. The hardware IP is the
same and the compatible string is the same.

Best regards
Wang Shengjiu

[-- Attachment #2: Type: text/html, Size: 2281 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-07-04  4:03                 ` Shengjiu Wang
@ 2023-07-07  3:13                   ` Shengjiu Wang
  -1 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-07-07  3:13 UTC (permalink / raw)
  To: Mark Brown
  Cc: Hans Verkuil, Takashi Iwai, Sakari Ailus, Shengjiu Wang, tfiga,
	m.szyprowski, mchehab, linux-media, linux-kernel, Xiubo.Lee,
	festevam, nicoleotsuka, lgirdwood, perex, tiwai, alsa-devel,
	linuxppc-dev, Jacopo Mondi

Hi Mark

On Tue, Jul 4, 2023 at 12:03 PM Shengjiu Wang <shengjiu.wang@gmail.com>
wrote:

>
>
> On Tue, Jul 4, 2023 at 1:59 AM Mark Brown <broonie@kernel.org> wrote:
>
>> On Mon, Jul 03, 2023 at 03:12:55PM +0200, Hans Verkuil wrote:
>>
>> > My main concern is that these cross-subsystem drivers are a pain to
>> > maintain. So there have to be good reasons to do this.
>>
>> > Also it is kind of weird to have to use the V4L2 API in userspace to
>> > deal with a specific audio conversion. Quite unexpected.
>>
>> > But in the end, that's a decision I can't make.
>>
>> > So I wait for that feedback. Note that if the decision is made that this
>> > can use V4L2, then there is quite a lot more that needs to be done:
>> > documentation, new compliance tests, etc. It's adding a new API, and
>> that
>> > comes with additional work...
>>
>> Absolutely, I agree with all of this - my impression was that the target
>> here would be bypass of audio streams to/from a v4l2 device, without
>> bouncing through an application layer.  If it's purely for audio usage
>> with no other tie to v4l2 then involving v4l2 does just seem like
>> complication.
>>
>
> This audio use case is using the v4l2 application layer. in the user space
> I need to call below v4l2 ioctls to implement the feature:
> VIDIOC_QUERYCAP
> VIDIOC_TRY_FMT
> VIDIOC_S_FMT
> VIDIOC_REQBUFS
> VIDIOC_QUERYBUF
> VIDIOC_STREAMON
> VIDIOC_QBUF
> VIDIOC_DQBUF
> VIDIOC_STREAMOFF
>
> why the driver was put in the ALSA, because previously we implemented
> the ASRC M2P (memory to peripheral) in ALSA,  so I think it is better to
> add M2M driver in ALSA.  The hardware IP is the same. The compatible
> string is the same.
>
>
Could you please share more of your ideas about this patch, and could
you please look further into this implementation?

I tried to find a good interface in ALSA for this M2M request but did
not find one; then I tried V4L2 and found it fits this audio case well.

It does require extending the V4L2 API, though.

I am not sure how to proceed; could you please advise?

best regards
wang shengjiu

>
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
@ 2023-07-07  3:13                   ` Shengjiu Wang
  0 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-07-07  3:13 UTC (permalink / raw)
  To: Mark Brown
  Cc: nicoleotsuka, alsa-devel, lgirdwood, Jacopo Mondi, Xiubo.Lee,
	Takashi Iwai, linux-kernel, Shengjiu Wang, tiwai, linux-media,
	tfiga, Hans Verkuil, linuxppc-dev, Sakari Ailus, perex, mchehab,
	festevam, m.szyprowski

[-- Attachment #1: Type: text/plain, Size: 2147 bytes --]

Hi Mark

On Tue, Jul 4, 2023 at 12:03 PM Shengjiu Wang <shengjiu.wang@gmail.com>
wrote:

>
>
> On Tue, Jul 4, 2023 at 1:59 AM Mark Brown <broonie@kernel.org> wrote:
>
>> On Mon, Jul 03, 2023 at 03:12:55PM +0200, Hans Verkuil wrote:
>>
>> > My main concern is that these cross-subsystem drivers are a pain to
>> > maintain. So there have to be good reasons to do this.
>>
>> > Also it is kind of weird to have to use the V4L2 API in userspace to
>> > deal with a specific audio conversion. Quite unexpected.
>>
>> > But in the end, that's a decision I can't make.
>>
>> > So I wait for that feedback. Note that if the decision is made that this
>> > can use V4L2, then there is quite a lot more that needs to be done:
>> > documentation, new compliance tests, etc. It's adding a new API, and
>> that
>> > comes with additional work...
>>
>> Absolutely, I agree with all of this - my impression was that the target
>> here would be bypass of audio streams to/from a v4l2 device, without
>> bouncing through an application layer.  If it's purely for audio usage
>> with no other tie to v4l2 then involving v4l2 does just seem like
>> complication.
>>
>
> This audio use case is using the v4l2 application layer. in the user space
> I need to call below v4l2 ioctls to implement the feature:
> VIDIOC_QUERYCAP
> VIDIOC_TRY_FMT
> VIDIOC_S_FMT
> VIDIOC_REQBUFS
> VIDIOC_QUERYBUF
> VIDIOC_STREAMON
> VIDIOC_QBUF
> VIDIOC_DQBUF
> VIDIOC_STREAMOFF
>
> why the driver was put in the ALSA, because previously we implemented
> the ASRC M2P (memory to peripheral) in ALSA,  so I think it is better to
> add M2M driver in ALSA.  The hardware IP is the same. The compatible
> string is the same.
>
>
Could you please share more of your ideas about this patch, and could
you please look further into this implementation?

I tried to find a good interface in ALSA for this M2M request but did
not find one; then I tried V4L2 and found it fits this audio case well.

It does require extending the V4L2 API, though.

I am not sure how to proceed; could you please advise?

best regards
wang shengjiu

>
>

[-- Attachment #2: Type: text/html, Size: 3336 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
  2023-07-07  3:13                   ` Shengjiu Wang
@ 2023-07-19  9:15                     ` Shengjiu Wang
  -1 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-07-19  9:15 UTC (permalink / raw)
  To: Mark Brown
  Cc: Hans Verkuil, Takashi Iwai, Sakari Ailus, Shengjiu Wang, tfiga,
	m.szyprowski, mchehab, linux-media, linux-kernel, Xiubo.Lee,
	festevam, nicoleotsuka, lgirdwood, perex, tiwai, alsa-devel,
	linuxppc-dev, Jacopo Mondi

Hi Mark

On Fri, Jul 7, 2023 at 11:13 AM Shengjiu Wang <shengjiu.wang@gmail.com>
wrote:

> Hi Mark
>
> On Tue, Jul 4, 2023 at 12:03 PM Shengjiu Wang <shengjiu.wang@gmail.com>
> wrote:
>
>>
>>
>> On Tue, Jul 4, 2023 at 1:59 AM Mark Brown <broonie@kernel.org> wrote:
>>
>>> On Mon, Jul 03, 2023 at 03:12:55PM +0200, Hans Verkuil wrote:
>>>
>>> > My main concern is that these cross-subsystem drivers are a pain to
>>> > maintain. So there have to be good reasons to do this.
>>>
>>> > Also it is kind of weird to have to use the V4L2 API in userspace to
>>> > deal with a specific audio conversion. Quite unexpected.
>>>
>>> > But in the end, that's a decision I can't make.
>>>
>>> > So I wait for that feedback. Note that if the decision is made that
>>> this
>>> > can use V4L2, then there is quite a lot more that needs to be done:
>>> > documentation, new compliance tests, etc. It's adding a new API, and
>>> that
>>> > comes with additional work...
>>>
>>> Absolutely, I agree with all of this - my impression was that the target
>>> here would be bypass of audio streams to/from a v4l2 device, without
>>> bouncing through an application layer.  If it's purely for audio usage
>>> with no other tie to v4l2 then involving v4l2 does just seem like
>>> complication.
>>>
>>
>> This audio use case is using the v4l2 application layer. in the user space
>> I need to call below v4l2 ioctls to implement the feature:
>> VIDIOC_QUERYCAP
>> VIDIOC_TRY_FMT
>> VIDIOC_S_FMT
>> VIDIOC_REQBUFS
>> VIDIOC_QUERYBUF
>> VIDIOC_STREAMON
>> VIDIOC_QBUF
>> VIDIOC_DQBUF
>> VIDIOC_STREAMOFF
>>
>> why the driver was put in the ALSA, because previously we implemented
>> the ASRC M2P (memory to peripheral) in ALSA,  so I think it is better to
>> add M2M driver in ALSA.  The hardware IP is the same. The compatible
>> string is the same.
>>
>>
>> Could you please share more of your ideas about this patch? and could
> you please check further about this implementation.
>
> I tried to find a good interface in ALSA for this m2m request, but didn't
> find one,  then I try the V4L2, find it is good this audio case.
>
> but it needs to extend the V4L2 API.
>
> I have no idea how to go on, could you please recommend?
>
>
Should I implement the ASRC M2M driver as a separate V4L2 driver and
move it to the drivers/media folder? The ALSA part would then only
need to register the platform device.

The bridge between the ALSA and V4L2 frameworks could be a header file
under include/sound/.

Would that be a better approach?
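
For illustration, here is a rough sketch of that split, with
hypothetical names (this is not code from the patches): the ASRC ALSA
driver registers a child platform device, a driver under drivers/media/
binds to it by name and exposes the V4L2 M2M interface, and the shared
structure lives in a header under include/sound/.

#include <linux/err.h>
#include <linux/platform_device.h>

/* would live in a shared header under include/sound/ */
struct asrc_m2m_pdata {
	void *asrc_priv;	/* handle back into the ALSA ASRC driver */
};

/* called from the ASRC ALSA driver's probe(); the matching
 * platform_device_unregister() in remove() is omitted here */
static int asrc_register_m2m_child(struct device *parent, void *asrc_priv)
{
	struct asrc_m2m_pdata pdata = { .asrc_priv = asrc_priv };
	struct platform_device *pdev;

	pdev = platform_device_register_data(parent, "fsl_asrc_m2m",
					     PLATFORM_DEVID_AUTO,
					     &pdata, sizeof(pdata));
	return PTR_ERR_OR_ZERO(pdev);
}

/* the drivers/media side then declares a platform_driver with
 * .driver.name = "fsl_asrc_m2m", and its probe() registers the
 * /dev/audioX video device using the data passed above */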

Best regards
Wang Shengjiu

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH 1/6] media: v4l2: Add audio capture and output support
@ 2023-07-19  9:15                     ` Shengjiu Wang
  0 siblings, 0 replies; 41+ messages in thread
From: Shengjiu Wang @ 2023-07-19  9:15 UTC (permalink / raw)
  To: Mark Brown
  Cc: nicoleotsuka, alsa-devel, lgirdwood, Jacopo Mondi, Xiubo.Lee,
	Takashi Iwai, linux-kernel, Shengjiu Wang, tiwai, linux-media,
	tfiga, Hans Verkuil, linuxppc-dev, Sakari Ailus, perex, mchehab,
	festevam, m.szyprowski

[-- Attachment #1: Type: text/plain, Size: 2592 bytes --]

Hi Mark

On Fri, Jul 7, 2023 at 11:13 AM Shengjiu Wang <shengjiu.wang@gmail.com>
wrote:

> Hi Mark
>
> On Tue, Jul 4, 2023 at 12:03 PM Shengjiu Wang <shengjiu.wang@gmail.com>
> wrote:
>
>>
>>
>> On Tue, Jul 4, 2023 at 1:59 AM Mark Brown <broonie@kernel.org> wrote:
>>
>>> On Mon, Jul 03, 2023 at 03:12:55PM +0200, Hans Verkuil wrote:
>>>
>>> > My main concern is that these cross-subsystem drivers are a pain to
>>> > maintain. So there have to be good reasons to do this.
>>>
>>> > Also it is kind of weird to have to use the V4L2 API in userspace to
>>> > deal with a specific audio conversion. Quite unexpected.
>>>
>>> > But in the end, that's a decision I can't make.
>>>
>>> > So I wait for that feedback. Note that if the decision is made that
>>> this
>>> > can use V4L2, then there is quite a lot more that needs to be done:
>>> > documentation, new compliance tests, etc. It's adding a new API, and
>>> that
>>> > comes with additional work...
>>>
>>> Absolutely, I agree with all of this - my impression was that the target
>>> here would be bypass of audio streams to/from a v4l2 device, without
>>> bouncing through an application layer.  If it's purely for audio usage
>>> with no other tie to v4l2 then involving v4l2 does just seem like
>>> complication.
>>>
>>
>> This audio use case is using the v4l2 application layer. in the user space
>> I need to call below v4l2 ioctls to implement the feature:
>> VIDIOC_QUERYCAP
>> VIDIOC_TRY_FMT
>> VIDIOC_S_FMT
>> VIDIOC_REQBUFS
>> VIDIOC_QUERYBUF
>> VIDIOC_STREAMON
>> VIDIOC_QBUF
>> VIDIOC_DQBUF
>> VIDIOC_STREAMOFF
>>
>> why the driver was put in the ALSA, because previously we implemented
>> the ASRC M2P (memory to peripheral) in ALSA,  so I think it is better to
>> add M2M driver in ALSA.  The hardware IP is the same. The compatible
>> string is the same.
>>
>>
>> Could you please share more of your ideas about this patch? and could
> you please check further about this implementation.
>
> I tried to find a good interface in ALSA for this m2m request, but didn't
> find one,  then I try the V4L2, find it is good this audio case.
>
> but it needs to extend the V4L2 API.
>
> I have no idea how to go on, could you please recommend?
>
>
Should I implement the ASRC M2M driver as a separate V4L2 driver and
move it to the drivers/media folder? The ALSA part would then only
need to register the platform device.

The bridge between the ALSA and V4L2 frameworks could be a header file
under include/sound/.

Would that be a better approach?

Best regards
Wang Shengjiu

[-- Attachment #2: Type: text/html, Size: 3920 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2023-07-19 14:01 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-29  1:37 [PATCH 0/6] Add audio support in v4l2 framework Shengjiu Wang
2023-06-29  1:37 ` [PATCH 1/6] media: v4l2: Add audio capture and output support Shengjiu Wang
2023-06-30 10:05   ` Sakari Ailus
2023-06-30 10:05     ` Sakari Ailus
2023-07-03  9:54     ` Shengjiu Wang
2023-07-03  9:54       ` Shengjiu Wang
2023-07-03 10:04       ` Hans Verkuil
2023-07-03 10:04         ` Hans Verkuil
2023-07-03 12:07       ` Takashi Iwai
2023-07-03 12:07         ` Takashi Iwai
2023-07-03 12:53         ` Mark Brown
2023-07-03 12:53           ` Mark Brown
2023-07-03 13:12           ` Hans Verkuil
2023-07-03 13:12             ` Hans Verkuil
2023-07-03 13:25             ` Takashi Iwai
2023-07-03 13:25               ` Takashi Iwai
2023-07-04  3:55               ` Shengjiu Wang
2023-07-04  3:55                 ` Shengjiu Wang
2023-07-03 17:58             ` Mark Brown
2023-07-03 17:58               ` Mark Brown
2023-07-04  4:03               ` Shengjiu Wang
2023-07-04  4:03                 ` Shengjiu Wang
2023-07-07  3:13                 ` Shengjiu Wang
2023-07-07  3:13                   ` Shengjiu Wang
2023-07-19  9:15                   ` Shengjiu Wang
2023-07-19  9:15                     ` Shengjiu Wang
2023-06-29  1:37 ` [PATCH 2/6] ASoC: fsl_asrc: define functions for memory to memory usage Shengjiu Wang
2023-06-29  1:37 ` [PATCH 3/6] ASoC: fsl_easrc: " Shengjiu Wang
2023-06-29 10:59   ` Fabio Estevam
2023-06-29 10:59     ` Fabio Estevam
2023-06-30  3:23     ` Shengjiu Wang
2023-06-30  3:23       ` Shengjiu Wang
2023-06-29  1:37 ` [PATCH 4/6] ASoC: fsl_asrc: Add memory to memory driver Shengjiu Wang
2023-06-29 11:38   ` Mark Brown
2023-06-29 11:38     ` Mark Brown
2023-06-30  3:37     ` Shengjiu Wang
2023-06-30  3:37       ` Shengjiu Wang
2023-06-30 11:22       ` Mark Brown
2023-06-30 11:22         ` Mark Brown
2023-06-29  1:37 ` [PATCH 5/6] ASoC: fsl_asrc: enable memory to memory function Shengjiu Wang
2023-06-29  1:37 ` [PATCH 6/6] ASoC: fsl_easrc: " Shengjiu Wang
